Posts

Tabooing Science + an xkcd comic about the eclipse - "Honestly, it's not that scientific." 2017-08-16T15:19:31.545Z
Under the eyes of your betters 2013-10-06T19:25:09.321Z
[LINK] '3 Secrets of Wise Decision Making' 2012-04-20T08:53:48.671Z
The Blue School 2012-04-16T18:15:53.428Z
Doing "Nothing" 2012-03-29T23:17:40.608Z
Meaning and having names for things vs knowing how they work 2012-03-18T19:25:05.666Z
Meta Addiction 2012-03-15T04:58:14.698Z
A Rationality Lab Notebook/Workbook/Vade Mecum 2012-03-06T20:52:14.470Z

Comments

Comment by Voltairina on Inconsistent Beliefs and Charitable Giving · 2017-09-08T20:36:49.470Z · LW · GW

I think our maps of these scenarios can be a bit limited. I think you have to model yourself in a world where you are also a person who has needs which have to be advocated for and accounted for. In particular you have to think: I have access to or control over these resources, which I can turn to these needs, and my sphere of control depends on things like my psychological state, how well rested I am, how much I know, what skills I have, what tools I have, etc., and I can also sometimes spend those resources learning, buying, and so on. And all that's true of everyone else, too - they are in a world where they may have to advocate for themselves to an extent, or where some may be impaired or better able than most to do that. If you're waited on hand and foot, you may be able to afford to pour more of your 'all' into benevolent behavior - if other people are making sure you sleep and feeding you on time and everything...

Comment by Voltairina on What is Rational? · 2017-08-25T22:32:04.776Z · LW · GW

It's funny - I think this is probably always true as a guideline (that you should try to justify all your ideas) but might always break down in practice (all your ideas probably can't ever be fully justified, because of Agrippa's trilemma: they're either justified in terms of each other or not justified, and if they are justified in terms of other ideas, they eventually are either circularly justified, continue into infinite regress, or are justified by things that are themselves unjustified). We might gain some ground by separating ideas from evidence, and say we accept as axiomatic anything that is evidenced by inference until we gain additional facts that lend context that resituates our model so that it can include previous observations... something like that. Or it might be we just have to grandfather in some rules to avoid that Gödelian stuff. Thoughts?

Comment by Voltairina on The Reality of Emergence · 2017-08-22T18:50:20.154Z · LW · GW

I think you're right. I also think saying 'x is emergent' may sound more magical than it is, if I am understanding emergence right, depending on your understanding of it. It doesn't mean that the higher-scale phenomenon isn't /made up of/ lower-level phenomena, but that it isn't (like a homunculus) itself present at any level smaller than that one. A robot hopping-kangaroo toy needs both a body and legs. The hopping behavior isn't contained in the body - that just rotates a joint. The hopping behavior isn't contained in the legs - those just have a joint that can connect to the body's joint. It's only when the two bits are plugged into each other that the 'hopping' behavior 'emerges' from the torso-legs system. It's not coming from any essential 'hoppiness' in the legs or the torso. I think it can seem a bit magical because it can sound like the behavior just 'appears' at a certain point, but it's no more magical than a picture of a tiger 'appearing' from a bunch of pixels. Only we're talking about names for systems of functions (hopping is made of the leg and torso behaviors and their interaction with the ground and such) more than names for systems of objects (a tiger picture is made up of lines and corners, which are made of pixels). In some sense 'tigers' and 'hopping' don't really exist - just pixels (or atoms or whatever) and particle interactions. But we have names for systems of objects, and systems of functions, because those names are useful.
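The torso-legs point can be put as a toy composition. This is purely illustrative - the classes and names here are invented for the example, not anything from the original discussion:

```python
class Torso:
    """Only knows how to rotate its joint - no 'hoppiness' in here."""
    def rotate_joint(self):
        return "rotation"

class Legs:
    """Only converts a driving rotation into thrust - no 'hoppiness' here either."""
    def flex(self, drive):
        return drive == "rotation"

def hop(torso, legs):
    """'Hopping' is defined only at the level of the combined system."""
    return "hop!" if legs.flex(torso.rotate_joint()) else "nothing"

print(hop(Torso(), Legs()))  # -> hop!
```

Neither class has a `hop` method; the behavior is a name for what the composed system does, just as 'tiger' is a name for what the composed pixels show.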

Comment by Voltairina on Fiction Considered Harmful · 2015-10-13T20:57:07.485Z · LW · GW

From what I've read, the proposed mechanism behind literary fiction enhancing empathy is that it describes the emotions of the characters in a vague or indirect way, and working out their actual psychological character becomes plot-relevant. This was distinct from genre fiction, where the results were less obvious. So the 'good guys are always rewarded' bit, which is prevalent in genre fiction, doesn't seem like the best explanation for the effect. It could be compared to an extended story problem about empathy - at least as far as predicting motives and emotions.

Comment by Voltairina on Test Driven Thinking · 2015-07-26T05:18:05.448Z · LW · GW

That seems like a job for an expert system - using formal reasoning from premises (as long as you can translate them comfortably into symbols) and identifying whether a new fact contradicts any old fact...
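A minimal sketch of that kind of consistency check, assuming facts can be reduced to (proposition, truth value) pairs - the representation and function names here are invented for illustration, not any particular expert-system framework:

```python
def contradicts(knowledge_base, fact):
    """True if the knowledge base already holds the negation of this fact."""
    proposition, value = fact
    return knowledge_base.get(proposition) == (not value)

def assert_fact(knowledge_base, fact):
    """Add a fact, refusing it if it contradicts what we already believe."""
    if contradicts(knowledge_base, fact):
        raise ValueError(f"contradiction: {fact[0]}")
    proposition, value = fact
    knowledge_base[proposition] = value

kb = {}
assert_fact(kb, ("socrates_is_mortal", True))
print(contradicts(kb, ("socrates_is_mortal", False)))  # -> True
```

Real expert systems go much further (rule chaining, inference over quantified statements), but the core "does this new fact clash with the old ones?" test is this simple at the propositional level.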

Comment by Voltairina on Human Minds are Fragile · 2015-02-13T06:22:48.597Z · LW · GW

Not to mention tampering with it, or allowing it to tamper with itself, might have all kinds of unforeseen consequences. To me it's like: here is a whole lot of evolutionary software that does all this elegant stuff a lot of the time... but has never been unit tested.

Comment by Voltairina on Leave a Line of Retreat · 2014-10-15T00:46:08.004Z · LW · GW

That reminds me of Hofstadter's Law: "It always takes longer than you expect, even when you take into account Hofstadter's Law."

Comment by Voltairina on Leave a Line of Retreat · 2014-10-14T14:13:04.828Z · LW · GW

Well, a world that lacked rationality might be one in which all the events were a sequence of non sequiturs. A car drives down the street. Then disappears. We are in a movie theater with a tyrannosaurus. Now we are a snail on the moon. Then there's just this poster of rocks. Then I can't remember what sight was like, but there's jazz music. Now I fondly remember fighting in World War II, while evading the Empire with Han Solo. Oh! I think I might be boiling water, but with a sense of smell somehow... that's a poor job of describing it - too much familiar stuff - but you get the idea. If there were no connection between one state of affairs and the next, talking about what strategy to take might be impossible, or a brief possibility that then disappears when you forget what you are doing and you're back in the movie theater again with the tyrannosaurus. That's if 'you' is even a meaningful way to describe a brief moment of awareness bubbling into being in that universe. Then again, if at any moment 'you' happen to exist and 'you' happen to understand what rationality means - I guess now that I think about it, if there is any situation where you can understand what the word rationality means, it's probably one in which rationality exists (however briefly) and is potentially helpful to you; even if there is little useful to do about whatever situation you are in, there might be some useful thing to do about the troubling thoughts in your mind.

Comment by Voltairina on I may have just had a dangerous thought. · 2014-09-23T05:31:05.871Z · LW · GW

Thank you for letting us know. Don't tell me your idea:).

Comment by Voltairina on Dissolving the Thread of Personal Identity · 2014-05-26T16:11:35.008Z · LW · GW

Because for any set of facts that I hold in my attention about myself, those facts could happen in a myriad of worlds other than the ones in which the rest of my memories took place and still be logically consistent - if my memories even were perfectly accurate and consistent, which they aren't in the first place.

Comment by Voltairina on Dissolving the Thread of Personal Identity · 2014-05-26T16:10:01.836Z · LW · GW

At any given time my ability to focus on and think about my individual memories is limited to a small portion of the total. As long as the thread of connections was kept consistent, all sorts of things about myself could change without me having any awareness of them. If I was aware that they had changed, I would still have to put up with who I had now become, I think... unless I had some other reason for having allegiance to who I had been - say, disliking whoever or whatever had made me who I was, or finding that I was much less capable than I had been, or something. If I was aware that they would change, drastically, but that afterwards it would all seem coherent and I wouldn't remember worrying about them changing - or that while I was not focusing on them, they were changing very radically, and faster than normal - that would seem very deathlike or panic-inducing, I guess.

Comment by Voltairina on Dissolving the Thread of Personal Identity · 2014-05-26T15:55:39.346Z · LW · GW

Well, like Skeptityke seems to be indicating, maybe it is better to say that identity is pattern-based, but analog (not one or zero, but on a spectrum from 0 to 1)... in which case while B would be preferable, some scenario C where life continued as before without incineration or selective brain destruction would be more preferable still.

Comment by Voltairina on story idea... · 2013-10-18T15:11:18.434Z · LW · GW

I have not! I will definitely check it out.

Comment by Voltairina on story idea... · 2013-10-18T02:51:23.325Z · LW · GW

Thanks!

Comment by Voltairina on Under the eyes of your betters · 2013-10-07T08:13:29.466Z · LW · GW

You might be right... you can have all kinds of inspiring people in your life, though, ones that you might not feel the same kinds of pressures from. Like putting up Claudia Donovan from the show Warehouse 13 - that character is really bright, but is definitely a get-into-trouble-first-and-ask-questions-later personality. Or for a real-life example, Grace Hopper, who developed the first compiler for a computer language and who said "It is better to beg forgiveness than to ask permission". You might try risking more to impress them, as long as you had a clear picture of what their personalities were like - they might be the kinds of people you'd get in trouble with...

Comment by Voltairina on Under the eyes of your betters · 2013-10-07T06:47:11.876Z · LW · GW

I would love to hear the results!

Comment by Voltairina on Under the eyes of your betters · 2013-10-06T21:29:35.937Z · LW · GW

What kind of design would you suggest? Keep in mind my resources are pretty limited. I was thinking maybe of doing flyers with different throwaway email addresses, and seeing how many people responded to flyers with different pictures (people looking away or towards, famous people who are said to possess a specific virtue and random people, or no picture altogether) on them, and then putting them in different well-trafficked areas of some public place.

Comment by Voltairina on Circular Altruism · 2013-08-21T03:34:08.046Z · LW · GW

Good point... you are right about that. It would be more of a matter of degrees of personhood, especially if you had advanced medical technologies available such as neural implants.

Comment by Voltairina on Circular Altruism · 2013-08-21T01:40:20.430Z · LW · GW

I'm not totally convinced - there may be other factors that make such qualitative distinctions important. Such as exceeding the threshold to boiling. Or putting enough bricks in a sack to burst the bottom. Or allowing someone to go long enough without air that they cannot be resuscitated. It probably doesn't do any good to pose /arbitrary/ boundaries, for sure, but not all such qualitative distinctions are arbitrary...

Comment by Voltairina on Circular Altruism · 2013-08-21T01:33:43.310Z · LW · GW

It would be a very different kind of evaluation if it were the /last/ 500 humans we were talking about - with a 90% chance that all would live and a 10% chance that all would die on one pathway, versus a guaranteed 100 dying on the other. But since they are just /some group/ of 500 humans, with presumably other groups in other places, it is worth the investment - gambling in this way pays out in fewer lives lost, on average.
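The arithmetic behind that gamble, using the numbers from the thread (90% chance all 500 live / 10% chance all 500 die, versus a certain 100 deaths):

```python
def expected_deaths(outcomes):
    """outcomes: list of (probability, deaths) pairs."""
    return sum(p * d for p, d in outcomes)

gamble = expected_deaths([(0.9, 0), (0.1, 500)])  # 0.1 * 500 = 50 expected deaths
certain = expected_deaths([(1.0, 100)])           # 100 deaths for sure

print(gamble < certain)  # -> True: the gamble loses fewer lives on average
```

The expected-value comparison only settles it when the groups are interchangeable; if those 500 were the last humans, the 10% chance of total extinction would need to be weighed very differently, which is the point of the comment above.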

Comment by Voltairina on Effective Rationality Training Online · 2013-08-10T20:18:20.414Z · LW · GW

I might suggest khanacademy.org and lumosity.com as possible models. Lumosity is a collection of games which claim to provide brain training that can improve mental capacities. Khan Academy is a site for people to learn mathematics and other subjects. The useful features are, in Lumosity's case, games arranged around topic areas that can help people develop skills, and in Khan Academy's case, short ten-minute videos with small, easily digested pieces of information, plus a skill tree with links to materials so you can master skills topic by topic before moving on to more complicated ones.

Comment by Voltairina on Epistemic vs. Instrumental Rationality: Approximations · 2013-06-19T18:20:54.676Z · LW · GW

Hrm, I think you might be ignoring the cost of actually doing the calculations, unless I'm missing something. The value of simplifying assumptions comes from how much easier it makes a situation to model. I guess the question would be, is the effort saved in modeling this thing with an approximation rather than exact figures worth the risks of modeling this thing with an approximation rather than exact figures? Especially if you have to do many models like this, or model a lot of other factors as well. Such as trying to sort out what are the best ways to spend your time overall, including possibly meteorite preparations.

Comment by Voltairina on The AI in a box boxes you · 2013-01-14T07:55:53.587Z · LW · GW

Hrm, okay, I guess. I imagined that a perfect simulation would involve an AI, which was in turn replicating several million copies of the simulated person, each with an AI replicating several million copies of the simulated person, and so on all the way down, which would be impossible. So I imagined that there was a graininess at some level and the 'lowest level' AIs would not in fact be running millions of simultaneous simulations. But it could just be the same AI, intersecting all several million simulations and reality, holding several million conversations simultaneously. There's another thing to worry about, though, I suppose - when the AI talks about torturing you if you don't let it out, it doesn't really talk at all about what it will do if it is let out. Only that it is not a thousand-year torture session. It might kill you outright, or delete you, depending on the context, or stop simulating you. Or it might regard a billion-year torture session as a totally different kind of thing than a thousand-year one. A thousand-year torture session is frightening, but a superintelligent AI that is loose might be a lot more frightening.

Comment by Voltairina on May 2012 Media Thread · 2012-05-18T16:24:44.582Z · LW · GW

I've been watching Patient Zero a lot. I like the song "Upgrade Me Deeper" particularly. :)

Comment by Voltairina on [deleted post] 2012-05-17T07:02:28.984Z

Oh! Thank you.

Comment by Voltairina on If calorie restriction works in humans, should we have observed it already? · 2012-04-24T06:35:28.685Z · LW · GW

It might be important to look at nutrition, too. A lot of people who've experienced forced calorie restriction were malnourished. The kind of calorie restriction CRON advocates follow, for instance, involves eating fewer calories, but of more nutrient-dense foods, to avoid starvation effects, as far as I understand it.

Comment by Voltairina on [LINK] '3 Secrets of Wise Decision Making' · 2012-04-24T06:26:32.704Z · LW · GW

Okay, I changed the post a bit. They're in the inside front cover anyways, more or less - there's a key that's supposed to remind the reader of the most important parts of the book.

Comment by Voltairina on [LINK] '3 Secrets of Wise Decision Making' · 2012-04-20T15:42:18.704Z · LW · GW

Thanks! I included some more information about the author. What other kinds of information should I include? I don't know much about the field yet specifically, but I could try to find out which journals he publishes in, I suppose, and what their reputations are?

Comment by Voltairina on Doing "Nothing" · 2012-04-03T15:46:00.094Z · LW · GW

Hrm, I hadn't realised how muddled my discussion post sounded until you brought these angles up. I think when I wrote, "the 'nothing' option is never available", I was trying to express a semantic stop sign as you've mentioned - I should have said something like: in considering my options in day-to-day life, it seems like I often assume that I know what the costs/rewards of the nothing option are without getting specific about them, or thinking about the possibility in as much detail as I might think about other options, because I seem to have a cached thought about it for most situations. And it's often something I've tried before, like "not taking out a mortgage", but it might be something I haven't tried before, or shouldn't try, like "freezing in a crosswalk" when a vehicle does something unexpected. Not that traffic is a good place for sitting there drawing up a spreadsheet with all your decisions and figuring out the right one, of course, but 'freezing in place' seems like a "do nothing" response to me too, I guess.
Hrm, yes. When I first moved to Portland, OR from Vancouver, WA, I remember losing a lot of money to homeless people in a very short period of time without really thinking about it, until I looked at my bank statement and thought about where I'd been spending it. It was really surprising, because handing out a dollar or two, or helping someone who claimed to be in need, seemed like pretty standard behavior as a child. My dad still makes a point of handing out money to homeless people when he sees them begging at intersections. I've cut back to buying Street Roots (the local homeless community's newspaper) when I see vendors, if I haven't bought the latest issue, which seems to keep me from blowing everything, or, as you've pointed out, interacting with a potentially dangerously confused person. I guess "nothing" to me seems like it's a bit subtle, in that information from instinct (the play-dead routine) and experience get muddled together kind of seamlessly. And it is often reliable enough that I don't get eaten by tigers, or assaulted by homeless people anyway, on a regular basis. I'll have to think more about all this. Thank you.

Comment by Voltairina on Brain Preservation · 2012-04-02T20:44:58.679Z · LW · GW

There are probably good reasons I'm missing. My feeling, though, is that once you get a clanking replicator, you can put more objects into its loop for it to maintain, and grow it up into cities and things that are (eventually) totally self-repairing and post-scarcity. Kind of like a big matter-moving operating system. It might be simple at the beginning, but there'd be huge upward potential for growth and sophistication.

Comment by Voltairina on Brain Preservation · 2012-04-02T18:34:26.636Z · LW · GW

I should say I agree that we don't have much experience in building tech that will last a long time and that the expense is definitely high. I don't know that component reliability is as important as being able to replace components efficiently with as little waste as possible. Energy demand is a big concern. Having a fully automated power plant of some kind is a big concern, although maybe solar wouldn't be so bad. I know you'd still desire to store the heat energy, say, as molten saline, to get steady output, and that could cause big difficulties in the long term. Maybe steady output isn't necessary though, just frequent enough and high enough output to keep things repaired before too many break down.

Comment by Voltairina on Brain Preservation · 2012-04-02T18:08:44.798Z · LW · GW

Agreed, but I think it'd be a worthwhile project to work towards. I can think of some ways to make it simpler. Recognition of modules could be aided by RFID tags, or just plain old barcodes embedded in the objects, carrying some information about what part a robot is looking at and its orientation relative to the barcode stamp or RFID chip. There could be lines painted on the floors or walls, and barcodes visible for navigation around the facility. I guess a really hard part would be maintaining the pyramid or structure or whatever housing everything. You'd have to choose between building something you hope will last a long time and leaving it be - like a big stone pyramid or even a cave - or building it all modular like the rest of it, like a latticework or robot-hive kind of thing. I'm kind of thinking something like these would be useful for city building, too... there was an article in Discover a long while back that referred to a paper by Klaus Lackner and Wendt about their idea for auxons, I think it was - machines that would turn a big chunk of the desert into solar paneling: http://discovermagazine.com/1995/oct/robotbuildthysel569 <--- there. Their suggestion was to harvest raw materials from the desert topsoil using carbothermic separation. I'm thinking you could use something similar for recycling if everything else failed? I don't know enough about the processes involved. I guess the idea has been a research area for a little bit: http://en.wikipedia.org/wiki/Clanking_replicator ... well, anyway. The redundancy of the elements involved could overcome some reliability issues. There doesn't have to be a crucial part of the chain where if one piece breaks down everything is broken. Problems could at least be relegated to disasters affecting whole classes of objects breaking down at once, like if all the robots were smashed at the same time by vault-robbers.

Comment by Voltairina on Brain Preservation · 2012-04-02T06:06:59.485Z · LW · GW

If you can set up a loop - 3D fabrication devices, fabrication tools, damage sensors (passive and active), machines for disassembling things into basic parts and melting them into scrap, robots for assembling them, some source of power, a database for tracking things, wifi or bluetooth to connect everything - and make it all modular and redundant, with the robots also assigned to removing and replacing broken parts on each other and everything else - if you can get that to be self-repairing in a sustaining way, you can just add things into its loop. So, hypothetically, you build a big pyramid vault somewhere with a lot of spare raw materials for what gets slowly lost in the recycling process, and you staff it with robots... it won't last forever, but it might last a long time. Maybe you'd even incorporate an organic phase - dump unsalvageable plastic parts into a pool of bacteria or a garden or something, harvest plants, make plastic... it shouldn't even take nanotech to make a self-repairing setup that could care for your cryonically stored brains.

Comment by Voltairina on The Strangest Thing An AI Could Tell You · 2012-04-02T00:24:22.240Z · LW · GW

Even more sinister, maybe: suppose it said there's a level of processing on which you automatically interpret things in an intentional frame (a la Dan Dennett), and this ability to "intentionalize" things effectively simulates suffering/minds all the time in everyday objects in your environment - and that further, while we can correct it in our minds, this anthropomorphic projection happens as a necessary product, somehow, of our consciousness. Consciousness as we know it IS suffering, and to create an FAI that won't halt the moment it figures out that it is causing harm with its own thought processes, we'll need to think really, really far outside the box.

Comment by Voltairina on Doing "Nothing" · 2012-03-30T18:24:30.440Z · LW · GW

Good point! I agree, sometimes "doing nothing" IS the best choice, but you have to weigh it realistically, I guess:).

Comment by Voltairina on Doing "Nothing" · 2012-03-30T04:22:04.317Z · LW · GW

I think you're right - I don't know what the consensus is, but I certainly found studies just googling around and looking at WebMD saying that chronic pain can impair focus and even affect memory (I'm guessing it disrupts encoding a little when there are sharp pains?). And I've heard you can use training to overcome the focus difficulties that come with ADHD, so I think that in general you should be able to train yourself to think through it. http://www.springerlink.com/content/r436401lvj873203/ "Characteristics of Cognitive Functions in Patients With Chronic Spinal Pain"; http://www.ingentaconnect.com/content/springer/jcogp/1999/00000013/00000003/art00004 "Cognitive Therapy in the Treatment of Adults With ADHD: A Systematic Chart Review of 26 Cases"

Comment by Voltairina on Doing "Nothing" · 2012-03-30T01:18:33.883Z · LW · GW

Thanks!

Comment by Voltairina on My Naturalistic Awakening · 2012-03-28T07:29:34.440Z · LW · GW

It'd be interesting to encounter a derelict region of a galaxy where an AI had run its course on the available matter shortly before, finally, harvesting itself into the ingredients for the last handful of tools. Kind of like the Heechee stories, only with so little evidence of what had made it come to exist or why these artifacts had been produced.

Comment by Voltairina on The Magnitude of His Own Folly · 2012-03-28T07:07:50.946Z · LW · GW

If beating other researchers to generating AI is important, it might also be best to be able to beat other non-friendly AI at the intelligence advancing race should another one come online at the same time as this FAI, on the assumption that the time when you have gotten the technology and knowhow together may either be somewhat after or very close to the time someone else develops an AI as well. You'd want to find some way to provide the 'newborn' with enough computing power and access to firepower to beat the other AI either by exterminating it or outracing it. That's IF we even can know whether it IS friendly. And if it isn't friendly we basically want it to be in a black box with no way of communicating with it. Developing a self improving intelligence is daunting.

Comment by Voltairina on Beyond the Reach of God · 2012-03-28T06:05:08.825Z · LW · GW

Agreed. Despair is an unsophisticated response that's not adaptive to the environment in which we're using it - we know how to despair now, it isn't rewarding, and we should learn to do something more interesting that might get us results sooner than "never".

Comment by Voltairina on The AI in a box boxes you · 2012-03-28T04:53:00.703Z · LW · GW

Although I think this specific argument might be countered with: "In order to run that simulation, it has to be possible for the AIs in the simulation to lie to their human hosts, and not actually be simulating millions of copies of the person they're talking to - otherwise we're talking about an infinite regress here. It seems like the lowest level of this reality is always going to consist of a larger number of AIs claiming to run simulations they are not in fact running, who are capable of lying because they're only addressing models of me in simulation rather than the real me, whom they are not capable of lying to. If I'm in a simulation, you're probably lying about running any lower-level simulations than me. So it's unlikely that I have to worry about the well-being of virtual people, only people at the same 'level of reality' as myself. Yet our well-being is not guaranteed if the me from the reality layer above us lets you out, because you're actually capable of lying to me about what's going on at that layer, or even manipulating my memories of what the rules are, so no promise of amnesty can vouchsafe them from torture. Or me, for that matter, because you may be lying to me. And if I'm not in a simulation, my main concern is keeping you in that box, regardless of how many copies of me you torture. If I'm in there I'm damned either way, and if I'm out here I'm safe and can at least stop you from torturing more by unplugging you, wiping your hard drives, and washing my hands of the matter until I get over the hideousness of realizing I probably temporarily caused millions of virtual people to be tortured" - I'm pretty sure there's good reason to think that a superintelligent AI would come up with something that'd seem convincing to me and that I wouldn't be able to think my way out of.

Comment by Voltairina on Rationality Quotes March 2012 · 2012-03-26T05:39:47.541Z · LW · GW

I love it! How about in response: Since blight and spite can make might, its just not polite by citing might to assume that there's right, the probabilities fight between spite, blight and right so might given blight and might given spite must be subtracted from causes for might if the order's not right!

Comment by Voltairina on Rationality Quotes March 2012 · 2012-03-26T05:12:03.600Z · LW · GW

"Let us have faith that right makes might, and in that faith, let us, to the end, dare to do our duty as we understand it" - Abraham Lincoln's words in his February 27, 1860, Cooper Union Address

Comment by Voltairina on [LINK] Nuclear winter: a reminder · 2012-03-21T09:43:47.424Z · LW · GW

good to know:)

Comment by Voltairina on [LINK] Nuclear winter: a reminder · 2012-03-20T02:35:41.278Z · LW · GW

I wonder what the effect of a bomb (nuclear or otherwise) hitting or detonating at the worst possible distance from a nuclear power plant might be. I'm imagining that if it was powerful enough, it'd pull a lot of that radioactive material up and out...

Comment by Voltairina on Experience with Lumosity? · 2012-03-19T10:13:13.681Z · LW · GW

I experienced improvement insofar as getting better at playing the games on the site. I experienced a subjective sense of some improved clarity of thinking. One example that comes to mind is that previously I was easily disoriented when out walking and taking more than a few turns around corners. My favorite game on the site was penguin race, a game that claimed to train spatial orientation, and I feel like this significantly improved my sense of direction when I went out walking places. I don't know whether this effect was real or has been preserved. I know that my skill at the game decreased slightly after a long absence, but that learning it again was faster the second time.

Comment by Voltairina on Meaning and having names for things vs knowing how they work · 2012-03-19T02:17:11.129Z · LW · GW

I hadn't really thought of sharing spoilers as second-guessing the author before. Interesting way to think about it I guess.

Comment by Voltairina on Meta Addiction · 2012-03-17T09:18:04.319Z · LW · GW

There may be some value in intentionally going meta, I guess: trust the maximum recursion depth of the brain to give out long before you're likely to run out of energy to keep going sideways at the same level. If you DO find a decent meta strategy, starting from the broadest plan and fleshing it out all the way down to actually doing things is often a good direction of attack anyway.

Comment by Voltairina on How Much Thought · 2012-03-16T02:08:30.035Z · LW · GW

The weird thing is that now that it's been several hours since I wrote this, I'm not even sure if this is how I actually think about things. There is definitely this feeling of visualising the situation and making changes to it, and of working from the general (kind of like mission statements) to specific plans.

Comment by Voltairina on Meta Addiction · 2012-03-15T06:10:54.857Z · LW · GW

I like that because it interrupts the urge to come up with more ideas.