Comment by evand on Satoshi Nakamoto? · 2017-11-03T01:26:13.777Z · score: 1 (1 votes) · LW · GW

Obvious kinds of humans include:

Dead humans. (Who didn't manage to leave the coins to their heirs.)

Cryonically preserved humans hoping to use them later. (Including an obvious specific candidate.)

Humans optimistic enough about Bitcoin to think current prices are too low. (We know Nakamoto had resources, so it seems a safe bet that they could keep living on ordinary means for now.)

And the obvious: you don't know that all of Nakamoto's coins fit the standard assumed profile. It's entirely possible they intentionally mined some with the regular setup and are spending a few from that pool.

Comment by evand on Inadequacy and Modesty · 2017-10-30T03:18:34.924Z · score: 1 (1 votes) · LW · GW

The advanced answer to this is to create conditional prediction markets. For example: a market for whether or not the Bank of Japan implements a policy, a market for the future GDP or inflation rate of Japan (or whatever your preferred metric is), and a conditional market for (GDP given policy) and (GDP given no policy).

Then people can make conditional bets as desired, and you can report your track record, and so on. Without a prediction market you can't, in general, solve the problem of "how good is this prediction track record really" except by looking at it in detail and making judgment calls.
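A minimal sketch of how the three markets combine, with made-up prices (the policy question, the metric, and every number here are hypothetical, purely for illustration):

    # Hypothetical example: reading off an implied policy effect from
    # conditional market prices. All numbers are invented for illustration.

    # Price of the binary "policy happens" market, read as a probability.
    p_policy = 0.40

    # Conditional markets: expected inflation (%) given policy / given no policy.
    # These would be markets that settle only in the matching branch and are
    # refunded otherwise.
    infl_given_policy = 1.8
    infl_given_no_policy = 0.6

    # Unconditional expectation implied by the three markets together.
    expected_inflation = (p_policy * infl_given_policy
                          + (1 - p_policy) * infl_given_no_policy)

    # The quantity the debate is actually about: the implied effect of the policy.
    implied_effect = infl_given_policy - infl_given_no_policy

    print(f"Implied E[inflation]: {expected_inflation:.2f}%")
    print(f"Implied effect of policy: {implied_effect:+.2f} percentage points")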

Comment by evand on Scope Insensitivity · 2017-06-20T05:32:49.309Z · score: 0 (0 votes) · LW · GW

I hope you have renter's insurance, knowledge of a couple evacuation routes, and backups for any important data and papers and such.

Comment by evand on Bet or update: fixing the will-to-wager assumption · 2017-06-11T16:54:39.793Z · score: 1 (1 votes) · LW · GW

I'm not aware of any legal implications in the US. US gambling laws basically only apply when there is a "house" taking a cut or betting to their own advantage or similar. Bets between friends where someone wins the whole stake are permitted.

As for the shady implications... spend more time hanging out with aspiring rationalists and their ilk?

Comment by evand on Bet or update: fixing the will-to-wager assumption · 2017-06-08T14:24:10.022Z · score: 0 (0 votes) · LW · GW

The richer structure you seek for those two coins is your distribution over their probabilities. They're both 50% likely to come up heads, given the information you have. You should be willing to make exactly the same bets about them, assuming the person offering you the bet has no more information than you do. However, if you flip each coin once and observe the results, your new probability estimates for the next flips are now different.

For example, for the second coin you might have a uniform distribution (ignorance prior) over the set of all possible probabilities. In that case, if you observe a single flip that comes up heads, your probability that the next flip will be heads is now 2/3.
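For anyone who wants to check that 2/3 figure, here is a quick numerical sketch of the same update (uniform prior over the coin's bias, one observed head); nothing here is specific to the comment I'm replying to:

    # Sketch: a brute-force numerical check of the 2/3 figure.
    # Uniform (ignorance) prior over the coin's bias p; observe one head;
    # the probability of heads on the next flip is E[p | one head observed].
    import numpy as np

    p = np.linspace(0.0, 1.0, 1_000_001)   # fine grid over possible biases
    prior = np.ones_like(p)                # uniform prior
    posterior = prior * p                  # likelihood of the observed head is p
    posterior /= posterior.sum()           # normalize on the grid

    print((p * posterior).sum())           # ~0.6667, i.e. 2/3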

Comment by evand on [deleted post] 2017-05-30T19:57:56.918Z

Well, in general, I'd say achieving that reliability through redundant means is totally reasonable, whether in engineering or people-based systems.

At a component level? Lots of structural components, for example. Airplane wings stay attached with fairly high reliability, and my impression is that while there is plenty of margin in the strength of the attachment, it's not like the underlying bolts are being replaced because they failed with any regularity.

I remember an aerospace discussion about a component (a pressure switch, I think?). NASA wanted documentation for 6 9s of reliability, and expected some sort of very careful fault tree analysis and testing plan. The contractor instead used an automotive component (brake system, I think?), and produced documentation of field reliability at a level high enough to meet the requirements. Definitely an example where working to get the underlying component that reliable was probably better than building complex redundancy on top of an unreliable component.

Comment by evand on [deleted post] 2017-05-26T13:11:22.155Z

You might also want a mechanism to handle "staples" that individuals want. I have a few foods / ingredients I like to keep on hand at all times, and be able to rely on having. I'd have no objections to other people eating them, but if they did I'd want them to take responsibility for never leaving the house in a state of "no X on hand".

Comment by evand on [deleted post] 2017-05-26T13:06:27.086Z

Those numbers sound like reasonable estimates and goals. Having taught classes at TechShop, I'd say that first handful of hours is important. 20 hours of welding instruction ought to be enough that you know whether you like it and can build some useful things, but probably not enough to get even an intro-level job. It should give you a clue as to whether signing up for a community college class is a good idea or not.

Also I'm really confused by your inclusion of EE in that list; I'd have put it on the other one.

Comment by evand on [deleted post] 2017-05-25T23:54:06.058Z

However, I'm skeptical of systems that require 99.99% reliability to work. Heuristically, I expect complex systems to be stable only if they are highly fault-tolerant and degrade gracefully.

On the other hand... look at what happens when you simply demand that level of reliability, put in the effort, and get it. From my engineering perspective, that difference looks huge. And it doesn't stop at 99.99%; the next couple nines are useful too! The level of complexity and usefulness you can build from those components is breathtaking. It's what makes the 21st century work.

I'd be really curious to see what happens when that same level of uncompromising reliability is demanded of social systems. Maybe it doesn't work, maybe the analogy fails. But I want to see the answer!

Comment by evand on Hidden universal expansion: stopping runaways · 2017-05-16T17:29:19.070Z · score: 0 (0 votes) · LW · GW

What happens when the committed scorched-earth-defender meets the committed extortionist? Surely a strong precommitment to extortion by a powerful attacker can defeat a weak commitment to scorched earth by a defender?

It seems to me this bears a resemblance to Chicken or something, and that on a large scale we might reasonably expect to see both sets of outcomes.

Comment by evand on Change utility, reduce extortion · 2017-04-28T18:11:17.999Z · score: 2 (2 votes) · LW · GW

What's that? If I don't give into your threat, you'll shoot me in the foot? Well, two can play at that game. If you shoot me in the foot, just watch, I'll shoot my other foot in revenge.

Comment by evand on Defining the normal computer control problem · 2017-04-27T16:27:38.800Z · score: 1 (1 votes) · LW · GW

On the other hand... what level do you want to examine this at?

We actually have pretty good control of our web browsers. We load random untrusted programs, and they mostly behave ok.

It's far from perfect, but it's a lot better than the desktop OS case. Asking why one case seems to be so much farther along than the other might be instructive.

Comment by evand on Defining the normal computer control problem · 2017-04-27T15:58:28.810Z · score: 0 (0 votes) · LW · GW

Again, I'm going to import the "normal computer control" problem assumptions by analogy:

  • The normal control problem allows minor misbehaviour, but that it should not persist over time

Take a modern milling machine. Modern CNC mills can include a lot of QC. They can probe part locations, so that the setup can be imperfect. They can measure part features, in case a raw casting isn't perfectly consistent. They can measure the part after rough machining, so that the finish pass can account for imperfections from things like temperature variation. They can measure the finished part, and reject or warn if there are errors. They can measure their cutting tools, and respond correctly to variation in tool installation. They can measure their cutting tools to compensate for wear, detect broken tools, switch to the spare cutting bit, and stop work and wait for new tools when needed.

Again, I say: we've solved the problem, for things literally as simple as pounding a nail, and a good deal more complicated. Including variation in the nails, the wood, and the hammer. Obviously the solution doesn't look like a fixed set of voltages sent to servo motors. It does look like a fixed set of parts that get made.

How involved in the field of factory automation are you? I suspect the problem here may simply be that the field is more advanced than you give it credit for.

Yes, the solutions are expensive. We don't always use these solutions, and often it's because using the solution would cost more and take more time than not using it, especially for small quantity production. But the trend is toward more of this sort of stuff being implemented in more areas.

The "normal computer control problem" permits some defects, and a greater than 0% error rate, provided things don't completely fall apart. I think a good definition of the "hammer control problem" is similar.

Comment by evand on Defining the normal computer control problem · 2017-04-27T15:25:36.065Z · score: 0 (0 votes) · LW · GW

It bends the nails, leaves dents in the surface and given the slightest chance will even attack your fingers!

We've mostly solved that problem.

I'm not sure that being able to nearly perfectly replicate a fixed set of physical actions is the same thing as solving a control problem.

It's precisely what's required to solve the problem of a hammer that bends nails and leaves dents, isn't it?

Stuxnet-type attacks

I think that's outside the scope of the "hammer control problem" for the same reasons that "an unfriendly AI convinced my co-worker to sabotage my computer" is outside the scope of the "normal computer control problem" or "powerful space aliens messed with my FAI safety code" is outside the scope of the "AI control problem".

It is worth noting that the type of control that you mention (e.g. "computer-controlled robots") is all about getting as far from "agenty" as possible.

I don't think it is, or at least not exactly. Many of the hammer failures you mentioned aren't "agenty" problems, they're control problems in the most classical engineering sense: the feedback loop my brain implements between hammer state and muscle output is incorrect. The problem exists with humans, but also with shoddily-built nail guns. Solving it isn't about removing "agency" from the bad nail gun.

Sure, if agency gets involved in your hammer control problem you might have other problems too. But if the "hammer control problem" is to be a useful problem, you need to define it as not including all of the "normal computer control problem" or "AI control problem"! It's exactly the same situation as the original post:

  • The normal control problem assumes that no specific agency in the programs (especially not super-intelligent agency)

Comment by evand on [Stub] Extortion and Pascal's wager · 2017-04-27T14:57:59.936Z · score: 3 (3 votes) · LW · GW

They usually don't have any way to leverage their models to increase the cost of not buying their product or service though; so such a situation is still missing at least one criterion.

Modern social networks and messaging networks would seem to be a strong counterexample. Any software with both network effects and intentional lock-in mechanisms, really.

And honestly, calling such products a blend of extortion and trade seems intuitively about right.

To try to get at the extortion / trade distinction a bit better:

Schelling gives us definitions of promises and threats, and also observes there are things that are a blend of the two. The blend is actually fairly common! I expect there's something analogous with extortion and trade: you can probably come up with pure examples of both, but in practice a lot of examples will be a blend. And a lot of the 'things we want to allow' will look like 'mostly trade with a dash of extortion' or 'mostly trade but both sides also seem to be doing some extortion'.

Comment by evand on Defining the normal computer control problem · 2017-04-26T23:58:48.250Z · score: 2 (2 votes) · LW · GW

We've (mostly) solved the hammer control problem in a restricted domain. It looks like computer-controlled robots. With effort, we can produce an entire car or similar machine without mistakes.

Obviously we haven't solved the control problem for those computers: we don't know how to produce that car without mistakes on the first try, or with major changes. We have to be exceedingly detailed in expressing our desires. Etc.

This may seem like we've just transformed it into the normal computer control problem, but I'm not entirely sure. Air-gapped CNC machinery running embedded OSes (or none at all) is pretty well behaved. It seems to me more like "we don't know how to write programs without testing them" than the "normal computer control problem".

Comment by evand on Background Reading: The Real Hufflepuff Sequence Was The Posts We Made Along The Way · 2017-04-26T23:47:11.942Z · score: 1 (1 votes) · LW · GW

You May Not Believe In Guess[Infer] Culture But It Believes In You

I think this comment is the citation you're looking for.

Comment by evand on I Want To Live In A Baugruppe · 2017-03-18T22:22:17.429Z · score: 1 (1 votes) · LW · GW

On the legality of selecting your buyers: What if you simply had an HOA (or equivalent) with high dues, that did rationalist-y things with the dues? Is that legal, and do you think it would provide a relevant selection effect?

Comment by evand on Increasing GDP is not growth · 2017-02-19T18:35:19.663Z · score: 1 (1 votes) · LW · GW

We might also want to compute the sum of the GDP of A and B: does that person moving cause more net productivity growth in B than loss in A?

Comment by evand on True understanding comes from passing exams · 2017-02-10T03:47:02.219Z · score: 1 (1 votes) · LW · GW

Possibly a third adversarial AI? Have an AI that generates the questions based on P; it is rewarded if the second AI evaluates their probability as close to 50%, if the first AI is able to get them right based on P', and if the human gets them wrong.

That's probably not quite right; we want the AI to generate hard but not impossible questions. Possibly some sort of term about the AIs predicting whether the human will get a question right?

Comment by evand on Hacking humans · 2017-02-03T01:23:30.263Z · score: 3 (3 votes) · LW · GW

That seems amazingly far from a worst case scenario.

Comment by evand on A question about the rules · 2017-02-03T01:21:30.722Z · score: 0 (0 votes) · LW · GW

Have you read Politics is the Mind-Killer? I get the vague sense you haven't, and I see lots of references here to it but no direct link. If you haven't, you should go read it and every adjacent article.

Edit: actually there is a link below already. Oops.

Comment by evand on Project Hufflepuff · 2017-01-19T18:55:15.864Z · score: 4 (4 votes) · LW · GW

I strongly favor this project and would love to read more on the subject. I'm hopeful that its online presence happens here where I'm likely to read it, and that it doesn't vanish onto Tumblr or Facebook or something similarly inaccessible.

Comment by evand on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-19T18:52:15.681Z · score: 2 (2 votes) · LW · GW

I notice that I (personally) feel an ugh response to link posts and don't like being taken away from LW when I'm browsing LW. I'm not sure why.

I do too. I don't know all the reasons, but one is simply web page design. The external page is often slow to load and unpleasant to read in comparison. This often comes with no benefit relative to just having the text in the post on LW.

Additionally, I assume that authors on other sites are a lot less likely to engage in discussion on LW, whether in comments or further posts. That seems like a big minus to me.

Comment by evand on The Number Choosing Game: Against the existence of perfect theoretical rationality · 2016-01-29T04:23:58.548Z · score: 0 (0 votes) · LW · GW

Seeing as this is an entire article about nitpicking and mathematical constructs...

perfect rationality means to me more rational than any other agent. I think that is a reasonable definition.

Surely that should be "at least as rational as any other agent"?

Comment by evand on Levels of global catastrophes: from mild to extinction · 2015-12-30T02:12:43.421Z · score: 1 (1 votes) · LW · GW

I think you're pessimistic about tech regression.

Assuming survival of some libraries, I think basically any medium-sized functional village (thousands of people, or hundreds with a dash of trade) is adequate to maintain iron age technology. That's valuable enough that any group that survived in a fixed location for more than a couple years could see the value in the investment. (You might not even need the libraries if the right sort of person survived; I suspect I could get a lot of it without that, but it would be a lot less efficient.)

It doesn't take all that much more beyond that to get to some mix of 17th to 19th century tech. Building a useful early 19th-century machine shop is the work of one or two people, full time, for several years. Even in the presence of scavenging, I think such technology is useful enough that it won't take that long to be worth spending time on.

Basically I think anything that's survivable is unlikely to regress to before 17th century tech for a period longer than a few years.

Comment by evand on LINK: An example of the Pink Flamingo, the obvious-yet-overlooked cousin of the Black Swan · 2015-11-08T21:30:35.568Z · score: 1 (1 votes) · LW · GW

So, this is exactly the sort of thing prediction markets should do well at, right? People without structural incentives to ignore a problem can make accurate predictions and make money. People who care about it can point to the market prices when making their point.

In the black swan case, I think prediction markets will do only somewhat better than alternatives, but here they should do vastly better. Right?

Comment by evand on Solstice 2015: What Memes May Come (Part II - Atheism, Rationality and Death) · 2015-11-08T21:27:04.458Z · score: 2 (2 votes) · LW · GW

Agreed. It's silly. This site needs more active tending in general, in my opinion.

In the mean time, you can bookmark this link.

Comment by evand on Probabilities Small Enough To Ignore: An attack on Pascal's Mugging · 2015-09-16T23:14:20.365Z · score: 4 (6 votes) · LW · GW

It's weird, but it's not quite the same as bounded utility (though it looks pretty similar). In particular, there's still a point in saying it has double the utility even though you sometimes won't accept it at half the utility. Note the caveat "sometimes": at other times, you will accept it.

Suppose event X has utility U(X) = 2 U(Y). Normally, you'll accept it instead of Y at anything over half the probability. But if you reduce the probabilities of both events enough, that changes. If you simply had a bound on utility, you would get a different behavior: you'd always accept X at over half the probability of Y, for any P(Y), unless the utility of Y was too high. These behaviors are both fairly weird (except in the universe where there's no possible construction of an outcome with double the utility of Y, or the universe where you can't construct a sufficiently low probability for some reason), but they're not the same.
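A toy comparison of the two rules, with everything invented purely for illustration (the threshold, the bound, and the payoffs are arbitrary):

    # Toy comparison of "ignore probabilities below a threshold" versus
    # "bound the utility function". All numbers are made up for illustration.
    EPSILON = 1e-6       # probability threshold for the first rule
    U_MAX = 1e9          # utility bound for the second rule

    def ev_threshold(utility, prob):
        """Expected value, treating sub-threshold probabilities as zero."""
        return 0.0 if prob < EPSILON else utility * prob

    def ev_bounded(utility, prob):
        """Expected value with the utility clipped at U_MAX."""
        return min(utility, U_MAX) * prob

    u_y = 100.0
    u_x = 2 * u_y        # X is worth double Y

    for p_y in (0.5, 1e-6):          # ordinary vs. tiny probabilities
        p_x = 0.6 * p_y              # X offered at just over half Y's probability
        print(f"P(Y)={p_y:g}:")
        print("  threshold rule prefers",
              "X" if ev_threshold(u_x, p_x) > ev_threshold(u_y, p_y) else "Y")
        print("  bounded rule prefers",
              "X" if ev_bounded(u_x, p_x) > ev_bounded(u_y, p_y) else "Y")

The threshold rule switches from X to Y once the probabilities get small enough, while the bounded rule keeps preferring X (unless the utilities are near the bound), which is the asymmetry described above.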

Comment by evand on Median utility rather than mean? · 2015-09-10T03:30:35.629Z · score: 0 (0 votes) · LW · GW

I basically agree. However...

It might be more amenable to MCMC sampling than you think. MCMC basically is a series of operations of the form "make a small change and compare the result to the status quo", which now that I phrase it that way sounds a lot like human ethical reasoning. (Maybe the real problem with philosophy is that we don't consider enough hypothetical cases? I kid... mostly...)

In practice, the symmetry constraint isn't as nasty as it looks. For example, you can do MH to sample a random node from a graph, knowing only local topology (you need some connectivity constraints and a long enough walk to get good diffusion properties). Basically, I posit that the hard part is coming up with a sane definition for "nearby possible world" (and that the symmetry constraint and other parts are pretty easy after that).
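As a concrete illustration of the graph example, here's a minimal Metropolis-Hastings sketch that samples a node approximately uniformly using only each node's neighbor list; the example graph and step count are made up:

    # Sketch: Metropolis-Hastings random walk that samples a node (approximately)
    # uniformly, using only local topology (each node's neighbor list).
    # The degree correction in the acceptance ratio is what makes the stationary
    # distribution uniform rather than degree-biased.
    import random

    def mh_uniform_node(start, neighbors, steps=1000):
        """neighbors: dict mapping node -> list of adjacent nodes."""
        current = start
        for _ in range(steps):
            proposal = random.choice(neighbors[current])
            # Proposal density is 1/deg(current) forward, 1/deg(proposal) backward;
            # the target is uniform, so accept with min(1, deg(current)/deg(proposal)).
            accept = min(1.0, len(neighbors[current]) / len(neighbors[proposal]))
            if random.random() < accept:
                current = proposal
        return current

    # Tiny example graph (a hub with three leaves); needs enough steps to mix well.
    graph = {"a": ["b"], "b": ["a", "c", "d"], "c": ["b"], "d": ["b"]}
    print(mh_uniform_node("a", graph))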

Comment by evand on Median utility rather than mean? · 2015-09-09T14:37:58.960Z · score: 0 (0 votes) · LW · GW

If you can "sample ... from your probability distribution" then you fully know your probability distribution

That's not true. (Though it might well be in all practical cases.) In particular, there are good algorithms for sampling from unknown or uncomputable probability distributions. Of course, any method that lets you sample from it lets you sample the parameters as well, but that's exactly the process the parent comment is suggesting.

Comment by evand on Actually existing prediction markets? · 2015-09-03T15:03:07.468Z · score: 0 (0 votes) · LW · GW

Yes, I suppose my comment wasn't clear. There are twice as many distinct prices as there should be, not 4x. There should only be one price per candidate (plus an additional price for "other" in many cases). The "buy no" price for a single candidate should be equal to the sum of the "buy yes" prices for all the other candidates, and that relationship should be fully enforced by the exchange.
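A small arithmetic check of that constraint, with made-up prices:

    # Made-up prices illustrating the consistency constraint: with exhaustive,
    # mutually exclusive outcomes whose "yes" prices sum to 1, the "no" price
    # for any one candidate is just the sum of everyone else's "yes" prices.
    yes_prices = {"Alice": 0.55, "Bob": 0.30, "Other": 0.15}
    assert abs(sum(yes_prices.values()) - 1.0) < 1e-9

    for name, yes in yes_prices.items():
        no = sum(p for other, p in yes_prices.items() if other != name)
        print(f"{name}: yes={yes:.2f}, no={no:.2f}, yes+no={yes + no:.2f}")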

Comment by evand on Stupid Questions September 2015 · 2015-09-03T14:59:23.332Z · score: 1 (1 votes) · LW · GW

Perhaps buying coffees for people in line around me?

That seems like a cheap experiment. Have you tried it? What else have you tried for purchasing warm fuzzies?

Comment by evand on Actually existing prediction markets? · 2015-09-03T14:53:36.200Z · score: 0 (0 votes) · LW · GW

You can build systems that preserve sum of probabilities = 1. They'll still see bias away from the extremes, because of fees and because of the time value of money. But you can do a lot better than PredictIt. (One thing that helps on the fees side is to make fees go down for trades near the extremes; I argued for that in detail on Augur here.)

Comment by evand on Actually existing prediction markets? · 2015-09-03T14:50:16.056Z · score: 0 (0 votes) · LW · GW

Even so, at the moment there are sane interest rates available if you tie up your money that way. It's not just the lack of netting; it's the lack of netting, combined with the small deposit limits, combined with the high withdrawal fees. Fix any of those, and you'd see more arbitrage (I think).

Also, they have a really dumb system where each candidate has both yes and no shares, instead of each election having shares per candidate. Which means there are more different prices than there should be, and no system-enforced rule that the sum of the probabilities = 1.

Comment by evand on Actually existing prediction markets? · 2015-09-02T23:53:53.078Z · score: 6 (6 votes) · LW · GW

Truthcoin and its cousin Augur deserve a mention, even though neither is actually operational yet. (They're decentralized prediction markets on a blockchain.)

Idea Futures is still running (play money), but is functionally nearly dead and has very low liquidity. Once upon a time it was the best option for play money markets, and quite good.

Fairlay is a half-decent (though centralized) crypto market, though it's structured as a "betting market" and has no way to sell back predictions at a profit or a loss (you can place later predictions to hedge your risk equivalently, but you end up tying up a lot of money). Liquidity is bad.

BitBet runs some sort of weird time-weighted pari-mutuel system that I don't like, and has a lot of complaints about shady operations (e.g. very, very bad customer support that results in people losing money due to interface mistakes), but they often have actual liquidity.

As far as I can discern, the current state of things is abysmal, but I'm pretty optimistic about Truthcoin (less so about Augur in some ways, but optimistic there too).

Worth noting that PredictIt's odds should give you pause: there's money on the table betting "No" on all the presidential candidates, and I find it concerning that they can't interest anyone in arbitraging that away.

Comment by evand on Open Thread August 31 - September 6 · 2015-09-02T19:48:34.573Z · score: 3 (3 votes) · LW · GW

That's an excellent practical example, though it doesn't really have the explicit probability math I was hoping for.

In particular, I like how the question of which player thinks the partnership has the better contract flips back and forth, especially around auctions involving controls, stops, or other specific invitational questions. The concept of evaluating your hand within a window ("My hand is now very weak, given that I opened") is also explicitly reasoning about what your partner infers based on what you told them.

I think the most important thing here might be that bridge requires multiple rounds because bidding is limited bandwidth, whereas giving a full-precision probability estimate is not.

Comment by evand on Open Thread August 31 - September 6 · 2015-09-02T19:39:43.205Z · score: 0 (0 votes) · LW · GW

Here's how that goes. I flip 3 coins. Say I get 2 heads. My probability estimate for "there are 4+ heads total" is now 4/8 (the probability that 2 or 3 of your coins are heads). For the full set of outcomes I can have, the options are: (0H, 0/8) (1H, 1/8) (2H, 4/8) (3H, 7/8). You perform the same reasoning. Then we each share our probability estimates with the other. Say that on the first round, we each share estimates of 50%. Then we can each deduce that the other saw exactly two heads, and on the second round (and forever after) both our estimates become 100%. For all possible outcomes, my first round probability tells you exactly how many heads I flipped, and vice versa; as soon as we share probabilities once, we both know the answer and agree.
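Here's a short sketch that reproduces the table above (my three coins versus yours, estimating the chance of 4+ heads total):

    # Sketch reproducing the table above: I see k heads among my 3 coins and
    # estimate P(at least 4 heads among all 6) as P(you have >= 4-k heads of 3).
    from math import comb

    def p_at_least(m, n=3):
        """Probability of at least m heads in n fair coin flips."""
        if m <= 0:
            return 1.0
        if m > n:
            return 0.0
        return sum(comb(n, j) for j in range(m, n + 1)) / 2 ** n

    for k in range(4):  # heads I observed among my 3 coins
        print(f"I saw {k} heads -> my estimate of 4+ total: {p_at_least(4 - k)}")

Since each of the four possible estimates corresponds to a unique number of heads on my side, announcing the estimate once reveals my information completely, which is why the game ends after a single exchange.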

(Also, you're not using "confidence interval" in the correct manner. A confidence interval is defined over an expectation, not a posterior probability.)

I still don't see any version of this that's simpler than Finney's that actually makes use of multiple rounds, and when I fix the math on Finney's version it's decidedly not simple.

Comment by evand on Open Thread August 31 - September 6 · 2015-09-02T18:39:12.137Z · score: 0 (0 votes) · LW · GW

What change would you make that results in multiple rounds being required?

For example, if each player flips multiple coins, and then we share probability estimates for "all coins heads" or "majority of coins heads" or expectations for number of heads, in each case the first time I share my summary, I am sharing info that exactly tells the other player what information I have (and vice versa). So we will agree exactly from the second round onwards.

Comment by evand on Open Thread August 31 - September 6 · 2015-09-01T18:42:07.146Z · score: 0 (0 votes) · LW · GW

But then everyone has the exact same information, right? I'm specifically looking for something that's like Hal Finney's game, in that the different players have different information, and communicate some different set of information (some sort of knowledge about the state of the world, like their posteriors on the joint data).

Comment by evand on Open Thread August 31 - September 6 · 2015-09-01T16:47:51.124Z · score: 0 (0 votes) · LW · GW

That seems like fertile ground for exploration, but no probability / agreement variation immediately springs to mind. Did you have something specific in mind?

Comment by evand on Open Thread August 31 - September 6 · 2015-09-01T14:59:41.671Z · score: 2 (2 votes) · LW · GW

I'm looking for a good demonstration of Aumann's Agreement Theorem that I could actually conduct between two people competent in Bayesian probability. Presumably this would have a structure where each player performs some randomizing action, then they exchange information in some formal way in rounds, and eventually reach agreement.

A trivial example: each player flips a coin in secret, then they repeatedly exchange their probability estimates for a statement like "both coin flips came up heads". Unfortunately, for that case they both agree from round 2 onwards. Hal Finney has a version that seems to kinda work, but his reasoning at each step looks flawed. (As soon as I try to construct a method for generating the hints, I find that at each step when I update my estimate for my opponent's hint quality, I no longer get a bounded uniform distribution.)

So, what I'd like: a version that (with at least moderate probability) continues for multiple rounds before agreement is reached; where the information communicated is some sort of simple summary of a current estimate, not the information used to get there; where the math at each step is simple enough that the game can be played by humans with pencil and paper at a reasonable speed.

Alternate mechanisms (like players alternate communication instead of communicating current states simultaneously) are also fine.

Comment by evand on I need a protocol for dangerous or disconcerting ideas. · 2015-07-14T16:53:11.258Z · score: 2 (2 votes) · LW · GW

But being unable to disengage from the Big Problems and live your little ordinary life is not heroism, and it actively gets in the way of solving any of those Big Problems.

Not my Big Problems; they get solved from doing just that.

How do you know? The question isn't whether obsessing fixes the problem; it's whether taking breaks speeds up the overall process. You don't need tons of hours to fix the problem; as you said earlier, a few minutes to explain the right insight is quite sufficient. What you actually need is the right few minutes of work, spent finding the right key insights.

Thinking longer about a problem is only helpful to the degree it produces new insights. As you've found, this can be very inefficient. If taking a break and not worrying about an unsolved problem increases the efficiency of future problem-solving even a little bit, it could well be worth it.

Comment by evand on Can You Give Support or Feedback for My Program to Alleviate Poverty? · 2015-06-26T03:34:05.335Z · score: 12 (12 votes) · LW · GW

To frame it from the "capitalist virtues" perspective...

If you squint a bit, your version sounds a lot like "we're going to create a lot of value for a lot of people, in a way that is neatly measured in dollars, and therefore we can't possibly make a for-profit company." That is... really weird, from where I sit.

Alternate perspective: if you're creating a lot of value for a lot of people, but you can't extract any of it to compensate yourself for the infrastructure you build and the risks you take building it, are you actually really sure you're creating as much value as you thought you were?

Comment by evand on Astronomy, space exploration and the Great Filter · 2015-04-20T14:09:05.331Z · score: 2 (2 votes) · LW · GW

Are you saying Dyson spheres are inefficient as computational substrate, as power collection, or both?

Because to me it looks like what you actually want is a Dyson sphere / swarm of solar collectors, powering a computer further out.

Comment by evand on Open Thread, Apr. 20 - Apr. 26, 2015 · 2015-04-20T03:18:00.271Z · score: 3 (3 votes) · LW · GW

What problems are you trying to solve? Knowing you have ADHD is useful because it offers insight into what solutions will work well. For example, it might offer suggestions as to what medications might produce useful results.

Comment by evand on Astronomy, space exploration and the Great Filter · 2015-04-20T01:04:46.724Z · score: 3 (3 votes) · LW · GW

IR from waste heat should look like cold-to-warm black bodies radiating a lot of heat over a large area. It should have relatively few spectral lines. It might look a bit like a brown dwarf, but the output from a normal star is huge compared to a brown dwarf, so it should look like a really huge brown dwarf, which our normal models don't offer a natural explanation for.

Comment by evand on Astronomy, space exploration and the Great Filter · 2015-04-19T21:15:39.382Z · score: 1 (3 votes) · LW · GW

Why would you expect to not see infrared emissions from them?

Comment by evand on Cooperative conversational threading · 2015-04-16T02:58:26.092Z · score: 2 (2 votes) · LW · GW

I like the index cards approach. I worry that the poker chips start making things distracting, which will discourage their use or reduce their effectiveness.

Comment by evand on Cooperative conversational threading · 2015-04-15T19:57:06.174Z · score: 8 (8 votes) · LW · GW

One technique we've used with moderate success is to pass a clipboard around. People can jot down notes, or conversational ideas that are tangentially related or unrelated. Sometimes that provides a convenient way for someone else to say "hey, what's this thing you wrote down about?".

It also could let you list your three things to talk about in a breadth-first manner rather than talking about each one sequentially.

It probably sounds like a better idea than it is in practice; the clipboard gets stuck when holders get distracted, or people still refrain from bringing things up, or whatever. But you might try it out anyway!

Meetup : Durham: Ugh Fields Followup

2014-04-28T00:32:15.869Z · score: 1 (2 votes)

Meetup : Ugh Fields

2014-04-16T16:32:51.417Z · score: 1 (4 votes)

Meetup : Durham: Stupid Rationality Questions

2014-03-18T17:11:40.975Z · score: 2 (3 votes)

Meetup : Social meetup in Raleigh

2014-01-21T14:27:35.517Z · score: 1 (2 votes)

Meetup : Durham: New Years' Resolutions etc.

2013-12-18T03:02:17.243Z · score: 0 (1 votes)

Meetup : Durham: Luminosity followup

2013-06-19T17:32:09.517Z · score: 2 (3 votes)

Rationality witticisms suitable for t-shirts or bumper stickers

2013-06-15T12:56:19.245Z · score: 3 (18 votes)

Meetup : Zendo and discussion

2013-06-05T00:35:23.107Z · score: 2 (5 votes)

Meetup : Durham: Luminosity (New location!)

2013-04-03T03:03:28.992Z · score: 3 (4 votes)

Meetup : Durham HPMoR Discussion, chapters 51-55

2013-04-03T02:56:22.161Z · score: 3 (4 votes)

Meetup : Durham: Status Quo Bias

2013-02-11T04:32:36.747Z · score: 4 (5 votes)

Meetup : Durham HPMoR Discussion, chapters 34-38

2013-02-09T03:59:45.544Z · score: 2 (3 votes)

Meetup : Durham HPMoR Discussion, chapters 30-33

2013-01-24T04:51:22.959Z · score: 3 (4 votes)

Meetup : Durham: Calibration Exercises

2013-01-16T21:31:02.538Z · score: 2 (3 votes)

Meetup : Durham HPMoR Discussion, chapters 27-29

2013-01-11T05:12:12.780Z · score: 1 (2 votes)

Meetup : Durham LW Meetup: Zendo

2012-12-30T19:30:34.095Z · score: 2 (3 votes)

Meetup : Durham HPMoR Discussion, chapters 24-26

2012-12-28T23:40:42.616Z · score: 2 (3 votes)

Meetup : Durham LW discussion

2012-12-18T18:38:13.094Z · score: 1 (2 votes)

Meetup : Durham HPMoR Discussion, chapters 21-23

2012-12-13T20:25:40.330Z · score: 1 (2 votes)

Meetup : Durham HPMoR Discussion, chapters 18-20

2012-11-30T17:16:21.069Z · score: 1 (2 votes)

Meetup : Durham HPMoR Discussion, chapters 15-17

2012-11-14T17:52:49.431Z · score: 2 (3 votes)

Meetup : Durham LW: Technical explanation, meta

2012-10-31T18:19:46.577Z · score: 1 (2 votes)

Meetup : Durham HPMoR discussion, ch 12-14

2012-10-31T18:13:57.163Z · score: 1 (2 votes)

Meetup : Durham Meetup: Article discussions

2012-10-16T23:00:14.776Z · score: 1 (2 votes)

Meetup : Durham HPMoR Discussion group

2012-10-16T22:34:31.727Z · score: 1 (2 votes)

Meetup : Durham NC HPMoR Discussion, chapters 4-7

2012-09-26T03:53:48.227Z · score: 4 (5 votes)

Meetup : Research Triangle Less Wrong

2012-09-19T20:03:21.463Z · score: 4 (5 votes)