Chuckling a Bit at Microsoft and the PCFG Formalism 2017-03-20T19:37:38.733Z
John Tooby on Coalitional Instincts 2017-03-05T20:52:20.345Z
[ACTIVITY]: Exploratory Visit to the Bay Area 2014-07-21T19:49:46.511Z
Job Postings? 2011-09-02T03:01:35.670Z
Significance of Compression Rate Method 2010-05-30T03:50:25.401Z
Link: Strong Inference 2010-05-23T02:49:38.419Z
Development of Compression Rate Method 2010-05-20T17:11:34.085Z
Preface to a Proposal for a New Mode of Inquiry 2010-05-17T02:11:02.211Z
Two Challenges 2010-02-14T08:31:47.668Z
Link: Interview with Vladimir Vapnik 2009-07-25T13:36:52.175Z
Link: The Case for Working With Your Hands 2009-05-28T14:16:03.492Z


Comment by Daniel_Burfoot on Feedback on LW 2.0 · 2017-10-01T17:17:11.189Z · LW · GW

First, I appreciate the work people have done to make LW 2 happen. Here are my notes:

  1. Strong feeling - the links and descriptions of the Sequences, the Codex, and HPMOR (while good) should not be at the top of the page. The top should be the newest material.
  2. Please please please include a "hide subthread" option to collapse a comment and all its responses. That is a dealbreaker for me: if a site doesn't have that feature, I won't read the comments.
  3. Current LW has a really nice alternating color scheme for comment/reply. One comment will have a grey background, the comment below it will have a beige background. That is a key feature for visually parsing a comment thread.
  4. I liked the concept of having a main section and a discussion section, where the bar for posting in the latter is lower. For whatever reason, people seem to get angry if you post something that they feel is low quality or not relevant.
  5. I can't put my finger on it exactly, but somehow I don't quite like the default font. It may be that I like a different font for reading on dead tree paper vs on a computer screen?
  6. It may be slightly evil, but the karma display on the right side of the screen makes the site more addictive, because people love to see if they get upvotes or comment replies.
  7. It seems weird to allow people to upvote/downvote an article right from the home page. Do you really want people to vote on an article without reading it?
Comment by Daniel_Burfoot on Open thread, September 11 - September 17, 2017 · 2017-09-12T16:36:08.551Z · LW · GW

Why do people see Mars as a better target for human colonization than the Moon? Most comments on lunar colonization seem to refer to two facts:

  1. the Moon has quite low gravity, so it cannot maintain an atmosphere for a long period of time.
  2. the Moon has no magnetic field, so it will not protect us from solar radiation.

In my mind, both of these problems can be solved by a ceiling or dome structure. The ceiling both retains the atmosphere and also blocks harmful radiation. Note that a failure in the ceiling won't be catastrophic: the atmosphere won't drain rapidly, and the amount of radiation exposure per unit time isn't disastrously high even without the ceiling.

Comment by Daniel_Burfoot on Open thread, September 4 - September 10, 2017 · 2017-09-09T04:13:14.112Z · LW · GW

Very nice, thanks. Ahh... Haskell really is quite pretty.

Comment by Daniel_Burfoot on Open thread, September 4 - September 10, 2017 · 2017-09-09T04:11:52.370Z · LW · GW

Good analysis, thanks. I buy the first two points. I'd be shocked to see an implementation that actually makes use of the lower metadata requirements. Are there languages that provide a boolean primitive that uses a single bit of memory instead of a full byte? Also I don't understand what you mean by persistence.

Comment by Daniel_Burfoot on New business opportunities due to self-driving cars · 2017-09-07T04:57:18.858Z · LW · GW

-1, this is pointlessly negative. There's a disclaimer at the top (so it's not like he's claiming false authority), the title is appropriate (so it's not like you were tricked into clicking on the article), and it's reasonably on-topic because LW people are in the software/AI/entrepreneurship space. Sure, maybe most of the proposals are far-fetched, but if one of the ideas sparks an idea that sparks an idea, the net value could be very positive.

Comment by Daniel_Burfoot on Open thread, September 4 - September 10, 2017 · 2017-09-04T21:57:11.276Z · LW · GW

Has anyone studied the Red Black Tree algorithms recently? I've been trying to implement them using my Finite State technique that enables automatic generation of flow diagrams. This has been working well for several other algorithms.

But the Red Black tree rebalancing algorithms seem ridiculously complicated. Here is an image of the deletion process (extracted from this Java code) - it's far more complicated than an algorithm like MergeSort or HeapSort, and that only shows the deletion procedure!

I'm weighing two hypotheses:

  1. Keeping a binary tree balanced in O(log N) time per operation is an intrinsically complex task.
  2. There is some much simpler method to efficiently maintain balance in a binary tree, but nobody bothered looking for it after the RB tree algorithms and analysis were published.

I'm leaning toward the latter theory. It seems to me that most of the other "elementary" algorithms of computer science are comparatively simple, so the weird overcomplexity of the tool we use for binary tree balancing is some kind of oversight. Here is the Wiki page on RB trees - notice how the description of the algorithm is extremely hard to understand.
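For what it's worth, hypothesis 2 has at least one well-known candidate: the treap, which keeps a binary search tree balanced in expected O(log N) per operation using random heap priorities and just two rotation helpers, with none of the red-black case analysis. A minimal insertion sketch in Python (illustrative only, unrelated to the linked Java code):

```python
import random

class Node:
    def __init__(self, key):
        self.key = key
        self.prio = random.random()  # random priority gives expected O(log N) depth
        self.left = None
        self.right = None

def rotate_right(n):
    l = n.left
    n.left, l.right = l.right, n
    return l

def rotate_left(n):
    r = n.right
    n.right, r.left = r.left, n
    return r

def insert(n, key):
    """Standard BST insert, then rotate upward if the heap property is violated."""
    if n is None:
        return Node(key)
    if key < n.key:
        n.left = insert(n.left, key)
        if n.left.prio > n.prio:
            n = rotate_right(n)
    else:
        n.right = insert(n.right, key)
        if n.right.prio > n.prio:
            n = rotate_left(n)
    return n
```

Deletion is similarly short: rotate the node down until it is a leaf, then unlink it. Whether that counts as "much simpler" or just "simplicity bought with randomness" is a fair question.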

Comment by Daniel_Burfoot on Open thread, August 14 - August 20, 2017 · 2017-08-15T04:52:53.223Z · LW · GW

Can anyone offer a linguistic explanation for the following phenomenon related to pronoun case and partial determiners:

  1. None of us want to go to school tomorrow.
  2. None of we want to go to school tomorrow (**).
  3. We want to go to school tomorrow.
  4. Us want to go to school tomorrow (**).
Comment by Daniel_Burfoot on Open thread, August 7 - August 13, 2017 · 2017-08-08T23:24:59.978Z · LW · GW

Theory of programming style incompatibility: it is possible for two or more engineers, each of whom is individually highly skilled, to be utterly incapable of working together productively. In fact, the problem of style incompatibility might actually increase with the skill level of the programmers.

This shouldn't be that surprising: Proust and Hemingway might both be gifted writers capable of producing beautiful novels, but a novel co-authored by the two of them would probably be terrible.

Comment by Daniel_Burfoot on Open thread, July 31 - August 6, 2017 · 2017-08-06T22:47:17.655Z · LW · GW

I haven't written it up, though you can see my parser in action here.

One key concept in my system is the Theta Role and its associated uniqueness rule: a phrase can have at most one structure for each role (subject, object, determiner, etc).

I don't have much to say about teaching methods, but I will say that if you're going to teach English grammar, you should know the correct grammatical concepts that actually determine English grammar. My research is an attempt to find the correct concepts. There are some things that I'm confident about and some areas where the system needs work.

One very important aspect of English grammar is argument structure. Different verbs characteristically can and cannot take various types and combinations of arguments, such as direct objects, indirect objects, infinitive complements, and sentential complements. For example, the word "persuade" takes a sentential (that-) complement, but only when also combined with a direct object ("I will persuade [him] that the world is flat" is incorrect without the direct object). In contrast, the verb "know" can take either a direct object or a that-complement, but not both. To speak English fluently, you need to memorize all these combinations, but before you memorize them, you need to know that the concept exists.
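The argument-structure facts above can be represented concretely as a table of licensed frames per verb. A toy sketch (the frame inventory and the lexical entries are my illustration, not Ozora's actual lexicon):

```python
# Each verb lists the argument-frame combinations it licenses.
# "dobj" = direct object, "that" = sentential (that-) complement.
FRAMES = {
    "persuade": [("dobj",), ("dobj", "that")],  # that-complement requires a direct object
    "know":     [("dobj",), ("that",)],         # object OR that-complement, never both
    "sleep":    [()],                           # intransitive: no arguments at all
}

def licensed(verb, *args):
    """Check whether a verb licenses a given combination of arguments."""
    return tuple(sorted(args)) in {tuple(sorted(f)) for f in FRAMES.get(verb, [])}
```

So `licensed("persuade", "dobj", "that")` holds, while `licensed("persuade", "that")` and `licensed("know", "dobj", "that")` do not, matching the judgments in the paragraph above. A fluent speaker has, in effect, memorized a table like this for every verb.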

Comment by Daniel_Burfoot on Open thread, July 31 - August 6, 2017 · 2017-07-31T23:04:18.317Z · LW · GW

Against Phrasal Taxonomy Grammar, an essay about how any approach to grammar theory based on categorizing every phrase in terms of a discrete set of categories is doomed to fail.

Comment by Daniel_Burfoot on Open thread, July 24 - July 30, 2017 · 2017-07-25T19:22:50.325Z · LW · GW

In terms of strategy, I recommend thinking about going to work at the Montreal Institute for Learning Algorithms. They recently received a grant from OpenPhil to do AI safety research. I can personally recommend the two professors at McGill (Joelle Pineau and Doina Precup). Since you are Russian, you should be able to handle the cold :-)

Comment by Daniel_Burfoot on Sam Harris and Scott Adams debate Trump: a model rationalist disagreement · 2017-07-20T16:06:12.538Z · LW · GW

Continuing with Adams' theme of congratulating himself on making correct predictions, I'll point out that a year ago I correctly predicted both that Adams wanted Trump to win, and that he planned to capitalize on the prediction, if it came true, by writing a book:

My guess is that Adams is hoping that Trump wins the election, because he will then write a book about persuasion and how Trump's persuasion skills helped him win. He already has a lot of this material on his blog. In that scenario he can capitalize on his correct prediction, which seemed radical at the time, to generate a lot of publicity for the book.

Both of these claims seem to be confirmed by the podcast. Maybe I should write a book!

Comment by Daniel_Burfoot on Open thread, Jul. 17 - Jul. 23, 2017 · 2017-07-18T20:55:42.248Z · LW · GW

Does anyone have good or bad impressions of Calico Labs, Human Longevity, or other hi-tech anti-aging companies? Are they good places to work, are they making progress, etc?

Comment by Daniel_Burfoot on Against lone wolf self-improvement · 2017-07-08T22:19:56.428Z · LW · GW

I agree with you in the context of entrepreneurship, but the OP was talking about self improvement. The best strategy for learning or self-improving may be very different from the best strategy for building a company.

Comment by Daniel_Burfoot on Against lone wolf self-improvement · 2017-07-07T16:35:17.149Z · LW · GW

This is a mean vs. median, or Mediocristan vs. Extremistan, issue. Most people can't pull off the lone-wolf approach, but if you can, you will probably be much more successful than the average person.

Think of it like this. Say you wanted to become a great writer. You could go to university and plod through a major in English literature. That will reliably give you a middling good skill at writing. Or you could drop out and spend all your time reading sci-fi novels, watching anime, and writing fan fiction. Now most people who do that will end up terrible writers. But when someone like Eliezer does it, the results are spectacular.

Furthermore, because of the Power Law and the "Average is Over" idea, most of the impact will come from the standout successes.

Comment by Daniel_Burfoot on Open thread, Jul. 03 - Jul. 09, 2017 · 2017-07-04T03:05:26.669Z · LW · GW

I am working on a software tool that allows programmers to automatically extract FSM-like sequence diagrams from their programs (if they use the convention required by the tool).

Here is a diagram expressing the Merge Sort algorithm.

Here is the underlying source code.

I believe this kind of tool could be very useful for code documentation purposes. Suggestions or improvements welcome.

Comment by Daniel_Burfoot on What useless things did you understand recently? · 2017-07-01T17:58:28.231Z · LW · GW

There are lots of cacti that are mostly hairy/fuzzy instead of pointy.

In terms of air flow protection purchased vs biological effort expended, I'm not sure a leaf is better than a spike.

Comment by Daniel_Burfoot on What useless things did you understand recently? · 2017-06-29T00:05:30.120Z · LW · GW

For a long time it was odd to me that cacti have lots of spikes and big thorns. I supposed that the goal was to ward off big ruminants like cows, but that doesn't really make much sense, since the desert isn't really overflowing with big animals that eat a lot of plants.

It turns out that protection from predators is only a secondary goal. The main goal is protection from the environment. The spikes capture and slow the air moving around the plant, to preserve moisture and protect against the heat.

Comment by Daniel_Burfoot on Stupid Questions June 2017 · 2017-06-16T23:25:42.178Z · LW · GW

Given that many of the most successful countries are small and self-contained (Singapore, Denmark, Switzerland, Iceland, arguably the other Scandinavian countries), and given the disasters visited upon humanity by large unified nation-states, why are people so attached to the idea of large-scale national unity?

Comment by Daniel_Burfoot on Announcing AASAA - Accelerating AI Safety Adoption in Academia (and elsewhere) · 2017-06-15T22:43:28.734Z · LW · GW

I really don't think you should try to convince mid-career professionals to switch careers to AI safety research. Instead, you should focus on recruiting talented young people, ideally people who are still in university or at most a few years out.

Comment by Daniel_Burfoot on Open thread, June. 12 - June. 18, 2017 · 2017-06-12T15:50:41.901Z · LW · GW

Does anyone follow the academic literature on NLP sentence parsing? As far as I can tell, they've been writing the same paper, with minor variations, for the last ten years. Am I wrong about this?

Comment by Daniel_Burfoot on We are the Athenians, not the Spartans · 2017-06-11T17:30:12.524Z · LW · GW

We're neither Athenians nor Spartans. Athens and Sparta were city-states. Greek culture thrived because Greece is a mountainous archipelago that prevented large empires from forming. The Greek city-states were constantly at war with each other and with the outside world, and so they had to develop strong new ideas to survive.

You mentioned the Netherlands, which is quite similar in the sense that it was a small country with strong threatening neighbors, but still became successful because of its good social technology. The story of Europe in general is basically the same as Greece. The complexity of European geography meant that after the fall of Rome no power could dominate the whole continent. So Europe was made up of small independent political entities that were constantly fighting each other. This competitive environment meant that they were forced to innovate good social technology.

Comment by Daniel_Burfoot on Bet or update: fixing the will-to-wager assumption · 2017-06-07T18:12:53.388Z · LW · GW

Yes, definitely. There is something about the presence of other agents with differing beliefs that changes the structure of the mathematics in a deep way.

P(X) is somehow very different from P(X|another agent is willing to take the bet).

How about using a "bet" against the universe instead of other agents? This is easily concretized by talking about data compression. If I do something stupid and assign probabilities badly, then I suffer from increased codelengths as a result, and vice versa. But nobody else gains or loses because of my success or failure.
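The "bet against the universe" can be made concrete: under an optimal code, an outcome to which you assigned probability p costs -log2(p) bits. A quick sketch of the miscalibration penalty (the coin bias and the candidate models are made-up numbers):

```python
import math
import random

def avg_codelength(p_model, p_true=0.7, n=100_000, seed=0):
    """Average bits per symbol when coding a biased coin using model p_model."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        heads = rng.random() < p_true
        q = p_model if heads else 1.0 - p_model
        total += -math.log2(q)  # cost of the observed outcome under the model
    return total / n
```

The honest model (p = 0.7) pays about 0.88 bits per symbol, while the miscalibrated model (p = 0.5) pays a full bit, and the score depends on nobody else's beliefs: assigning probabilities badly directly costs you codelength.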

Comment by Daniel_Burfoot on Probabilistic Programming and Bayesian Methods for Hackers · 2017-05-23T16:39:56.334Z · LW · GW

Can someone give me an example problem where this particular approach to AI and reasoning hits the ball out of the park? In my mind, it's difficult to justify a big investment in learning a new subfield without a clear use case where the approach is dramatically superior to other methods.

To be clear, I'm not looking for an example of where the Bayesian approach in general works, I'm looking for an example that justifies the particular strategy of scaling up Bayesian computation, past the point where most analysts would give up, by using MCMC-style inference.

(As an example, deep learning advocates can point to the success of DL on the ImageNet challenge to motivate interest in their approach).
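For concreteness, the "scaling up by MCMC" strategy amounts to sampling from a posterior you can only evaluate up to a normalizing constant. A minimal random-walk Metropolis sketch (the target density, step size, and chain length are arbitrary illustrations, not a recommendation):

```python
import math
import random

def metropolis(log_density, x0=0.0, steps=50_000, step_size=1.0, seed=0):
    """Random-walk Metropolis: accept a move with probability min(1, p(x')/p(x))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        proposal = x + rng.gauss(0.0, step_size)
        if math.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal  # accept; otherwise stay at the current point
        samples.append(x)
    return samples

# Unnormalized standard normal: the normalizing constant is never needed.
samples = metropolis(lambda x: -0.5 * x * x)
```

The point of the machinery is exactly that only the unnormalized log-density is required, which is what lets analysts push past models where the integrals are intractable.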

Comment by Daniel_Burfoot on Thoughts on civilization collapse · 2017-05-04T19:19:31.141Z · LW · GW

Most of the pessimistic people I talk to don't think the government will collapse. It will just get increasingly stagnant, oppressive and incompetent, and that incompetence will make it impossible for individual or corporate innovators to do anything worthwhile. Think European-style tax rates, with American-style low quality of public services.

There will also be a blurring of the line between the government and big corporations. Corporations will essentially become extensions of the bureaucracy. Because of this they will never go out of business, and they will also never innovate. Think of a world where all corporations are about as competent as Amtrak.

Comment by Daniel_Burfoot on Open thread, May. 1 - May. 7, 2017 · 2017-05-03T01:02:01.299Z · LW · GW

LessWrong: kind of an odd place to find references to Christian ethical literature.

Comment by Daniel_Burfoot on Open thread, Apr. 24 - Apr. 30, 2017 · 2017-04-26T05:21:49.036Z · LW · GW

and that's about it.

We can agree to disagree, but my view is that the US has dozens or hundreds of problems we can't solve - education, criminal justice, the deficit, the military-industrial complex - because the government is paralyzed by partisan hatred.

Comment by Daniel_Burfoot on Open thread, Apr. 24 - Apr. 30, 2017 · 2017-04-25T23:48:54.944Z · LW · GW
  1. I live in Berkeley, where there are literally armed gangs fighting each other in the streets.
  2. Stability isn't intrinsically valuable. The point is that we know our current civilizational formula is a pretty good one for innovation and most others aren't, so we should stick to the current formula more or less.
  3. My recommendation is a political ceasefire. Even if we could just decrease the volume of partisan hate speech, without solving any actual problems, that seems like it would have a lot of benefits.
Comment by Daniel_Burfoot on Open thread, Apr. 24 - Apr. 30, 2017 · 2017-04-25T22:52:21.338Z · LW · GW

Claim: EAs should spend a lot of energy and time trying to end the American culture war.

America, for all its terrible problems, is the world's leading producer of new technology. Most of the benefits of the new technology actually accrue to people who are far removed from America in both time and space. Most computer technology was invented in America, and that technology has already done worlds of good for people in places like China, India, and Africa; and it's going to continue to help people all over the world in the centuries and millennia to come. Likewise for medical technology. If an American company discovers a cure for cancer, that will benefit people all over the globe... and it will also benefit the citizens of Muskington, the capital of the Mars colony, in the year 4514.

It should be obvious to any student of history that most societies, in most historical eras, are not very innovative. Europe in the 1000s was not very innovative. China in the 1300s was not very innovative, India in the 1500s was not very innovative, etc etc. France was innovative in the 1700s and 1800s but not so much today. So the fact that the US is innovative today is pretty special: the ability to innovate is a relatively rare property of human societies.

So the US is innovative, and that innovation is enormously beneficial to humanity, but it's naive to expect that the current phase of American innovation will last forever. And in fact there are a lot of signs that it is about to die out. Certainly if there were some large scale social turmoil in the US, like revolution, civil war, or government collapse, it would pose a serious threat to America's ability to innovate.

That means there is an enormous ethical rationale for trying to help American society continue to prosper. There's a first-order rationale: Americans are humans, and helping humans prosper is good. But more important is the second-order rationale: Americans are producing technology that will benefit all humanity for all time.

Currently the most serious threat to the stability of American society is the culture war: the intense partisan political hatred that characterizes our political discourse. EAs could have a big impact by trying to reduce partisanship and tribalism in America, thereby helping to lengthen and preserve the era of American innovation.

Comment by Daniel_Burfoot on April '17 I Care About Thread · 2017-04-19T02:13:42.657Z · LW · GW

I really want self-driving cars to be widely adopted as soon as possible. There are many reasons; the one that occurred to me today while walking down the street is: look at all the cars on the street. Now imagine all the parked cars disappear, and only the moving cars remain. A lot less clutter, right? What could we do with all that space? That's the future we could have if SDCs appear (assuming that most people will use services like Lyft/Uber with robotic drivers instead of owning their own car).

Comment by Daniel_Burfoot on What conservatives and environmentalists agree on · 2017-04-08T20:14:35.833Z · LW · GW

I agree with the broad sentiment, but I think it's increasingly unrealistic to believe that the liberal/conservative distinction is based on a fundamental philosophical difference instead of just raw partisan tribal hatred. In theory people would develop an ethical philosophy and then join the party that best represents the philosophy, but in practice people pick a tribe and then adopt the values of that tribe.

Comment by Daniel_Burfoot on OpenAI makes humanity less safe · 2017-04-05T23:09:24.547Z · LW · GW

If there's anything we can do now about the risks of superintelligent AI, then OpenAI makes humanity less safe.

I feel quite strongly that people in the AI risk community are overly affected by the availability or vividness bias relating to an AI doom scenario. In this scenario some groups get into an AI arms race, build a general AI without solving the alignment problem, the AGI "fooms" and then proceeds to tile the world with paper clips. This scenario could happen, but some others could also happen:

  • An asteroid is incoming and going to destroy Earth. AI solves a complex optimization problem to allow us to divert the asteroid.
  • Terrorists engineer a virus to kill all persons with genetic trait X. An AI agent helps develop a vaccine before billions die.
  • By analyzing systemic risk in the markets, an AI agent detects and allows us to prevent the Mother of all Financial Meltdowns, that would have led to worldwide economic collapse.
  • An AI agent helps SpaceX figure out how to build a Mars colony for two orders of magnitude less money than otherwise, thereby enabling the colony to be built.
  • An AI system trained on vast amounts of bioinformatics and bioimaging data discovers the scientific cause of aging and also how to prevent it.
  • An AI climate analyzer figures out how to postpone climate change for millennia by diverting heat into the deep oceans, and gives us an inexpensive way to do so.
  • etc etc etc

These scenarios are equally plausible, involve vast benefit to humanity, and require only narrow AI. Why should we believe that these positive scenarios are less likely than the negative scenario?

Comment by Daniel_Burfoot on Chuckling a Bit at Microsoft and the PCFG Formalism · 2017-03-20T21:11:37.585Z · LW · GW

Good catch. Adverbial attachment is really hard, because there aren't a lot of rules about where adverbs can go.

Actually, Ozora's parse has another small problem, which is that it interprets "complex" as an NN with a "typeadj" link, instead of as a JJ with an "adject" link. The typeadj link is used for noun-noun pairings such as "police officer", "housing crisis", or "oak tree".

For words that can function as both NN and JJ (eg "complex"), it is quite hard to disambiguate the two patterns.

Comment by Daniel_Burfoot on Open thread, March 13 - March 19, 2017 · 2017-03-14T17:14:43.565Z · LW · GW

Why is it so hard to refrain from irrational participation in political arguments? One theory is that in the EEA, if you overheard some people talking covertly about political issues, there was a good chance that they were literally plotting against you. In a tribal setting, if you're being left out of the political conversation, you're probably going to be the victim of the political change being discussed. So we've probably evolved a mental module that causes us to be hyperaware of political talk, and when we hear political talk we don't like, to jump in and try to disrupt it.

Anyone have any good mind hacks to help stay out of political conversations?

Comment by Daniel_Burfoot on Open Thread, March. 6 - March 12, 2017 · 2017-03-12T22:42:18.436Z · LW · GW

Sorry to hear that, I know a lot of LW-adjacent people were involved.

Is there a postmortem discussion or blog post anywhere?

Comment by Daniel_Burfoot on An Intuition on the Bayes-Structural Justification for Free Speech Norms · 2017-03-10T02:40:36.170Z · LW · GW

positive-sum information-conveying component and a zero-sum social-control/memetic-warfare component.

Style complaint: did you really need to use five hyphenated words in one line in the first sentence?

Comment by Daniel_Burfoot on Open Thread, March. 6 - March 12, 2017 · 2017-03-07T05:54:25.082Z · LW · GW

A lesson on the linguistic concept of argument structure, with special reference to observational verbs (see/hear/watch/etc) and also the eccentric verb "help".

Comment by Daniel_Burfoot on John Tooby on Coalitional Instincts · 2017-03-05T20:55:08.521Z · LW · GW

The more biased away from neutral truth, the better the communication functions to affirm coalitional identity, generating polarization in excess of actual policy disagreements. Communications of practical and functional truths are generally useless as differential signals, because any honest person might say them regardless of coalitional loyalty.

Comment by Daniel_Burfoot on Open Thread, Feb. 27 - March 5, 2017 · 2017-02-27T17:19:48.627Z · LW · GW

Peter McCluskey wrote a review of my book, and I wrote a response here. Thanks to Peter for writing the review!

Comment by Daniel_Burfoot on Allegory On AI Risk, Game Theory, and Mithril · 2017-02-14T18:25:54.087Z · LW · GW

If you really believe in this allegory, you should try to intervene before people choose what research field to specialize in. You are not going to convince people to give up their careers in AI after they've invested years in training. But if you get to people before they commit to advanced training, it should be pretty easy to divert their career trajectory. There are tons of good options for smart idealistic young people who have just finished their undergraduate degrees.

Comment by Daniel_Burfoot on Allegory On AI Risk, Game Theory, and Mithril · 2017-02-14T18:14:20.934Z · LW · GW

"But, Bifur, the prophecies are not that clear. It's possible the Balrog will annihilate us, but it's also possible he will eradicate poverty, build us dwarf-arcs to colonize other planets, and grant us immortality. Our previous mining efforts have produced some localized catastrophes, but the overall effect has been fantastically positive, so it's reasonable to believe continued mining will produce even more positive outcomes."

Comment by Daniel_Burfoot on Interview with Nassim Taleb 'Trump makes sense to a grocery store owner' · 2017-02-09T23:20:52.844Z · LW · GW

always regarded Taleb as a half-crackpot

My guess is Taleb wouldn't be offended by this, and would in fact argue that any serious intellectual should be viewed as a half-crackpot.

Serious intellectuals get some things right and some things wrong, but they do their thinking independently, and therefore their mistakes are uncorrelated with others'. That means their input is a valuable contribution to an ensemble. You can make a very strong aggregate prediction by calling up your half-crackpot friends, asking their opinions, and forming a weighted average.

Pseudo-intellectuals, whom Taleb calls IYIs, are just regurgitating what other people say. That means their opinions are all highly correlated. The ensemble prediction obtained by asking a lot of pseudo-intellectuals isn't much stronger than the single opinion of just one such person.

There is an ethical component to this dichotomy. A serious intellectual is risking his reputation (being perceived as a crackpot) to add aggregate strength to the collective wisdom. In other words, the serious intellectual is accepting individual fragility to make the collective antifragile. In contrast, the pseudo-intellectual seeks to protect himself from risk while making the collective fragile, since the collective opinion of a group of IYIs is very likely to be wrong even if (especially if!) there are many IYIs and they all agree.
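The correlation point is easy to demonstrate numerically: averaging n independent errors shrinks the error by roughly a factor of sqrt(n), while averaging n copies of one shared error gains nothing. A toy simulation (all numbers are illustrative):

```python
import random
import statistics

def ensemble_error(n_experts=10, n_trials=2000, correlated=False, seed=0):
    """RMS error of an equal-weight average of expert estimates of a true value 0."""
    rng = random.Random(seed)
    sq_errors = []
    for _ in range(n_trials):
        if correlated:
            shared = rng.gauss(0, 1)  # IYIs: everyone repeats one shared opinion
            estimates = [shared] * n_experts
        else:
            # Independent thinkers: each expert makes his own uncorrelated mistake.
            estimates = [rng.gauss(0, 1) for _ in range(n_experts)]
        avg = sum(estimates) / n_experts
        sq_errors.append(avg * avg)
    return statistics.mean(sq_errors) ** 0.5
```

With ten experts, the independent ensemble's RMS error comes out near 1/sqrt(10), about a third of the correlated ensemble's, even though every individual expert is equally unreliable.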

Comment by Daniel_Burfoot on Open thread, Feb. 06 - Feb. 12, 2017 · 2017-02-06T23:50:51.140Z · LW · GW

Good idea, thanks!

Comment by Daniel_Burfoot on Open thread, Feb. 06 - Feb. 12, 2017 · 2017-02-06T19:13:23.628Z · LW · GW

Request for programmers: I have developed a new programming trick that I want to package up and release as open-source. The trick gives you two nice benefits: it auto-generates a flow-chart diagram description of the algorithm, and it gives you steppable debugging from the command line without an IDE.

The main use case I can see is when you have some code that is used infrequently (maybe once every 3 months), and by default you need to spend an hour reviewing how the code works every time you run it. Or maybe you want to make it easier for coworkers to get a high-level understanding of the program, without having to dig into the actual source code. In these scenarios, the autogenerated flow diagram becomes quite useful. Conceptually, it is also nice to be able to look at the algorithm states and control flow as you are developing it, to clarify your own thinking.
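I don't know the exact convention the tool requires, but here is one way such a convention could look (purely hypothetical; all names invented): each step is a function that returns the name of the next step, so a single driver can both execute the machine and record the transition edges for a flow diagram.

```python
def load(ctx):
    ctx["items"] = [3, 1, 2]
    return "sort"

def sort(ctx):
    ctx["items"].sort()
    return "done"

STATES = {"load": load, "sort": sort}

def run(start, ctx, trace=None):
    """Execute the machine; the trace doubles as the flow-diagram edge list."""
    state = start
    while state != "done":
        nxt = STATES[state](ctx)
        if trace is not None:
            trace.append((state, nxt))  # one edge of the auto-generated diagram
        state = nxt
    return ctx
```

Running with a trace list yields the edges ("load", "sort") and ("sort", "done"), which could be emitted directly as Graphviz DOT; stepping the loop one iteration at a time gives the IDE-free debugger.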

Before releasing the tool I want to code up some examples that showcase how the technique works. I was hoping people could help me out by contributing some ideas for good test problems. The ideal problem, in my mind, is one where the difficulty comes not from any deep conceptual requirements, but rather from the presence of many different program states, options, subroutines, or special cases that interact in a way that is hard to remember or reason about without assistance.

Comment by Daniel_Burfoot on Metrics to evaluate a Presidency · 2017-01-24T15:58:27.009Z · LW · GW

reduction in male female differences in lifespan

The lifespan gap may be enforced by biology, but it seems wildly unjust to me that retirement-related social programs like Social Security and Medicare do not take the life expectancy gap into account. For example, if the life expectancy gap is 5 years, the Medicare age of eligibility should be 68 for women and 63 for men, so that both sexes get the same number of years of expected coverage.

Comment by Daniel_Burfoot on Open thread, Jan. 23 - Jan. 29, 2017 · 2017-01-23T23:11:10.263Z · LW · GW

How do you weight the opinion of people whose arguments you do not accept? Say you have 10 friends who all believe with 99% confidence in proposition A. You ask them why they believe A, and the arguments they produce seem completely bogus or incoherent to you. But perhaps they have strong intuitive or aesthetic reasons to believe A, which they simply cannot articulate. Should you update in favor of A or not?

Comment by Daniel_Burfoot on Universal Hate · 2017-01-18T18:41:45.666Z · LW · GW

Everyone has every right to feel as pissed off and angry at this bullshit that’s coming down the pike as they want.

This really is not true. You have a right to be annoyed, but if your ideology causes you to actually hate millions of your fellow American citizens, then I submit you have an ethical obligation to emigrate.

Comment by Daniel_Burfoot on Open thread, Jan. 02 - Jan. 08, 2017 · 2017-01-03T23:42:32.438Z · LW · GW

Rationality principle, learned from strategy board games:

In some games there are special privileged actions you can take just once or twice per game. These actions are usually quite powerful, which is why they are restricted. For example, in Tigris and Euphrates, there is a special action that allows you to permanently destroy a position.

So the principle is: if you get to the end of the game and find you have some of these "power actions" left over, you know (retrospectively) that you were too conservative about using them. This is true even if you won; perhaps if you had used the power actions you would have won sooner.

Generalizing to real life, if you get to the end of some project or challenge, and still have some "power actions" left over, you were too conservative, even if the project went well and/or you succeeded at the challenge.

What are real life power actions? Well, there are a lot of different interpretations, but one is using social capital. You can't ask your rich grand-uncle to fund your startup every six months, but you can probably do it once or twice in your life. And even if you think you can succeed without asking, you still might want to do it, because there's not much point in "conserving" this kind of power action.

Comment by Daniel_Burfoot on January 2017 Media Thread · 2017-01-01T15:27:36.400Z · LW · GW

Evicted, by Matthew Desmond, is an amazing work of ethnographic research into the lives of the urban poor and in particular their experiences with housing. Most importantly, to me it feels real: nothing is sugarcoated. The poor people are incredibly irresponsible, but the landlords are also greedy, and the government agencies are incompetent and counterproductive. One typical event sequence goes something like this: a tenant living in a decrepit unit calls the building inspector to report some egregious violation. The inspector arrives and promptly demands that the landlord make a list of expensive repairs. The landlord retaliates by evicting the tenant because of her heroin habit.

Comment by Daniel_Burfoot on How does personality vary across US cities? · 2016-12-21T13:45:10.895Z · LW · GW

Five Factor Model (FFM) ... the model is founded on the lexical hypothesis:

I notice I am confused. I was sure that the FFM came out of doing the following simple procedure:

  1. Give people a many-item personality survey
  2. Do a PCA of the resulting data
  3. Keep the top 5 eigenvectors
  4. Label them with reasonably accurate adjectives that seem to describe the general drift of the vector

How wrong is this? How important is the "lexical hypothesis" part?
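Steps 1-3 of the procedure above can at least be made concrete with power iteration on toy 2-D data (pure Python, top component only; this is just my sketch of the standard pipeline, not a claim about how the FFM was actually derived):

```python
import random

def top_component(rows, iters=200, seed=0):
    """Power iteration on the covariance matrix: returns the top eigenvector."""
    d = len(rows[0])
    means = [sum(r[i] for r in rows) / len(rows) for i in range(d)]
    centered = [[r[i] - means[i] for i in range(d)] for r in rows]
    rng = random.Random(seed)
    v = [rng.gauss(0, 1) for _ in range(d)]
    for _ in range(iters):
        # w = C v, where C is the (unnormalized) covariance matrix
        w = [0.0] * d
        for row in centered:
            proj = sum(row[i] * v[i] for i in range(d))
            for i in range(d):
                w[i] += proj * row[i]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

On survey items that co-vary (here, toy points along the line y = x), the top component comes out along (1, 1)/sqrt(2); repeating with deflation gives the next components, and step 4 (labeling each vector with an adjective) is where the lexical hypothesis would enter.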