Comments

Comment by Riothamus on What's up with Arbital? · 2017-04-14T19:02:18.889Z · LW · GW

I had not imagined a strict barter system or scaling of paid content; the objective in both cases is only to make up the difference between the value content producers want and the value they expect for the first wave.

The point of diminishing returns would be hard to judge for paid content, but perhaps the two strategies could work together: survey prospective content producers for the content they want to see, and then pay for the most popular subjects to draw the rest. Once you have enough content established to draw the first wave of voluntary content producers, everything else can build off of that for no/minimal further investment.

That being said, it would probably be a good idea to keep surveying, and perhaps to keep paying for content on a case-by-case basis, say to alleviate a dry spell of contributions or when some particular thing is in high demand but no one is volunteering to produce it.

What about a contest with a cash award of some kind? This could drive a lot of content for a fixed upfront investment, and then you would also have the ability to select among the entries for the appropriate style and nuance, which reduces the risk of getting unsatisfactory work.

Comment by Riothamus on What's up with Arbital? · 2017-04-12T18:28:46.560Z · LW · GW

I see finding high-quality content producers was a problem; you reference math explanations specifically.

I notice that people are usually good at providing thorough and comprehensible explanations in only their chosen domains. That being said, people are interested in subjects beyond those they have mastered.

I wonder if it is possible to ask quality content producers what content they would like to passively consume, and then approach whole networks of content producers at once. For example: find a game theory explainer who wants to read about complex analysis; a complex analysis explainer who wants to read about music theory; a music theory explainer who wants to read about game theory.

Then you can approach all three at once with the premise that if each explains the thing they are good at, they will also get a good explanation of the thing they want, on the same platform. A similar trick is being explored for networks of organ donations.

Also, was there any consideration given to the simple mechanism of paying people for quality explanations? I expect a reasonable core of value could be had for low cost.

Comment by Riothamus on OpenAI makes humanity less safe · 2017-04-07T20:40:46.603Z · LW · GW

None of these are hypotheticals, you realize. The prior has been established through a long and brutal process of trial and error.

Any given popular military authority can be read, but if you'd like a specialist in defense, try Vauban. Since we are talking about AI, the most relevant (and quantitative) information is found in the work done on nuclear conflict; von Neumann did quite a bit of work aside from the bomb, including coining the phrase Mutually Assured Destruction. Also of note would be Herman Kahn.

Comment by Riothamus on OpenAI makes humanity less safe · 2017-04-05T22:01:30.990Z · LW · GW

I disagree, for two reasons.

  1. AI in conflict is still only an optimization process; it remains constrained by the physical realities of the problem.

  2. Defense is a fundamentally harder problem than offense.

The simple illustration is geometry: defending a territory requires coverage of the full hemisphere of approach (360 degrees of azimuth by 90 degrees of elevation), whereas the attacker gets to choose a single vector.

This drives a scenario where the security dilemma prohibits non-deployment of military AI, and the fundamental difficulty of defense means the AIs will privilege offensive solutions to security problems. The customary response is to develop resilient offensive ability, like second-strike capability...which leaves us with a huge surplus of distributed offensive power.

My confidence is low that catastrophic conflict can be averted in such a case.

Comment by Riothamus on OpenAI makes humanity less safe · 2017-04-04T19:36:35.700Z · LW · GW

I am curious about the frequency with which the second and fourth points get brought up as advantages. Historically, multipolar conflicts have been the most destructive. Forestalling an arms race by giving away technology also sets that technology as the mandatory minimum.

As a result, every country that has a computer science department in their universities is now a potential belligerent, and violent conflict without powerful AI has been effectively ruled out.

Comment by Riothamus on April 2017 Media Thread · 2017-04-03T22:35:07.965Z · LW · GW

I have high hopes that the ongoing catastrophe of this system will discredit the entire design philosophy of the project, and the structure of priorities that governed it. I want it to be a meta-catastrophe, in other words.

The site looks very good. How do you find the rest of it?

Comment by Riothamus on Jocko Podcast · 2016-09-15T15:31:45.231Z · LW · GW

Here is a method I use to good effect:

1) Take a detailed look at the pros and cons of what you want to change. This is sometimes sufficient by itself - more than once I have realized I simply get nothing out of what I'm doing, and the desire goes away by itself.

2) Find a substitution for those pros.

Alternatively, think about an example of when you decided to do something and then actually did it, and try to port the methods over. Personal example: I recently had a low-grade freakout over deciding to start a particular paperwork process that is famously slow and awful, and brings up many deeply negative feelings for me. Then, while cleaning my dutch oven, I reflected that getting its warranty replacement had actually taken about three months and several phone calls, which was frustrating but perfectly manageable. This gives me confidence that monitoring a slow administrative process is achievable, and I am more likely to complete it now.

Comment by Riothamus on [Link] How the Simulation Argument Dampens Future Fanaticism · 2016-09-12T18:25:27.184Z · LW · GW

On the grounds that those ethical frameworks rested on highly inflexible definitions of God, I am skeptical of their applicability. Moreover, why would we look at a different question only to redefine it into the first question all over again?

Comment by Riothamus on The progressive case for replacing the welfare state with basic income · 2016-09-12T18:17:58.467Z · LW · GW

I think the basic income is an interesting proposal for a difficult problem, but I downvoted this post.

  1. This is naked political advocacy. Moreover, the comment is hyperbole and speculation. A better way to address this subject would be to try and tackle it from an EA perspective - how efficient is giving cash compared to giving services? How close could we come if we wanted to try it as charity?

  2. The article is garbage. Techcrunch is not a good source for anything, even entertainment in my opinion. The article is also hyperbolic and speculative, while being littered with Straw Man, Ad Hominem, and The Worst Argument In the World. If you are interested in the topic, a much better place to go look would be the sidebar of the subreddit dedicated to basic income.

Bad arguments for a bad purpose with no data don't make for quality discussion.

Comment by Riothamus on Open thread, Jul. 25 - Jul. 31, 2016 · 2016-08-30T15:08:41.093Z · LW · GW

If that is supposed to mean that every ontology comes with its own isolated, tailor-made criteria, and that there are no others

I mean to say we are not ontologically motivated. The examples OP gave aren't ontological questions, only questions with ontological implications, which makes the ontology descriptive rather than prescriptive. That the implications carry forward only makes the description consistent.

In the scholastic case, my sense of the process of moving beyond Aristotle is that it relied on observations that disagreed with Aristotle but weren't motivated by testing Aristotle. Architecture and siege engines settled the question of falling objects, for example.

I agree with your points. I am now experiencing some disquiet about how slippery the notion of 'best' is. I wonder how one would distinguish whether it was undefinable or not.

Comment by Riothamus on Open thread, Jul. 25 - Jul. 31, 2016 · 2016-08-29T14:29:02.361Z · LW · GW

This sounds strongly like we have no grounds for considering ontology at all when determining what the best possible explanation is.

  1. We cannot qualitatively distinguish between ontologies, except through the other qualities we were already examining.
  2. We don't have a way of searching for new ontologies.

So it looks like all we have done is go from the best possible explanation to the best available explanation, where some superior explanation occupies a space of almost zero in our probability distribution.

Comment by Riothamus on Open thread, Jul. 25 - Jul. 31, 2016 · 2016-08-25T14:37:03.870Z · LW · GW

Echo chamber implies getting the same information back.

It would be more accurate to say we will inevitably reach a local maximum. Awareness of the ontological implications should be a useful tool in helping us recognize when we are there and which way to go next.

Without pursuing the analysis to its maximal conclusions, how can we distinguish the merits of different ontologies?

Comment by Riothamus on Superintelligence via whole brain emulation · 2016-08-19T19:46:33.497Z · LW · GW

If the artificial intelligence from emulation is accomplished through tweaking an emulation and/or piling on computational resources, why couldn't it be accomplished before we start emulating humans?

Other primates, for example. Particularly given destructive-read scanning and the ethics of algorithmic tweaks, animal testing will surely precede human testing. To the extent a human brain is just a primate brain with more computing power, another primate with better memory and clock speed should serve almost as effectively.

What about other mammals with culture and communication, like whales or dolphins?

Something not a mammal at all, like Great Tits?

Comment by Riothamus on Open Thread, Aug. 15. - Aug 21. 2016 · 2016-08-19T19:21:01.663Z · LW · GW

Is anyone in a position to offer some criticism (or endorsement) of the work produced at Gerwin Schalk's lab?

http://www.schalklab.org/

I attended a talk given by Dr. Schalk in April 2015, where he described a new method of imaging the brain that appeared to offer better resolution than fMRI (the image in the talk was a more precise map of motor control of the arm, showing the path of neural activity over time). I was reminded of it because Dr. Schalk spent quite a bit of time emphasizing doing the probability correctly and optimizing the code, which seemed relevant when the recent criticism of fMRI software was published.

Comment by Riothamus on Advice to new Doctors starting practice · 2016-08-19T18:42:42.347Z · LW · GW

This is enough of a problem for small medical practices in the US that it outweighs a good bedside manner and confidence in the doctor's medical ability.

I am confident that this has a large effect on the success of an individual practice; it may fall under the general heading of business advice for the individual practitioner. Even for a single-doctor office, a good secretary and record system will be key to success.

This information comes chiefly from experience of and interviews with specialists (dermatology and gynaecology) in the US.

Comment by Riothamus on Advice to new Doctors starting practice · 2016-08-18T20:43:34.057Z · LW · GW

I know this is banal, but ensure excellent administration.

Medical expertise is only relevant once you see the patient. Your ability to judge the evidence requires getting access to it; this means you need to be able to correctly send requests, get the data back, and keep all this attached to the correct patient.

Scheduling, filing and communication. Lacking these, medical expertise is meaningless. So get the best damn admin and IT you can possibly afford.

Comment by Riothamus on Open thread, Jul. 25 - Jul. 31, 2016 · 2016-08-18T13:55:22.090Z · LW · GW

Let me try to restate, to be sure I have understood correctly:

We cannot stop once we have exhausted the evidence because explanations of equal predictive power have different ontological implications, and these implications must be accounted for in determining the best explanation. Further, we don't have a way to exclude other ontological implications we have not considered.

Question: why don't the ontological implications of our method of analysis constrain us to observing explanations with similar ontological implications?

Comment by Riothamus on Superintelligence and physical law · 2016-08-09T19:19:02.931Z · LW · GW

So am I correct in inferring that this program looks for any mathematical correlations in the data, and returns the simplest and most consistent ones?
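
If so, here is a toy sketch of my mental model - certainly not the program's actual algorithm, and every name below is hypothetical: enumerate candidate formulas, keep the ones consistent with the data, and return the simplest.

```python
# Toy sketch of a "simplest consistent formula" search; real systems
# (symbolic regression) search a vastly larger expression space.

CANDIDATES = [
    (1, "x",       lambda x: x),
    (2, "x^2",     lambda x: x * x),
    (2, "2x",      lambda x: 2 * x),
    (3, "x^2 + x", lambda x: x * x + x),
]  # (complexity, name, function)

def best_explanation(points, tol=1e-9):
    """Return the simplest candidate whose predictions match every data point."""
    consistent = [
        (size, name) for size, name, f in CANDIDATES
        if all(abs(f(x) - y) < tol for x, y in points)
    ]
    return min(consistent, default=None)

data = [(1.0, 1.0), (2.0, 4.0), (3.0, 9.0)]
print(best_explanation(data))  # -> (2, 'x^2')
```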

Comment by Riothamus on Open thread, Jul. 25 - Jul. 31, 2016 · 2016-08-09T18:47:54.251Z · LW · GW

This is a useful bit of clarification, and timely.

Would that change if there were a mechanism for describing the criteria for the best explanation?

For example, could we derive from a body of evidence its minimum entropy, and thereby show that even if there are other explanations, they are at best equivalent?
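
One way to make this precise, offered as a sketch rather than a settled claim: treat each candidate explanation as a code for the evidence. Shannon's source coding theorem then puts a floor under every explanation's expected description length:

$$H(p) = -\sum_{x} p(x)\,\log_2 p(x), \qquad \mathbb{E}_{p}\big[\ell_C(X)\big] \ge H(p)$$

for every uniquely decodable code $C$. So an explanation whose expected description length attains $H(p)$ cannot be strictly beaten; any rival is at best equivalent.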

Comment by Riothamus on Open thread, Jul. 25 - Jul. 31, 2016 · 2016-07-28T17:24:57.652Z · LW · GW

There is a LessWrong wiki entry for just this problem: https://wiki.lesswrong.com/wiki/Simulation_argument

The rabbit hole problem is solved by recognizing when we have made the best determination we can with current information. Once that is done, we stop.

Understanding that beliefs are our knowledge of reality rather than reality itself has some very interesting effects. The first is that our beliefs do not have to take the form of singular conclusions, such as we are or are not in a simulation; instead our belief can take the form of a system of conclusions, with confidence distributed among them. The second is the notion of paying rent, which is super handy for setting priorities. In summary, if it does not yield a new expectation, it probably does not merit consideration.

If this does not seem sufficiently coherent, consider that you are allowed to be inconsistent, and also that you are engaging with rationality early in its development.

Comment by Riothamus on Open thread, Jul. 25 - Jul. 31, 2016 · 2016-07-27T14:40:58.528Z · LW · GW

Effectiveness is desirable; effectiveness is measured by results; consistency and verifiability are how we measure what is real.

As a corollary, things that have no evidence do not merit belief. We needn't presume that we are not in a simulation; we can evaluate the evidence for it.

The central perspective shift is recognizing that beliefs are not assertions about reality, but assertions about our knowledge of reality. This is what is meant by the map and the territory.

Comment by Riothamus on Open thread, Jul. 18 - Jul. 24, 2016 · 2016-07-21T20:44:23.752Z · LW · GW

Is there a procedure in Bayesian inference to determine how much new information in the future invalidates your model?

Say I have some kind of time-series data, and I make an inference from it up to the current time. If the data is costly to get in the future, would I have a way of determining when the cost of increasing error exceeds the cost of getting the new data and updating my inference?
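
To make the question concrete, here is a minimal sketch of the kind of decision rule I have in mind, assuming a toy one-dimensional normal-normal drift model; the model, numbers, and function names are hypothetical illustrations, not an established procedure.

```python
# Toy sketch: a drifting quantity tracked with a normal-normal conjugate
# filter. Under squared-error loss, predictive variance is the expected
# loss, so the "value" of the next observation is the variance reduction
# it buys.

def updated_variance(prior_var: float, obs_var: float) -> float:
    """Posterior variance after one observation (conjugate normal update)."""
    return 1.0 / (1.0 / prior_var + 1.0 / obs_var)

def value_of_next_observation(var: float, obs_var: float, drift_var: float) -> float:
    """Expected reduction in predictive variance from buying one more point."""
    grown = var + drift_var  # variance if we skip the observation
    return grown - updated_variance(grown, obs_var)

# Decision rule: buy data only while its expected value exceeds its cost.
var, obs_var, drift_var, cost = 1.0, 0.5, 0.1, 0.2
value = value_of_next_observation(var, obs_var, drift_var)
print(f"value of next point: {value:.3f} vs cost: {cost}")
print("buy the data" if value > cost else "coast on the old model")
```

I believe the general heading for this is value of information: as the model coasts, drift accumulates, and the value of the next point grows until it crosses the cost threshold.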

Comment by Riothamus on Unofficial Canon on Applied Rationality · 2016-07-21T13:10:02.944Z · LW · GW

That doesn't mean that it's inherently impossible to transmit knowledge via writing, but it's hard.

Agreed. The more I consider the problem, the higher my confidence that investing enough energy in the process is a bad investment for them.

Another romantic solution waiting for the appropriate problem. I should look into detaching from the idea.

Comment by Riothamus on Unofficial Canon on Applied Rationality · 2016-07-20T22:04:47.446Z · LW · GW

I should amend my assumption: uncontrolled transmission is inevitable. The strategy so far has been to use the workshops, and otherwise decline to distribute the knowledge.

The historical example should be considered in light of what the goals are. The examples you give are strategies employed by organizations trying to deny the knowledge to everyone outside the initiated. Enforcing secrecy and spreading bad information are viable for that goal. CFAR is not trying to deny the knowledge, only to maximize its fidelity. What strategy can they use to maximize fidelity in cases where they did not choose to transmit it (like this one)?

Suppose we model everyone who practices state-of-the-art rationality as an initiate, and everyone who wants to read about CFAR's teachings as a suppliant. If the knowledge is being transmitted outside of the workshops, how do we persuade the suppliants to self-initiate? By imposing some sort of barrier, so that accessing the knowledge requires effort - I suggest dividing the knowledge up, thus modelling the mysteries. We would want the divided content to be such that people who won't practice it disengage rather than consume it all passively.

If CFAR were to provide the content, even in this format, I expect people's incentive to produce posts like the above would be reduced, and likewise their incentive to read such collections.

In retrospect, I should have made it explicit I was assuming everyone involved was a (potential) insider at the beginning.

Comment by Riothamus on Unofficial Canon on Applied Rationality · 2016-07-20T19:45:31.375Z · LW · GW

You have just described the same thing Duncan cited as a concern, only substituted a different motive; I am having trouble coming to grips with the purpose of the example as a result.

I propose that the method of organizing knowledge be considered. The goal is not to minimize the information, but to minimize the errors in its transmission. I assume transmission is inevitable; given that, segregating the information into lower-error chunks seems like a viable strategy.

Comment by Riothamus on Unofficial Canon on Applied Rationality · 2016-07-19T20:18:53.984Z · LW · GW

We aren't at a point yet where we distinguish "basic" from "advanced" practices.

This is a good point; I have assumed that there would eventually be a hierarchy of sorts established. I was allowing for instruction being developed (whether by CFAR or someone else) even down below the levels that are usually assumed in-community. When Duncan says,

Picture throwing out a complete text version of our current best practices, exposing it to the forces of memetic selection and evolution.
I interpret this to mean even by people who have no experience of thinking-about-thinking at all. As you aptly point out, the fundamentals are very hard - there may be demand for just such materials from future advanced rationalists for exactly that reason. So what I suggest is that the components of the instruction be segregated while retaining clear structure, and in this way minimize the skimming and corruption problems.

That being said, I fully endorse the priority choices CFAR has made thus far, and I do not share the (apparent) intensity of Duncan's concern. I therefore understand if even evaluating whether this is a problem is a low priority.

Comment by Riothamus on Unofficial Canon on Applied Rationality · 2016-07-19T19:58:06.824Z · LW · GW

Sigh. I continue to forget how much of a problem that is. It is meant in the historical, rather than colloquial, meaning of the word. Since it apparently does not go without saying, the easily misunderstood term should never be used in official communication of any sort.

I apologize for the lack of clarity.

Comment by Riothamus on Unofficial Canon on Applied Rationality · 2016-07-19T15:23:02.617Z · LW · GW

I wonder if it would be possible to screen out some of the misinterpretation and recombination hazards by stealing a page from mystery religions.

Adherents were initiated by stages into the cult; mastery of the current level of mysteries was expected before gaining access to the next.

Rather than develop a specific canon or doctrine, CFAR could build into everything instructional they produce for the public the knowledge that new practices supersede the old, that basic practices must come before advanced practices, and precisely which practices should have been tackled previously and which will be tackled next.

If this is pervasive in CFAR literature for the public, I would expect the probability of erroneous practice to go down.

Comment by Riothamus on The map of cognitive biases, errors and obstacles affecting judgment and management of global catastrophic risks · 2016-07-17T23:04:25.929Z · LW · GW

Thank you for doing this work. I think that a graphical representation of the scope of the challenge is an excellent idea, and merits continuous effort in the name of making communication and retention easier.

That being said, I have questions:

1) What is the source of that text document? The citations consist almost exclusively of works concerning nanomachines. None of the citations concern biases or reference people like Bostrom or Kahneman, despite the author clearly being familiar with their work (at least second-hand).

2) Am I correct to infer that the divisions along the X and Y axes are your own? Could you comment on what motivates them?

Also, I have suggestions:

Without having read the text document first, the numbers are confusing and distract from navigating the image. What do you think of: A, removing the numbers entirely; or B, renumbering the text file and the image so the image provides the organization?

What do you think of a way to distinguish between biases that operate on individuals versus on a group? For example, #51 at (Underestimation, Heuristics) reads "An overly simplistic explanation is the most prominent," which for an individual could be considered a special case of the Availability Heuristic. A similar problem appears in arguing from fictional evidence, or alternatively as a form of information hazard. If the prominence of the explanation is the problem, that is a group failing rather than an individual failing.

I also think this warrants a pass for spelling and grammar, but that is merely a question of housekeeping. Would I be right to guess that English is a second language?

Good work!

Comment by Riothamus on Zombies Redacted · 2016-07-12T15:40:11.315Z · LW · GW

This gives us these options under the Chalmers scheme:

Same input -> same output & same qualia

Same input -> same output & different qualia

Same input -> same output & no qualia

I infer the ineffable green-ness of green is not even wrong. We have no grounds for thinking there is such a thing.

Comment by Riothamus on Zombies Redacted · 2016-07-12T15:23:59.494Z · LW · GW

They are meant to be arbitrarily accurate, and so we would expect them to include qualia.

However, in the Chalmers vein consciousness is non-physical, which suggests it cannot be simulated through physical means. This yields a scenario very similar to the identical-yet-not-conscious p-zombie.

Comment by Riothamus on Zombies Redacted · 2016-07-08T20:21:33.985Z · LW · GW

What do people in Chalmers's vein of belief think of the simulation argument?

If a person is plugged into an otherwise simulated reality, do all the simulations count as p-zombies, since they match all the input-output and lack-of-qualia criteria?

Comment by Riothamus on Zombies Redacted · 2016-07-08T18:58:58.486Z · LW · GW

I do not think we need to go as far as i-zombies. We can take two people, show them the same object under arbitrarily close conditions, and get the answer of 'green' out of both of them while one does not experience green on account of being color-blind.

Comment by Riothamus on Market Failure: Sugar-free Tums · 2016-07-01T20:27:19.272Z · LW · GW

This looks like an information problem.

It is useful to remember that the market is an abstraction of aggregated transactions. The basic supply and demand graphs they teach us in early econ rely on two assumptions: rational agents, and perfect information.

I expect the imperfect-information problem dominates in cases of new products, because producers have a hard time estimating return, and customers don't even know the product exists. VCs are largely about developing a marginal information advantage in this space. Interestingly, all of the VCs I have personally interacted with (sample size: 5) say they pick teams over ideas.

When the people at Thinx were asked why the dominant companies hadn't done it already, what was their answer? If they couldn't answer, that would indicate to the VC that the team hadn't gathered enough information to justify their claims (and thus were unprepared). I would expect the answer is some combination of not wanting to compete with their own products, and demand not being big enough to be profitable with their scaled manufacturing methods.

On the subject of Tums: what is the socially optimal point for sugar-free Tums? How do we know the socially optimal outcome isn't regular Tums and mouthwash?

Comment by Riothamus on Powering Through vs Working Around · 2016-07-01T18:03:04.422Z · LW · GW

It is worth keeping in mind that how to defeat X is not well-defined. The usual method for circumventing the planning fallacy is to use whatever the final cost was last time. What about cases where there isn't a body of evidence for the costs? Rationality is just such a case; while we have many well-defined biases, we have few methods for overcoming them.

As a consequence, I determine whether to workaround or defeat X primarily based on how frequently I expect it to come up. The cost of X I find less relevant for two reasons: one, I have a preference against being mugged by Pascal's Wager into spending all my effort on low-likelihood events; two, high cost cases often have a well developed System 2 methodology to resolve them.

A benefit is that frequent cases lend themselves more easily to spaced repetition and habit formation. In this way, I hope to develop a body of past cases to refer to when trying to plan how long defeating a future X will take.

Examples of frequent cases: exercise, amazon purchases, reading articles. Examples of rare cases: job benefits, housing costs, vehicle purchases.

Comment by Riothamus on Meme: Valuable Vulnerability · 2016-07-01T17:27:04.891Z · LW · GW

Military bonding is an interesting comparison. Training in a professional military relies on shared suffering to build a strong bond between the members of a unit. If we model combat as an environment so extreme that vulnerability is inescapable, the function of vulnerability as a bonding trait makes sense.

It also occurs to me that we almost universally have more control over how we signal emotions than how we feel them. The norm would therefore be that we feel more emotions than we show; by being vulnerable and signaling our emotions, other people can empathize instinctively and may feel greater security as a result.

Comment by Riothamus on Open Thread, January 11-17, 2016 · 2016-04-22T16:03:15.251Z · LW · GW

What are your criteria for good foreign policy choices then? You have conveyed that you want Iraq to be occupied, but Libya to be neglected, so non-intervention clearly is not the standard.

My current best guess is 'whatever promotes maximum stability'. Also, how do you expect these decisions are currently made?

Comment by Riothamus on Is Spirituality Irrational? · 2016-04-21T21:15:22.737Z · LW · GW

I would also have an easier time with ASCII, but that's because I (and presumably you also) have been trained in how to produce instructions for machines. This is a negligible chunk of humanity, so I thought it was equally discountable.

I suppose the spiritual analogy would be an ordained priest praying on behalf of another person!

Comment by Riothamus on Open Thread, January 11-17, 2016 · 2016-04-21T20:51:36.827Z · LW · GW

As compared to what alternative? There is no success condition for large scale ground operations in the region. If the criticism of the current administration is "failed to correct the lack of strategic acumen in the Pentagon" then I would agree, but I wonder what basis we have for expecting an improvement.

It seems to me we can identify problems, but have no available solutions to implement.

Comment by Riothamus on Is Spirituality Irrational? · 2016-04-21T16:26:55.784Z · LW · GW

A correct analogy between records and books would be the phonograph and the text of the book written in ASCII hexadecimal. Both are designed to be interpreted by a machine for presentation to humans.

Comment by Riothamus on "3 Reasons It’s Irrational to Demand ‘Rationalism’ in Social Justice Activism" · 2016-04-21T16:10:06.493Z · LW · GW

Until someone demonstrates the utility of engaging in criticism of particular political groups, I will continue to treat it as noise.

We already know outgroups don't use the word rationality the way we do. We also know that assuming others share our information and frame of reference is an error. There is no new information here.

Comment by Riothamus on Open Thread, January 11-17, 2016 · 2016-01-13T19:25:47.468Z · LW · GW

The thing to consider about the economy is that the president is not only not responsible, but mostly irrelevant. An easy way to see this is the 2008 stimulus packages. Critics of the president frequently share the graph of national debt, which grows sharply immediately after he took office - ignoring that the package was demanded by Congress and supported by his predecessor, who wore a different color shirt.

A key in evaluating a president is the difference between what he did, what he could have done, and what people think about him. Consider that the parties were polarizing before he took office.

In terms of specifics, I am disappointed that he continued most of the civil rights abuses of the previous administration with regard to due process. I also oppose the employment of the drone warfare doctrine, which is minimally effective at achieving strategic goals and highly effective at generating ill will in the region.

By contrast, I am greatly pleased at the administration's commitment to diplomacy and the improvement of our reputation among our allies. I am pleased that major combat operations were ended in two theaters, and that no new ones were launched. I applaud the Iranian nuclear agreement.

Comment by Riothamus on [Link] Introducing OpenAI · 2015-12-12T14:09:22.937Z · LW · GW

If I am reading that right, they plan to oppose Skynet by giving everyone a Jarvis.

Does anyone know their technical people, and whether they can be profitably exposed to the latest work on safety?

Comment by Riothamus on [Link] A rational response to the Paris attacks and ISIS · 2015-11-30T01:47:51.939Z · LW · GW

According to historical analysis of every resolved insurgency since 1944, conducted by RAND, the best predictor of success in defeating one is achieving conventional military superiority. Details here: http://www.rand.org/pubs/research_reports/RR291z1.html

Comment by Riothamus on [Link] arguman.org, an argument analysis platform · 2015-10-19T17:16:28.666Z · LW · GW

This looks to be a very good example of the dangers of a little bit of rationality, or a little bit of intelligence. The layout encourages deploying Fully General Counter Arguments. There appears to be no mechanism to ensure the information on which the arguments are based is either good, or agreed upon.

Comment by Riothamus on Stupid questions thread, October 2015 · 2015-10-16T20:20:25.038Z · LW · GW

Ah - I appear to have misread your comment, then.

Would I be correct in limiting my reading of your remarks to rebutting the generalization you quoted?

Comment by Riothamus on Stupid questions thread, October 2015 · 2015-10-16T16:13:41.732Z · LW · GW

I find it most relevant to planning and prediction. It helps greatly with realizing that I am not an exception, and so I should take the data seriously.

In terms of things that changed when my beliefs did, I submit the criminal justice system as an example. I now firmly think about crime in terms of individuals being components of a social system, and I am exclusively interested in future prevention of crime. I am indifferent to moral punishment for it.

Comment by Riothamus on Stupid questions thread, October 2015 · 2015-10-16T15:26:58.867Z · LW · GW

You have oversimplified to uselessness.

A common counter-example is people who do not want this job, for example because it pays less than their current lifestyle costs to support. That isn't laziness; it is the smart economic decision.

You are also assuming that the trouble of traveling to and from an interview is where the stress and effort lies. I would only credit that as the case if they had a high-demand skill set and were traveling across the country for the in-person interview, which is highly unlikely to apply to someone drawing unemployment benefits. The stress and effort stems from preparation before and performance during an interview, neither of which apply if the goal is to fail at it.

Comment by Riothamus on Rationality Reading Group: Part K: Letting Go · 2015-10-12T21:46:04.383Z · LW · GW

The most interesting segment of this section was The Ritual. I find the problem of how to go about making an effective practice very interesting. I would also like to draw attention to this section:

"I concede," Jeffreyssai said. Coming from his lips, the phrase was spoken with a commanding finality. There is no need to argue with me any further: You have won.

I experienced a phenomenon recently that tends to act as a brake on letting go: the commentary following concession. I was having a conversation with someone, and expressed an opinion. They countered, and after a few moments' consideration I saw they had completely invalidated my premise. When I said so, the conversation came to a halt as they asked 'Did I just win an argument?' When I said 'Yup,' they said 'Write that down!'

This speaks to the way we value how we argue. Refusing to concede is a way to demonstrate commitment and strength. I have on more than one occasion experienced a modicum of ridicule for agreeing too quickly, from the person I was agreeing with. When I was younger, I even did this myself - yet it is insane as I reflect on it. I felt argument was a competition, and winning too easily was like a sporting event where one team played abysmally; no entertainment value. I reflect that I should dedicate more effort to arguing for the sake of exploration.

The person with whom I had the exchange has no knowledge of or interest in rationality. The experience happening so soon on the heels of reading served to illustrate that while developing the field of rationality may rely on shared complex ideas and values, developing the practice of rationality may not.

Comment by Riothamus on [Link] Tetlock on the power of precise predictions to counter political polarization · 2015-10-08T13:19:15.731Z · LW · GW

How does this idea square with elections in the United States? Consider pollsters: their job is to make specific predictions by understood methods, using data gathered by equally well-understood methods.

Despite what was either fraud or tremendous incompetence in the last Presidential election cycle on the part of ideological pollsters, and the high degree of public attention paid to it, polarization has not meaningfully decreased in any way I can observe.

I therefore expect that making the candidates generate specific predictions would have little overall effect on polarization.