I had not imagined a strict barter system or scaling of paid content; the objective in both cases is only to make up the gap between the value content producers want and the value they expect to receive during the first wave.
The point of diminishing returns would be hard to judge for paid content, but perhaps the two strategies could work together: survey prospective content producers for the content they want to see, and then pay for the most popular subjects to draw the rest. Once you have enough content established to draw the first wave of voluntary content producers, everything else can build off of that with minimal or no further investment.
That being said, it would probably be a good idea to keep surveying and perhaps paying for content on a case-by-case basis, say to alleviate a dry spell of contributions or if there is some particular thing that is in high demand but no one is volunteering to produce it.
What about a contest with a cash award of some kind? This could drive a lot of content for a fixed upfront investment, and then you would also have the ability to select among the entries for the appropriate style and nuance, which reduces the risk of getting unsatisfactory work.
I see finding high-quality content producers was a problem; you reference math explanations specifically.
I notice that people are usually good at providing thorough and comprehensible explanations only in their chosen domains. That being said, people are interested in subjects beyond those they have mastered.
I wonder if it is possible to approach quality content producers with the question of what content they would like to passively consume, and then try to recruit networks of content producers at once. For example: find a game theory explainer who wants to read about complex analysis; a complex analysis explainer who wants to read about music theory; a music theory explainer who wants to read about game theory.
Then you can approach all three at once with the premise that if they explain the thing they are good at, they will also be able to read the thing they want to be explained well to them, on the same platform. There's a similar trick being explored for networks of organ donations.
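To make the matching step concrete, here is a minimal sketch under my own framing (all names and topics are hypothetical): record what each producer can explain and what they want to read, then look for cycles, so that everyone in a cycle contributes their specialty and gets their wished-for explanation from someone else in the same group.

```python
# Sketch of the cycle-matching idea; any cycle in the "who can satisfy whom" graph
# is a self-contained group that can be pitched all at once, much like the chains
# used in organ-exchange programs.
producers = {
    "alice": {"explains": "game theory",      "wants": "complex analysis"},
    "bob":   {"explains": "complex analysis", "wants": "music theory"},
    "carol": {"explains": "music theory",     "wants": "game theory"},
    "dave":  {"explains": "statistics",       "wants": "category theory"},
}

def find_cycles(producers):
    """Return groups of producers whose wants can all be met within the group."""
    explainer_of = {p["explains"]: name for name, p in producers.items()}
    cycles, seen = [], set()
    for start in producers:
        if start in seen:
            continue
        path, current = [], start
        while current is not None and current not in path:
            path.append(current)
            current = explainer_of.get(producers[current]["wants"])
        if current == start:       # the walk closed back on itself: a pitchable group
            cycles.append(path)
            seen.update(path)
    return cycles

print(find_cycles(producers))  # [['alice', 'bob', 'carol']] -- dave has no match yet
```

Anyone left outside a cycle (like the fourth producer above) would be a candidate for the survey-and-pay approach discussed above.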
Also, was there any consideration given to the simple mechanism of paying people for quality explanations? I expect a reasonable core of value could be had for low cost.
None of these are hypotheticals, you realize. The prior has been established through a long and brutal process of trial and error.
Any given popular military authority can be read, but if you'd like a specialist in defense try Vauban. Since we are talking about AI, the most relevant (and quantitative) information is found in the work done on nuclear conflict; Von Neumann did quite a bit of work aside from the bomb, including coining the phrase Mutually Assured Destruction. Also of note would be Herman Kahn.
I disagree, for two reasons.
AI in conflict is still only an optimization process; it remains constrained by the physical realities of the problem.
Defense is a fundamentally harder problem than offense.
The simple illustration is geometric: defending a territory requires covering the full 360 degrees of azimuth and 90 degrees of elevation, whereas the attacker gets to choose their vector.
This drives a scenario where the security trap prohibits non-deployment of military AI, and the fundamental problem of defense means the AIs will privilege offensive solutions to security problems. The customary response is to develop resilient offensive ability, like second-strike capability... which leaves us with a huge surplus of distributed offensive power.
My confidence is low that catastrophic conflict can be averted in such a case.
I am curious about the frequency with which the second and fourth points get brought up as advantages. In the historical case, multipolar conflicts are the most destructive. Forestalling an arms race by giving away technology also sets that technology as the mandatory minimum.
As a result, every country that has a computer science department in their universities is now a potential belligerent, and violent conflict without powerful AI has been effectively ruled out.
I have high hopes that the ongoing catastrophe of this system will discredit the entire design philosophy of the project, and the structure of priorities that governed it. I want it to be a meta-catastrophe, in other words.
The site looks very good. How do you find the rest of it?
Here is a method I use to good effect:
1) Take a detailed look at the pros and cons of what you want to change. This is sometimes sufficient by itself - more than once I have realized I simply get nothing out of what I'm doing, and the desire goes away by itself.
2) Find a substitution for those pros.
Alternatively, think about an example of when you decided to do something and then actually did it, and try to port the methods over. Personal example: I recently had a low-grade freakout over deciding to do a particular paperwork process that is famously slow and awful, and brings up many deeply negative feelings for me. Then I was cleaning my Dutch oven, and reflected on how getting a warranty replacement for it actually took about three months and several phone calls, which is frustrating but perfectly manageable. This gives me confidence that monitoring a slow administrative process is achievable, and I am more likely to complete it now.
On the grounds that those ethical frameworks rested on highly inflexible definitions of God, I am skeptical of their applicability. Moreover, why would we look at a different question only to redefine it into the first question all over again?
I think the basic income is an interesting proposal for a difficult problem, but I downvoted this post.
This is naked political advocacy. Moreover, the comment is hyperbole and speculation. A better way to address this subject would be to try and tackle it from an EA perspective - how efficient is giving cash compared to giving services? How close could we come if we wanted to try it as charity?
The article is garbage. Techcrunch is not a good source for anything, even entertainment in my opinion. The article is also hyperbolic and speculative, while being littered with Straw Man, Ad Hominem, and The Worst Argument In the World. If you are interested in the topic, a much better place to go look would be the sidebar of the subreddit dedicated to basic income.
Bad arguments for a bad purpose with no data don't make for quality discussion.
> If that is supposed to mean that every ontology comes with its own isolated, tailor-made criteria, and that there are no others
I mean to say we are not ontologically motivated. The examples OP gave aren't ontological questions, only questions with ontological implications, which makes the ontology descriptive rather than prescriptive. That the implications carry forward only makes the description consistent.
In the scholastic case, my sense of the process of moving beyond Aristotle is that it relied on things happening that disagreed with Aristotle, which weren't motivated by testing Aristotle. Architecture and siege engines settled the question of falling objects, for example.
I agree with your points. I am now experiencing some disquiet about how slippery the notion of 'best' is. I wonder how one would distinguish whether it was undefinable or not.
This sounds strongly like we have no grounds for considering ontology at all when determining what the best possible explanation is.
- We cannot qualitatively distinguish between ontologies, except through the other qualities we were already examining.
- We don't have a way of searching for new ontologies.
So it looks like all we have done is go from the best possible explanation to the best available explanation, where some superior explanation occupies an almost-zero region of our probability distribution.
Echo chamber implies getting the same information back.
It would be more accurate to say we will inevitably reach a local maximum. Awareness of the ontological implications should be a useful tool in helping us recognize when we are there and which way to go next.
Without pursuing the analysis to its maximal conclusions, how can we distinguish the merits of different ontologies?
If artificial intelligence via emulation is achieved by tweaking an emulation and/or piling on computational resources, why couldn't that be accomplished before we start emulating humans?
Other primates, for example. Particularly for the destructive-read and ethics-of-algorithmic-tweaks concerns, animal testing will surely precede human testing. To the extent a human brain is just a primate brain with more computing power, another primate with better memory and clock speed should serve almost as effectively.
What about other mammals with culture and communication, like whales or dolphins?
Something not a mammal at all, like Great Tits?
Is anyone in a position to offer some criticism (or endorsement) of the work produced at Gerwin Schalk's lab?
I attended a talk given by Dr. Schalk in April 2015, where he described a new method of imaging the brain, which appeared to be a higher-resolution fMRI (the image in the talk was a more precise picture of motor control of the arm, showing the path of neural activity over time). I was reminded of it because Dr. Schalk spent quite a bit of time emphasizing getting the probability calculations right and optimizing the code, which seemed relevant when the recent criticism of fMRI software was published.
This is enough of a problem for small medical practices in the US that it outweighs a good bedside manner and confidence in the doctor's medical ability.
I am confident that this has a large effect on the success of an individual practice; it may fall under the general heading of business advice for the individual practitioner. Even for a single-doctor office, a good secretary and record system will be key to success.
This information comes chiefly from experience of and interviews with specialists (dermatology and gynaecology) in the US.
I know this is banal, but ensure excellent administration.
Medical expertise is only relevant once you see the patient. Your ability to judge the evidence requires getting access to it; this means you need to be able to correctly send requests, get the data back, and keep all this attached to the correct patient.
Scheduling, filing and communication. Lacking these, medical expertise is meaningless. So get the best damn admin and IT you can possibly afford.
Let me try to restate, to be sure I have understood correctly:
We cannot stop once we have exhausted the evidence because explanations of equal predictive power have different ontological implications, and these implications must be accounted for in determining the best explanation. Further, we don't have a way to exclude other ontological implications we have not considered.
Question: why don't the ontological implications of our method of analysis constrain us to observing explanations with similar ontological implications?
So am I correct in inferring that this program looks for any mathematical correlations in the data, and returns the simplest and most consistent ones?
This is a useful bit of clarification, and timely.
Would that change if there was a mechanism for describing the criteria for the best explanation?
For example, could we show from a body of evidence the minimum entropy, and therefore that even if there are other explanations, they are at best equivalent?
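A minimal sketch of one way to make that concrete, under my own assumption (not anything established in this thread) that an "explanation" is treated as a predictive distribution over the observed evidence: the empirical entropy of the evidence is a floor on the expected log loss of any explanation, so a rival explanation can at best tie it.

```python
# Sketch only: treat an "explanation" as a probability model over the evidence.
# By Gibbs' inequality, cross-entropy H(P, Q) >= entropy H(P), so no explanation
# can beat the empirical entropy of the data it is explaining.
from collections import Counter
from math import log2

def empirical_entropy(observations):
    """Entropy (in bits) of the empirical distribution of the observations."""
    counts = Counter(observations)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def cross_entropy(observations, model):
    """Average -log2 probability a candidate explanation assigns to the evidence."""
    return -sum(log2(model[x]) for x in observations) / len(observations)

evidence = list("aabbbbcc")                      # toy body of evidence
candidate = {"a": 0.2, "b": 0.6, "c": 0.2}       # a hypothetical rival explanation

print(empirical_entropy(evidence))          # 1.5 bits: the floor
print(cross_entropy(evidence, candidate))   # ~1.53 bits: above the floor, as the model is slightly off
```

Whether that notion of "at best equivalent" actually answers the ontological worry is exactly what I am unsure about.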
There is a LessWrong wiki entry for just this problem: https://wiki.lesswrong.com/wiki/Simulation_argument
The rabbit hole problem is solved by recognizing when we have made the best determination we can with current information. Once that is done, we stop.
Understanding that beliefs are our knowledge of reality rather than reality itself has some very interesting effects. The first is that our beliefs do not have to take the form of singular conclusions, such as we are or are not in a simulation; instead our belief can take the form of a system of conclusions, with confidence distributed among them. The second is the notion of paying rent, which is super handy for setting priorities. In summary, if it does not yield a new expectation, it probably does not merit consideration.
If this does not seem sufficiently coherent, consider that you are allowed to be inconsistent, and also that you are engaging with rationality early in its development.
Effectiveness is desirable; effectiveness is measured by results; consistency and verifiability are how we measure what is real.
As a corollary, things that have no evidence do not merit belief. We needn't presume that we are not in a simulation, we can evaluate the evidence for it.
The central perspective shift is recognizing that beliefs are not assertions about reality, but assertions about our knowledge of reality. This is what is meant by the map and the territory.
Is there a procedure in Bayesian inference to determine how much new information in the future invalidates your model?
Say I have some kind of time-series data, and I make an inference from it up to the current time. If the data is costly to get in the future, would I have a way of determining when the cost of the increasing error exceeds the cost of getting the new data and updating my inference?
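A minimal sketch of the kind of decision rule I have in mind, under the assumption that the quantity drifts like a random walk, so the posterior variance of my current estimate grows while I am not observing (all function names and numbers are hypothetical):

```python
# Sketch only: compare the cost of living with a stale estimate against the
# price of buying a new data point and re-running the inference.
def steps_until_update(post_var, drift_var, error_cost_per_unit_var, data_cost, max_steps=1000):
    """How many steps the current inference stays 'worth keeping'.

    post_var:  posterior variance of the state estimate right now
    drift_var: variance added per time step (random-walk assumption)
    error_cost_per_unit_var: loss incurred per unit of estimate variance
    data_cost: price of one new data point plus updating the inference
    """
    var = post_var
    for step in range(max_steps):
        if error_cost_per_unit_var * var >= data_cost:
            return step          # stale-estimate cost now exceeds the data cost
        var += drift_var         # uncertainty accumulates while we don't observe
    return max_steps             # never worth it within this horizon

# Toy numbers: current variance 1.0, drift of 0.5 variance per step,
# each unit of variance costs 2.0, and a fresh data point costs 10.0.
print(steps_until_update(1.0, 0.5, 2.0, 10.0))  # -> 8 steps before updating pays off
```

I gather the fuller treatment of this falls under value-of-information analysis, but this is the shape of comparison I am after.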
> That doesn't mean that it is inherently impossible to transmit knowledge via writing, but it's hard.
Agreed. The more I consider the problem, the higher my confidence that investing enough energy in the process is a bad investment for them.
Another romantic solution waiting for the appropriate problem. I should look into detaching from the idea.
I should amend my assumption to: uncontrolled transmission is inevitable. The strategy so far has been to use the workshops, and otherwise decline to distribute the knowledge.
The historical example should be considered in light of what the goals are. The examples you give are strategies employed by organizations trying to deny all knowledge outside of the initiated. Enforcing secrecy and spreading bad information are viable for that goal. CFAR is not trying to deny the knowledge, only to maximize its fidelity. What is the strategy they can use to maximize fidelity in cases where they did not choose to transmit it (like this one)?
Suppose we model everyone who practices state-of-the-art rationality as an initiate, and everyone who wants to read about CFAR's teachings as a suppliant. If the knowledge is being transmitted outside of the workshops, how do we persuade the suppliants to self-initiate? Imposing some sort of barrier, so that it requires effort to access the knowledge - I suggest by dividing the knowledge up, thus modelling the mysteries. We would want the divided content to be such that people who won't practice it disengage rather than consume it all passively.
If CFAR were to provide the content, even in this format, I expect people's incentive to produce posts like the above would be reduced, and likewise their incentive to read such collections.
In retrospect, I should have made it explicit I was assuming everyone involved was a (potential) insider at the beginning.
You have just described the same thing Duncan cited as a concern, only substituted a different motive; I am having trouble coming to grips with the purpose of the example as a result.
I propose that the method of organizing knowledge be considered. The goal is not to minimize the information, but to minimize the errors in its transmission. I assume transmission is inevitable; given that, segregating the information into lower-error chunks seems like a viable strategy.
> We aren't at a point yet where we distinguish "basic" from "advanced" practices.
This is a good point; I have assumed that there would eventually be a hierarchy of sorts established. I was allowing for instruction being developed (whether by CFAR or someone else) even down below the levels that are usually assumed in-community. When Duncan says,
> Picture throwing out a complete text version of our current best practices, exposing it to the forces of memetic selection and evolution.
I interpret this to mean even by people who have no experience of thinking-about-thinking at all. As you aptly point out, the fundamentals are very hard - there may be demand for just such materials from future advanced rationalists for exactly that reason. So what I suggest is that the components of the instruction be segregated while retaining clear structure, and in this way minimize the skimming and corruption problems.
That being said, I fully endorse the priority choices CFAR has made thus far, and I do not share the (apparent) intensity of Duncan's concern. I therefore understand if even evaluating whether this is a problem is a low priority.
Sigh. I continue to forget how much of a problem that is. It is meant in the historical, rather than colloquial, meaning of the word. Since it apparently does not go without saying, the easily misunderstood term should never be used in official communication of any sort.
I apologize for the lack of clarity.
I wonder if it would be possible to screen out some of the misinterpretation and recombination hazards by stealing a page from mystery religions.
Adherents were initiated by stages into the cult; mastery of the current level of mysteries was expected before gaining access to the next.
Rather than develop a specific canon or doctrine, CFAR could build into everything instructional they produce for the public the knowledge that new practices supersede the old, that basic practices must come before advanced practices, and precisely which practices should have been tackled previously and will be tackled next.
If this is pervasive in CFAR literature for the public, I would expect the probability of erroneous practice to go down.
Thank you for doing this work. I think that a graphical representation of the scope of the challenge is an excellent idea, and merits continuous effort in the name of making communication and retention easier.
That being said, I have questions:
1) What is the source of that text document? The citations consist almost exclusively of works concerning nanomachines. None of the citations concern biases, and the author does not reference people like Bostrom or Kahneman despite clearly being familiar with their work (at least second hand).
2) Am I correct to infer that the divisions along the X and Y axes are your own? Could you comment on what motivates them?
Also, I have suggestions:
Without having read the text document first, the numbers are confusing, and they distract from navigating the image. What do you think of: A, removing the numbers entirely; B, renumbering the text file and the image so the image provides the organization?
What do you think of a way to distinguish between biases that operate on an individual versus on a group? For example, #51 at (Underestimation, Heuristics) reads "An overly simplistic explanation is the most prominent.", which for an individual could be considered a special case of the Availability Heuristic. An argument against similar problems is found in arguing from fictional evidence, or alternatively it could be framed as a form of information hazard. If the prominence of the explanation is the problem, that is a group failing rather than an individual failing.
I also think this warrants a pass for spelling and grammar, but that is merely a question of housekeeping. Would I be right to guess that English is a second language?
Good work!
This gives us these options under the Chalmers scheme:
Same input -> same output & same qualia
Same input -> same output & different qualia
Same input -> same output & no qualia
I infer the ineffable green-ness of green is not even wrong. We have no grounds for thinking there is such a thing.
They are meant to be arbitrarily accurate, and so we would expect them to include qualia.
However, in the Chalmers vein consciousness is non-physical, which suggests it cannot be simulated through physical means. This yields a scenario very similar to the identical-yet-not-conscious p-zombie.
What do people in Chalmers's vein of belief think of the simulation argument?
If a person is plugged into an otherwise simulated reality, do all the simulations count as p-zombies, since they match all the input-output and lack-of-qualia criteria?
I do not think we need to go as far as i-zombies. We can take two people, show them the same object under arbitrarily close conditions, and get the answer of 'green' out of both of them while one does not experience green on account of being color-blind.
This looks like an information problem.
It is useful to remember that the market is an abstraction of aggregated transactions. The basic supply and demand graphs they teach us in early econ rely on two assumptions: rational agents, and perfect information.
I expect the imperfect information problem dominates in cases of new products, because producers have a hard time estimating return, and customers don't even know it exists. VCs are largely about developing a marginal information advantage in this space. Interestingly, all of the VCs I have personally interacted with (sample size: 5) say they pick teams over ideas.
When the people at Thinx were asked why the dominant companies hadn't done it already, what was their answer? If they couldn't answer, that would indicate to the VC the team didn't gather enough information to justify their claims (and thus were unprepared). I would expect the answer is some combination of competing with their own products, and demand is not big enough to be profitable with their scaled manufacturing methods.
On the subject of Tums: what is the socially optimal point for sugar-free Tums? How do we know the socially optimal outcome isn't regular Tums and mouthwash?
It is worth keeping in mind that how to defeat X is not well-defined. The usual method for circumventing the planning fallacy is to use whatever the final cost was last time. What about cases where there isn't a body of evidence for the costs? Rationality is just such a case; while we have many well-defined biases, we have few methods for overcoming them.
As a consequence, I determine whether to workaround or defeat X primarily based on how frequently I expect it to come up. The cost of X I find less relevant for two reasons: one, I have a preference against being mugged by Pascal's Wager into spending all my effort on low-likelihood events; two, high cost cases often have a well developed System 2 methodology to resolve them.
A bonus is that frequent cases lend themselves more easily to spaced repetition and habit formation. In this way, I hope to develop a body of past cases to refer to when trying to plan how long defeating a future X will take.
Examples of frequent cases: exercise, amazon purchases, reading articles. Examples of rare cases: job benefits, housing costs, vehicle purchases.
Military bonding is an interesting comparison. Training in a professional military relies on shared suffering to build a strong bond between the members of a unit. If we model combat as an environment so extreme that vulnerability is inescapable, the function of vulnerability as a bonding trait makes sense.
It also occurs to me that we almost universally have more control over how we signal emotions than how we feel them. The norm would therefore be that we feel more emotions than we show; by being vulnerable and signaling our emotions, other people can empathize instinctively and may feel greater security as a result.
What are your criteria for good foreign policy choices then? You have conveyed that you want Iraq to be occupied, but Libya to be neglected, so non-intervention clearly is not the standard.
My current best guess is 'whatever promotes maximum stability'. Also, how do you expect these decisions are currently made?
I would also have an easier time with ASCII, but that's because I (and presumably you also) have been trained in how to produce instructions for machines. This is a negligible chunk of humanity, so I thought it was equally discountable.
I suppose the spiritual analogy would be an ordained priest praying on behalf of another person!
As compared to what alternative? There is no success condition for large scale ground operations in the region. If the criticism of the current administration is "failed to correct the lack of strategic acumen in the Pentagon" then I would agree, but I wonder what basis we have for expecting an improvement.
It seems to me we can identify problems, but have no available solutions to implement.
A correct analogy between records and books would be the phonograph and the text of the book written in ASCII hexadecimal. Both are designed to be interpreted by a machine for presentation to humans.
Until someone demonstrates the utility of engaging in criticism of particular political groups, I will continue to treat it as noise.
We already know out-groups don't use the word rationality the way we do. We also know that assuming others share our information and frame of reference is an error. There is no new information here.
The thing to consider about the economy is that the president is not only not responsible, but mostly irrelevant. An easy way to see this is the 2008 stimulus packages. Critics of the president frequently share the graph of national debt which grows sharply immediately after he took office - ignoring that the package was demanded by congress and supported by his predecessor, who wore a different color shirt.
A key in evaluating a president is the difference between what he did, what he could have done, and what people think about him. Consider that the parties were polarizing before he took office.
In terms of specifics, I am disappointed that he continued most of the civil rights abuses of the previous administration with regards to due process. I also oppose the employment of the drone warfare doctrine, which is minimally effective at achieving strategic goals and highly effective at generating ill will in the region.
By contrast, I am greatly pleased at the administration's commitment to diplomacy and improvement of our reputation among our allies. I am pleased that major combat operations were ended in two theaters, and that no new ones were launched. I applaud the Iranian nuclear agreement.
If I am reading that right, they plan to oppose Skynet by giving everyone a Jarvis.
Does anyone know their technical people, and whether they can be profitably exposed to the latest work on safety?
According to historical analysis of every resolved insurgency since 1944, conducted by RAND, the best predictor of success in defeating one is achieving conventional military superiority. Details here: http://www.rand.org/pubs/research_reports/RR291z1.html
This looks to be a very good example of the dangers of a little bit of rationality, or a little bit of intelligence. The layout encourages deploying Fully General Counter Arguments. There appears to be no mechanism to ensure the information on which the arguments are based is either good, or agreed upon.
Ah - I appear to have misread your comment, then.
Would I be correct in limiting my reading of your remarks to rebutting the generalization you quoted?
I find it most relevant to planning and prediction. It helps greatly with realizing that I am not an exception, and so I should take the data seriously.
In terms of things that changed when my beliefs did, I submit the criminal justice system as an example. I now firmly think about crime in terms of individuals being components of a social system, and I am exclusively interested in future prevention of crime. I am indifferent to moral punishment for it.
You have oversimplified to uselessness.
A common counterexample is people who do not want the job, for example because it pays less than it costs to support their current lifestyle. That isn't lazy; it is the smart economic decision.
You are also assuming that the trouble of traveling to and from an interview is where the stress and effort lies. I would only credit that as the case if they had a high-demand skill set and were traveling across the country for the in-person interview, which is highly unlikely to apply to someone drawing unemployment benefits. The stress and effort stems from preparation before and performance during an interview, neither of which apply if the goal is to fail at it.
The most interesting segment of this section was The Ritual. I find the problem of how to go about making an effective practice very interesting. I would also like to draw attention to this section:
"I concede," Jeffreyssai said. Coming from his lips, the phrase was spoken with a commanding finality. There is no need to argue with me any further: You have won.
I experienced a phenomenon recently that tends to act as a brake on letting go: the commentary following concession. I was having a conversation with someone, and expressed an opinion. They countered, and after a few moments' consideration I saw they had completely invalidated my premise. When I said so, the conversation came to a halt as they asked 'Did I just win an argument?' When I said 'Yup,' they said 'Write that down!'
This speaks to the way we value how we argue. Refusing to concede is a way to demonstrate commitment and strength. I have on more than one occasion experienced a modicum of ridicule for agreeing too quickly, from the person I was agreeing with. When I was younger, I even did this myself - yet it is insane as I reflect on it. I felt argument was a competition, and winning too easily was like a sporting event where one team played abysmally; no entertainment value. I reflect that I should dedicate more effort to arguing for the sake of exploration.
The person with whom I had the exchange has no knowledge of or interest in rationality. The experience happening so soon on the heels of reading served to illustrate that while developing the field of rationality may rely on shared complex ideas and values, developing the practice of rationality may not.
How does this idea square with elections in the United States? Consider pollsters; their job is to make specific predictions based on understood methods, using data gathered by methods that are also understood.
Despite what was either fraud or tremendous incompetence in the last Presidential election cycle on the part of ideological pollsters, and the high degree of public attention paid to it, polarization has not meaningfully decreased in any way I can observe.
I therefore expect that making the candidates generate specific predictions would have little overall effect on polarization.