This seems quite similar to the "Gish gallop" rhetorical technique.
Perhaps, in a parallel to the kings earlier mentioned, this could be interpreted as Orion having seen the fortunes of continents rise and fall. Orion has seen the prominence of Africa as the source of humanity, and its subjugation by Europe; it has seen the isolation and the global power of the Americas; it has seen the mercantile empires of the West and its dark ages.
While, if successful, such an epistemic technology would be incredibly valuable, I think that the possibility of failure should give us pause. In the worst case, this effectively has the same properties as arbitrary censorship: one side "wins" and gets to decide what is legitimate, and what counts towards changing the consensus, afterwards, perhaps by manipulating the definitions of success or testability. Unlike in sports, where the thing being evaluated and the thing doing the evaluating are generally separate (the success or failure of athletes doesn't impede the abilities of statisticians, and vice versa), there is a risk that the system is both its subject and its controller.
I do think "[a]bility to contribute to the thought process seems under-valued" is very relevant here. A prediction-tracking system captures one...layer[^1], I suppose, of intellectuals; the layer that is concerned with making frequent, specific, testable predictions about imminent events. Those who make theories that are more vague, or with more complex outcomes, or even less frequent[^2][^3], while perhaps instrumental to the frequent, specific, testable predictors, would not be recognized, unless there were some sort of complex system compelling the assignment of credit to the vague contributors (and presumably to their vague contributors, et cetera, across the entire intellectual lineage or at least some maximum feasible depth).
This would be useful to help the lay public understand outcomes of events, but not necessarily useful in helping them learn about the actual models behind them; it leaves them with models like "trust Alice, Bob, and Carol, but not Dan, Eve, or Frank" rather than "Alice, Bob, and Carol all subscribe to George's microeconomic theory which says that wages are determined by the House of Mars, and Dan, Eve, and Frank's failure to predict changes in household income using Helena's theory that wage increases are caused by three-ghost visitations to CEOs' dreams substantially discredits it". Intellectuals could declare that their successes or failures, or those of their peers, were due to adherence to a specific theory, or the lay people could try to infer as such, but this is another layer of intellectual analysis that is nontrivial unless everyone wears jerseys declaring what theoretical school of thought they follow (useful if there are a few major schools of thought in a field and the main conflict is between them, in which case we really ought to be ranking those instead of individuals; not terribly useful otherwise).
[^1]: I do not mean to imply here that such intellectuals are above or below other sorts. I use layer here in the same way that it is used in neural networks, denoting that its elements are posterior to other layers and closer to a human-readable/human-valued result.
[^2]: For example, someone who predicts the weather will have much more opportunity to be trusted than someone who predicts elections. Perhaps this is how it should be; while the latter are less frequent, they will likely have a wider spread, and if our overall confidence in election-predicting intellectuals is lower than our confidence in weather-predicting intellectuals, that might just be the right response to a field with relatively fewer data points: less confidence in any specific prediction or source of knowledge.
[^3]: On the other hand, these intellectuals may be less applied not because of the nature of their field, but because of the nature of their specialization; a grand and abstract genius could produce incredibly detailed models of the world, and the several people who run the numbers on those models would be the ones rewarded with a track record of successful predictions.
Why _haven't_ they already switched? Presumably, these companies are full of people with at least vague incentives that point at maximizing efficacy, yet they're leaving a "clearly superior" product on the table. The answer may be some sort of systemic, widespread failure of decision-making, or a decision-making success under different criteria (lower tolerance for the risk of change, perhaps, than these same systems have now) rather than a reflection of some inadequacy of RT-LAMP, but "the folks with the expertise and incentive to get it right are all getting it wrong and leaving money on the table" sounds like a more complex explanation than "there are shortcomings to RT-LAMP that I haven't considered", and I'd like to see some further evidence in favor of it.
You may be familiar with the term "Technological Singularity" as used to describe what happens in the wake of the development of superintelligent AGI; this term is not mere grandiosity but refers to the belief that what follows such a development would be incredibly and unpredictably transformative, subject to new phenomena and patterns of which we may not yet be able to conceive.
I don't believe it would be smart to invest with such a scenario in mind; we have little reason to believe that how much pre-Singularity wealth one has would matter post-Singularity in such a way that it would be wise to include such a term in one's expected value and decision-making. It would be not entirely unlike buying stock based on which companies would most benefit from the announcement of an incoming Earth-shattering asteroid. The development of superintelligent AGI is an existential threat to just about every institution, including the stock market and our current conception of the economy in general. A rational, entirely selfish actor or aggregate thereof does not make plans for what happens after its death.
However, I must admit that I have no data on the subject, and while I would not guess that there is much relevant data available, I imagine there is some - did the U.S. stock market account for what companies might be most successful in the case of a Soviet conquest of the U.S.? Is the potential profitability of a company in a world transformed by a global Communist revolution accounted for in its current stock price? I do not know, but I would be very surprised to learn that the stock market priced scenarios in which it and the institutions on which it depends are unlikely to continue to exist in recognizable forms.
The example of the pile of sand sounds a lot like the Chinese Room thought experiment, because at some point, the function for translating between states of the "computer" and the mental states which it represents must begin to (subjectively, at least, but also with some sort of information-theoretic similarity) resemble a giant look-up table. Perhaps it would be accurate to say that a pile of sand with an associated translation function is somewhere on a continuum between an unambiguously conscious (if anything can be said to be conscious) mind (such as a natural human mind) and a Chinese Room. In such a case, the issue raised by this post is an extension of the Chinese Room problem, and may not require a separate answer, but does do the notable service of illustrating a continuum along which the Chinese Room lies, rather than a binary.
I'm not sure if this is a brilliantly ironic example of the lack of absolute applicability of these guidelines or just a happy accident.
Not entirely true; low sperm counts are associated with low male fertility in part because sperm carry enzymes which clear the way for other sperm - so a single sperm isn't going to get very far.
In addition to enjoying the content, I liked the illustrations, which I did not find necessary for understanding but which did break up the text nicely. I encourage you to continue using them.
1) Historical counter-examples are valid. Counter-examples of the form of "if you had followed this premise at that time, with the information available in that circumstance, you would have come to a conclusion we now recognize as incorrect" are valid and, in my opinion, quite good. Alternately, this other person has a very stupid argument; just ask about other things which tend to be correlated with what we consider "advanced", such as low infant mortality rates (does that mean human value lies entirely in surviving to age five?) or taller buildings (is the United Arab Emirates the objectively best country?).
2) "Does life have meaning" is a confused question. Define what "meaning" means in whatever context it is being used before engaging in any further debate, otherwise you will be arguing over definitions indefinitely and never know it. Your argument does sound suspiciously similar to Pascal's Wager, which I suspect other commenters are more qualified to dissect than I am.
I agree that growth shouldn't be a big huge marker of success (at least at this point), but even if it's not a metric on which we place high terminal value, it can still be a very instrumentally valuable metric - for example, if our insight rate per person is very expensive to increase, and growth is our most effective way to increase total insight.
So while growth should be sacrificed for impact on other metrics - for example, if growth has a strong negative impact on insight rate per person - I would say it's still reasonable to assume it's valuable until proven otherwise.
Are we in any real danger of growing too quickly? If so, this is relevant advice; if not - if, for example, a doubling of our growth rate would bring no significant additional danger - I think this advice has negative value by making an improbable danger more salient.
Not necessarily; the three sorts of excellent organizations you mention are organizations whose excellence is recognized by the rest of the world in some way, granting its members prestige, opportunities, and money. I suspect this is what attracts people to a large extent, not a general ability to detect organizational goodness. This sort of recognition may be very difficult to get without being very good at whatever it is the organization does, but that does not imply that all good organizations are attractive in this way.
Having recently read The Craft & The Community: A Post-Mortem & Resurrection, I think that its advice on recruiting makes a lot of sense: meet people in person, evaluate who you think would be a good fit - especially those who cover skill or viewpoint gaps that we have - and bring them to in-person events.
I would be very interested in reading, say, a blog post (or series thereof) exploring why this happens (and, if remotely possible, directing motivated individuals towards ways to support faster adoption of successful treatments).
First, I think this is an excellent idea, and I wish you the best of luck.
Second, what mechanisms do you have in place for getting feedback about the content you produce? I'm aware that for a broadcast medium using a platform over which you do not have full control, your feasible options may be limited, but I strongly encourage you to consider (possibly when this project has reached a stable state, because this will take a non-trivial amount of resources) some amount of focus group A/B testing for comprehension and internalization. From the beginning, you should probably have one or two individuals close to your target audience (i.e. Italian-speaking, without prior Rationality experience) off of whom to bounce ideas. Yours is an ambitious plan and I would hate for it to lose contact with reality.
Third, if you are doing this at least in part as a response to irrationality in voter choices, I suggest (based on my awareness of the situation in the US) focusing on:
What statistics are comparable to each other? e.g. Politician P says that Group G is responsible for X% of Crimes. How does this compare to the national average? How does this compare to the national average when weighted by socioeconomic status to reflect the socioeconomic distribution of Group G? What factors could explain this, and which numbers are the right ones to use as a baseline?
Conservation of evidence: if a given study/exploration/piece of possible evidence has two outcomes, they can't both make you more confident in a given position. The examples I've seen used in this community are in [this article](http://lesswrong.com/lw/ii/conservation_of_expected_evidence/).
I think this is a very valuable concept to keep fresh in the public consciousness.
However, I think it is in need of better editing; right now its formatting and organization make it, for me at least, less engaging. This is less of an issue because it's short; I imagine that a longer piece in the same style would suffer more reader attrition.
It might help to read over your piece and then try to distill it down to the essentials, repeatedly; it reads right now as if it is only a few steps removed from straight stream-of-consciousness. Or it might not; at this point I'm speculating wildly about the creative processes of someone I've never met, so take my implementation advice with a grain of salt.
Either way, I look forward to reading more of your insights.
Perhaps part of the desire to avoid conformity is a desire to avoid comparability, for fear of where one might end up in a comparison.
If I am one of one hundred people doing the same thing in the same way - working on a particular part of an important problem, or embracing a very specific style - I run the psychological risk of discovering that I am strictly worse than a large number of other people.
If, instead, I am one of one hundred people doing different things in different ways, things about me - the skills I bring to bear on the problem - cannot easily be compared and found wanting. I am protected from the threat to my self-esteem by the confusion of the variety in approaches, which I can easily blame even if my efforts produce results which are comparable and inferior to others'.
You have the right to have beliefs which you know or could reasonably conclude are probably false, though it is advisable you not exercise it.
You have the right to have beliefs which you have reason to believe are probably true, even if an overwhelming majority of well-informed experts disagrees, though it is advisable you exercise it only when you have a very good reason to believe you are right (i.e. when you have carefully considered expert majority disagreement as evidence of a strength relative to the capability of the experts and the nature of the system of incentives in which they operate, and have sufficiently strong evidence in the other direction).
You have the right to make a series of bald assertions on variations of these rights, interwoven such as not to imply a distinction between the advisable and the inadvisable, and in such large numbers that disagreement over any specific point can be dismissed as only minorly affecting the conclusion, and that refuting all points would be difficult due to the limitations of the forum in which they are posted.
You have the right to claim that anything is a duty, but everyone else has the right to ignore it.
I apologize for the formatting; I tried to copy and paste from another app to get around the character-eating behavior of the comment box on mobile, and it seems to have resulted in this monstrosity which is immune to edits.
Perhaps I ought to just start posting comments as links to Google Docs.
I think that an important addition would be other data about the participants in a given intervention, that could ideally help newcomers filter out interventions which are reasonably likely to have a positive effect in the general population but unlikely to apply to some subset of people.
For example, let's imagine that melatonin is effective for 60% of all people: 80% of people who describe themselves as "morning people", but only 40% of people who do not. This is useful information for both groups (assuming the difference is statistically significant), and would be lovely to include in our cookbook.
This would require more information-gathering about individual users (and we should definitely have a "decline to disclose" option, particularly for more sensitive topics). If we want to be able to change what data we are collecting later (imagine that we suddenly have reason to believe that hair color is relevant to melatonin impact), we will need to store individual usernames in order to contact participants later.
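As a quick sanity check on the hypothetical numbers above (they're purely illustrative, not from any real study), we can ask what mix of "morning people" and everyone else those subgroup rates would imply:

```python
# Illustrative numbers from the melatonin example above.
p_all = 0.60      # effectiveness in the whole population
p_morning = 0.80  # effectiveness among self-described "morning people"
p_other = 0.40    # effectiveness among everyone else

# If a fraction f of the population are morning people, then
#   p_all = f * p_morning + (1 - f) * p_other
# Solving for f tells us what population mix these numbers imply.
f = (p_all - p_other) / (p_morning - p_other)
print(f)  # the three rates are mutually consistent only if f is between 0 and 1
```

Here the numbers work out to f = 0.5, i.e. they're consistent if exactly half of respondents are morning people; collecting the subgroup sizes alongside the outcomes would let the cookbook check this kind of consistency automatically.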
I would be interested in helping with this project. My employer currently owns anything software-related I produce, but is willing to make reasonable exceptions where a project does not intersect with its business; if this project does materialize in a more concrete form, I would be able to present it to my employer and ask for permission to contribute. So if someone starts it, I would like to support it.
For example, let's imagine that melatonin is effective for 60% of all people: 80% of people who describe themselves as "morning people", but only 40% of people who do not. This is useful information for both groups (assuming the difference is statistically significant), and would be lovely to include in our cookbook.
This would require more information-gathering about individual users (and we should definitely have a "decline to disclose" option, particularly for more sensitive topics). If we want to be able to change what data we are collecting later (imagine that we suddenly have reason to believe that hair color is relevant to melatonin impact), we will need to store individual usernames in order to contact participants later.
Perhaps instead the karma of a post ought not to be linear in the number of upvotes it receives? If the karma of a post is best used as a signal of the goodness of the post, then it is less noisy as more data points appear, but not linearly so.
There is perhaps still a place for karma as a linear reward mechanism - that is, pleasing 10 people enough to get them to upvote is, all other things being equal, 10 times as good as pleasing 1 person - but this might be best separated from the signal aspect.
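A minimal sketch of this separation, with a square-root transform as just one illustrative choice of sublinear signal (not a proposal for the actual curve):

```python
import math

# Signal: sublinear in net votes, since each additional vote adds less
# new information about the post's quality than the one before it.
def signal_score(upvotes: int, downvotes: int) -> float:
    net = upvotes - downvotes
    return math.copysign(math.sqrt(abs(net)), net)

# Reward: kept linear, so pleasing 10 people really is 10x pleasing 1.
def reward_score(upvotes: int, downvotes: int) -> int:
    return upvotes - downvotes

print(signal_score(100, 0))  # 100 upvotes is only 10x the signal of 1
print(reward_score(100, 0))  # but still 100x the reward
```

The point of splitting the two is that a reader scanning for quality sees the damped signal number, while the author's incentive tally stays proportional to people pleased.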
"Epistemic Status" is meant to convey why the author believes something, not why the quality of the writing is what it is.
I say this not to score Pedantry Points, but because I really like having "Epistemic Status" clarifications at the top of an article and would be dismayed if the term mutated away from its current usefulness.
This post seems to mesh really well with Zeroing Out. Both are about how the status quo has a lot of valuable knowledge, and shouldn't be rejected entirely; in my case, reading one after the other helped both of them click.
Thank you for stumbling upon a way to make link posts work for me on mobile after landing on them through an RSS reader.
I think that there is a potentially dangerous implication in the comparison between the BoJ and the stock market: that the real essence of the difference between them is incentives. (At least, the way that I read it allowed for that interpretation; I'm not sure if this reading is sufficiently universal.)
I think that the general class of thing which is present in a stock market but not a central bank is an error-correction mechanism. In this case, that mechanism takes the form of very clear and direct monetary incentives. But we should expect other mechanisms to achieve the same purpose (though many may be, or be connected to, non-central examples of incentives as well).
Experimentation is one; I believe physicists, for example, because they have data to back up their predictions and the field tends to check theories against data. Peer review (formal or informal - namely, the ability of a field to call out and reject bad ideas) is another; my trust in this mechanism for correcting errors is the basis of my trust in a great deal of science (basically the idea that many qualified persons are keeping watch and could effectively raise an alarm if something important went wrong).
It seems reasonable to allow for disagreement with a field or institution if you can determine that its conclusions seem to have been reached in the absence of such a mechanism. In particular, if a field lacks, say, expert consensus, or an institution is going against that consensus, it seems reasonable to assume that there is an opportunity for a layperson to do reasonably well at interpreting expert-generated evidence from the rest of the field.
The requirements are even more lax, I believe, for errors of omission, which Eliezer mentions in his description of Brienne's light issues. I think this could reasonably be called a different category of problem.
I think an important distinction to make here is between the beliefs "there is a God who is Good in a nonspecific way that doesn't contradict very basic morality" and "there is a God who is very concerned with our day-to-day behavior and prescribes additional moral and factual precepts beyond what society generally already believes".
The former is the sort of belief which seems partially optimized for never needing to be examined (I'll wave my hands and say "memetic evolution" here as if I'm confident that I know what it means), and is probably more common among scientists and liberals and people with whom atheists are likely to agree than the latter. From an instrumental rationality perspective, it's the latter which ends inquiry and stifles truth, and the latter which we need to destroy by raising the waterline; the former is just collateral damage.
We rejoice at the increase of the share of matter being used in human minds.
I appreciate the disclaimer that this is meant to give context and highlight options, rather than persuade the reader of the correctness of those options.
Particularly given the mind-killing properties of research into gender dynamics, and the unearned explanatory flexibility that tends to accompany the sort of evolutionary psychology that looks to be involved in your model, can you explain why this map should be believed?
I think this form ends up a lot better. The explanation of what you, specifically, in this instance mean by "cynic" is still necessary and good, but since "cynic" doesn't have the same valence as "sociopath", it seems much less bait-and-switch.
If you have written a title for which you feel compelled to apologize in the second paragraph in order to explain what you mean, you have written a misleading title and this is behavior that I would very much like to disincentivize.
I've been able to get to the "Submit Comment" button on mobile in portrait (by tapping elsewhere to exit the editor before doing so), but my problem has been that the text box tends to lose all my progress oh, every other character or so. As a result, this comment has been copied and pasted from Google Keep.
I'd be interested in reading a more complete post on these concepts.
I personally would be particularly interested in the Standards, Social Reality, and Agency Pipeline posts.
I think that the benefit of criticizing publicly is that it allows your criticism to in turn be criticized.
Let us say that Alice writes a post. Bob finds the material too <adjective> to interest him. If he messages Alice privately, that is the extent of the feedback. If, however, he comments as such, Carol, Daniel, and Eve may all chime in saying that they found the material the right amount of <adjective> to be interesting. The vocal minority inspires feedback from the silent majority, who might not have independently thought to give Alice feedback (because they didn't realize their views weren't universal, because they didn't believe they had actionable feedback, or because they didn't have anything they specifically disagreed with).
Like all systems of voluntary feedback, there's a great deal of self-selection involved; moving from private to public feedback just affects who selects themselves. But I think it can do so in a valuable direction.
You mention wanting to be incentivized to research things, and also that a particular danger to the community is writers optimizing for engagement at the expense of other things.
It seems like a possible partial remedy for this would be a mechanism for the readership to make their desires known in a centralized place. Right now, a hypothetical writer William, if they want to craft content the community wants, would be best served by doing a review of past posts in search of things which are consistently popular. If they are lucky and clever, they may even be able to infer a niche which hasn't been filled - a gap in some topic, or perhaps a missing topic altogether - that would be welcome. This is time-consuming and presumably error-prone. Perhaps more importantly, it seems likely to produce content which is similar to what has come before. (Disclaimer: I have never made a top-level post, so if there is a different, quicker, more accurate process, please let me know.)
If instead the community could vote to say "We would like more posts on Machine Learning" or "We want posts on what Dr. X's latest psychology research means about consciousness", this would create an easily visible incentive to research and an incentive to explore specific topics (which may steer writers away from optimizing for comments, although the comments will likely follow). It adds complexity to the site, but I think it may be worth an experiment.
The problem, I think, with making pop evo-psych assertions without a solid foundation of comprehensive and well-explained research does not go away even when the assertions are not outlandish: we don't have a mechanism to judge these ideas other than common sense (or, in the worst case [not shown here], a misleadingly narrow selection of research).
This means that, while evo-psych may provide an interesting framework, it may also reinforce our existing preconceptions and give us false confidence in our conclusions. The outside view says that unsupported evo-psych assertions are likely to be wrong; I think that in many cases in which they are not wrong, they are not wrong for reasons independent of the inclusion of evo-psych itself. Whatever mechanism you used to judge that these particular theories were reasonable was probably sufficient without bringing in evo-psych. I think this post is a useful way of calling attention to certain human tendencies without it.
In general, however, I've really enjoyed your mini sequence and encourage you to keep up the good work.
"Three years is an awfully long time in the Internet world."
Publication date: April 6, 2000
I think the content of this article is a good recommendation against this article.
(cross-posted from the SSC comment thread)
People who have close friends with a wide range of “fields” (i.e. behaviors that they unconsciously evoke in others): do you observe differences in the behavior? Is there anything you notice that could be replicated to achieve a desired effect?
One reason I suspect that "manipulative" is often assumed to go along with "selfish", even when the two could be unrelated, is that risk aversion kicks in: a manipulative selfish person may be more harmful than a manipulative selfless person is helpful, and both will be more impactful than a naive selfish or selfless person. So rounding off an uncertain estimate of "manipulative, selfishness unknown" to "manipulative, selfish" may be a good defense. The costs of a failed alarm are higher than the costs of a false one.
This is particularly true if you don't believe that you need to be manipulated in order to be helped. If you believe that you are capable of making good decisions based on honest information, the expected value of an interaction with a naive selfless person rises relative to the expected value of an interaction with a manipulative selfless person. If you are on the side of truth - and of course you are! - then you have no need for helpful lies. Selfless manipulation then seems at best condescending.
It seems like there's really just one base Umeshism: "If you have not encountered adverse consequences, you have sunk too much value into mitigating risk."
I don't agree with this general form in all possible permutations; it could be instantiated as "If you've never been killed by a car, you've spent too much time looking both ways", and there doesn't seem to be a distinction contained in the structure that separates that example and other, less obviously wrong instantiations.
Perhaps the appropriate lesson is "If you are investing in avoiding adverse consequences, but have never suffered them, consider that you may be sacrificing too much for the sake of risk aversion." A general instruction to update in a particular direction on the proper balance between risk aversion and reward seeking, but not an absolute. However, in order to believe that, I'd like to see some evidence that the modal member of the target audience is actually overestimating the risks they want to avoid to such a degree that blanket advice to be less risk-averse would actually be helpful.
That may be enough: https://xkcd.com/810/
> If you say "food weights must be within 3% of what's on the packaging", then they'll be 3% below.
I would guess that setting this sort of regulation takes this into account, and the process for devising it must be accordingly more complicated.
The way the regulatory agency ought to get this number (note: I have no relevant background or experience, so this is all wild guessing) might look something like:
- Estimate the cost to the supplier of determining the weight of a package as a function of average accuracy. For example, it may cost $0.01 per package to determine the weight with a standard deviation of 1%, $0.04 for std dev of 0.5%, etc.
- Estimate the cost to the consumer of inaccurate package weights (which could just be linear, as in a package which is 3% underweight being 3% less valuable, but for large inaccuracies and certain products - like medicine - can be different).
- Find some balance between costs to the supplier (taking into account, perhaps, that such costs may be different for large suppliers with enough throughput to easily absorb the fixed cost and small suppliers which will face a daunting up-front cost) and the consumer (taking into account, perhaps, that some consumers may be more sensitive than others to inaccuracies).
In such a case, the average inaccuracy wouldn't be 3%; if the supplier had the capability to calculate their weights so exactly that every package was exactly 3% underweight, the regulatory agency would presumably insist on 0% inaccuracy. Instead, the supplier would pick an inaccuracy between 0% and 3% such that they could be certain that enough of their packages would be less than 3% inaccurate.
Any supplier which did not do so wisely might get away with it for a time, but would eventually put out a sub-par product and presumably be penalized for it. Even a self-interested person will build a margin of error to ensure that they are never found to be noncompliant. These margins may be different for different suppliers - if the agency makes regulations with the intent of keeping barriers to entry low, it will allow a wider margin of error so as not to exclude potential suppliers without affordable access to super-precise measurement technology, allowing suppliers with it to choose an inaccuracy which is allowed by regulation but still safe for them. While more difficult, a remedy to this issue may be to additionally regulate on intent (e.g. an internal email saying "let's aim for 2% inaccuracy, we can get away with it" would be evidence of wrongdoing) or known ability (e.g. holding suppliers with better measurement capability to a higher standard).
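The margin-of-error reasoning above can be made concrete with a toy model. Suppose a supplier's fill error is normally distributed around some target offset, with a standard deviation set by their measurement technology, and compliance means almost never shipping a package more than 3% underweight. All numbers here are illustrative assumptions, not real regulatory figures:

```python
import math

def violation_rate(target_offset: float, std_dev: float, limit: float = -0.03) -> float:
    """P(package error < limit) if fill error ~ Normal(target_offset, std_dev)."""
    z = (limit - target_offset) / std_dev
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def choose_offset(std_dev: float, max_violations: float = 0.001) -> float:
    """Smallest average offset (searched in 0.05% steps, starting right at
    the -3% line) that keeps violations at or below max_violations."""
    offset = -0.03
    while violation_rate(offset, std_dev) > max_violations:
        offset += 0.0005
    return offset

# A precise supplier can deliberately run underweight and still comply;
# a sloppy one must overfill on average to stay clear of the same line.
for sd in (0.005, 0.015):
    print(f"std dev {sd:.1%}: aim for {choose_offset(sd):+.2%} average error")
```

This reproduces the point in the text: the supplier with tight measurement (0.5% std dev) can safely target an average somewhere between 0% and 3% underweight, while the supplier with loose measurement (1.5% std dev) is forced to give product away on average just to be sure of compliance.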
To return to forum posting: rational trolls will not break every unwritten rule while perfectly respecting all of the written ones, because they likely can't be precise enough to do so consistently. Irrational trolls will try, fail, and be punished for it. Some trolls are better at staying close to the line without crossing it than others; moderators may want to hold these folks to higher standards or simply add a provision to punish those they believe to be acting within the rules but without good faith.
In US law*, restrictions on speech can be content-based (e.g. banning white supremacist content, even if it's polite) or content-neutral (e.g. banning insults by anyone). I think it maps rather well onto what you're describing and is a better dichotomy than libertarian vs. non-libertarian.
\* Source: https://lawshelf.com/courseware/entry/limitations-on-expression
I'm not sure if this is specific to my device/browser (Android phone using Chrome), but if there is supposed to be a link it isn't apparent.
I'm curious; what are the origins of the hypothetical opponent in this discussion? That is, what articles/people/sources of arguments do you see promoting those views, either explicitly or implicitly? Now that you've presented a (straw? weak? accurate? steel?) man version of that position, I'm interested in learning more context.
I do not think that attempting to resolve or compensate for gaps in knowledge by filling them with something chosen to be narratively satisfying is an endeavor that will have accurate or useful results.
I am someone who has found that I'm using Wikipedia less, and I find that I'm relying more on Google than I used to, for what I used to use Wikipedia for. In particular, Featured Snippets in Search (which will often pull an excerpt from a Wikipedia article!) are a fantastic substitute for quick questions that I would, in past years, have asked Wikipedia, although it isn't a substitute for a deeper exploration.