Posts

Map and territory: Natural structures 2017-08-01T13:53:29.532Z
90% of problems are recommendation and adaption problems 2017-07-12T04:53:08.356Z
Red Teaming Climate Change Research - Should someone be red-teaming Rationality/EA too? 2017-07-07T02:16:29.949Z
Self-conscious ideology 2017-06-28T05:32:23.146Z
‘Crucial Considerations and Wise Philanthropy’, by Nick Bostrom 2017-03-31T12:52:46.209Z
On Arrogance 2017-01-20T01:04:27.188Z
Unspeakable conversations 2016-12-07T15:24:33.546Z
Which areas of rationality are underexplored? - Discussion Thread 2016-12-01T22:05:27.780Z
Debating concepts - What is the comparative? 2016-11-30T14:46:23.888Z
Terminology is important 2016-11-30T00:37:58.228Z
The Non-identity Problem - Another argument in favour of classical utilitarianism 2016-10-18T13:41:14.047Z
Avoiding strawmen 2016-06-17T08:20:51.673Z
Revitalising Less Wrong is not a lost purpose 2016-06-15T08:10:04.070Z
When considering incentives, consider the incentives of all parties 2016-05-29T13:47:40.571Z
The Validity of the Anthropic Principle 2016-04-23T09:12:33.606Z
Anthropics and Biased Models 2016-04-15T02:18:11.959Z
Positive utility in an infinite universe 2016-01-29T23:40:45.491Z
The Number Choosing Game: Against the existence of perfect theoretical rationality 2016-01-29T01:04:49.557Z
Variations on the Sleeping Beauty 2016-01-10T13:00:52.115Z
Consequences of the Non-Existence of Perfect Theoretical Rationality 2016-01-09T01:22:38.059Z
Consciousness and Sleep 2016-01-07T12:04:32.572Z
In favour of total utilitarianism over average 2015-12-22T05:07:02.767Z
In defense of philosophy 2015-12-22T01:53:37.945Z
The meaning of words 2015-11-27T05:15:33.311Z
Creating lists 2015-11-25T04:41:18.335Z
Mark Manson and Rationality 2015-11-25T03:34:44.116Z
Updating on hypotheticals 2015-11-06T11:49:03.800Z
Survey Articles: A justification 2015-10-18T10:06:05.297Z
Survey Article: How do I become a more interesting person? 2015-10-18T10:04:29.135Z
Philosophical schools are approaches not positions 2015-10-09T09:46:31.711Z
The Trolley Problem and Reversibility 2015-09-30T04:06:00.343Z
Hypothetical situations are not meant to exist 2015-09-27T10:58:25.433Z
Subjective vs. normative offensiveness 2015-09-25T04:10:34.282Z
Should there be more people on the leaderboard? 2015-09-02T11:52:27.771Z
Yvain's most important articles 2015-08-16T08:27:49.268Z
Human factors research seems very relevant to rationality 2015-06-21T12:55:35.479Z
Less Wrong lacks direction 2015-05-25T14:53:30.972Z
Communities: A single moderator is often superior to the wisdom of crowds 2015-05-03T09:21:12.575Z
Tim Ferris Experiment 2015-04-29T14:27:57.854Z
Who are your favourite rationalist bloggers? 2015-04-12T13:58:35.525Z
Revisiting Non-centrality 2015-03-26T01:49:07.710Z
Getting better at getting better 2015-03-03T11:12:54.019Z
What subjects are important to rationality, but not covered in Less Wrong? 2015-02-27T11:57:42.747Z
What are the thoughts of Less Wrong on property dualism? 2015-01-03T13:24:43.193Z
Noticing 2014-10-20T07:47:54.388Z
Truth and the Liar Paradox 2014-09-02T02:05:26.189Z
Experiments 1: Learning trivia 2014-07-20T10:31:38.377Z
Observational learning and the importance of high quality examples 2014-04-06T07:00:27.187Z
Intelligence-disadvantage 2014-03-16T07:14:57.212Z
What are some related communities online? 2014-02-24T13:04:51.261Z

Comments

Comment by casebash on Leaving beta: Voting on moving to LessWrong.com · 2018-03-18T14:08:02.786Z · LW · GW

I don't think there is anything stopping you from trying to create a test LW2 account to see if you will be locked out.

Comment by casebash on Leaving beta: Voting on moving to LessWrong.com · 2018-03-18T14:06:31.038Z · LW · GW

Have you seen the notifications up the top right? Does that do what you want?

Comment by casebash on An alternative way to browse LessWrong 2.0 · 2018-03-12T12:14:16.055Z · LW · GW

How haven't they caught up to 90s-era newsreaders?

Comment by casebash on Feedback on LW 2.0 · 2017-10-02T11:08:10.137Z · LW · GW

What are the plans for the Wiki? If the plan is to keep it the same, why doesn't Lesser Wrong have a link to it yet?

Comment by casebash on Feedback on LW 2.0 · 2017-10-02T10:41:53.543Z · LW · GW

I agree that people should not be able to upvote or downvote an article without having clicked through to it.

I also find the comments hard to parse because the separation is less explicit than on either Reddit or here.

Comment by casebash on LW 2.0 Open Beta starts 9/20 · 2017-09-21T13:29:58.511Z · LW · GW

It works now.

Comment by casebash on LW 2.0 Open Beta starts 9/20 · 2017-09-20T23:18:22.268Z · LW · GW

It does not seem to be working.

Comment by casebash on LW 2.0 Strategic Overview · 2017-09-15T22:40:52.934Z · LW · GW

Are there many communities that do that apart from MetaFilter?

Comment by casebash on LW 2.0 Strategic Overview · 2017-09-15T09:54:04.426Z · LW · GW

Firstly, well done on all your hard work! I'm very excited to see how this will work out.

Secondly, I know that this might be best after the vote, but don't forget to take advantage of community support.

I'm sure that if you set up a Kickstarter or similar, people would donate to it, now that you've proven your ability to deliver.

I also believe that, given how many programmers we have here, many people will want to make contributions to the codebase. My understanding was that this wasn't really happening before: a) because the old codebase was messy and extremely difficult to get up and running, and b) because it wasn't clear who to talk to if you wanted to know whether your changes were likely to be approved.

It looks like a) has been solved; if you also improve b), then I expect a bunch of people will want to contribute.

Comment by casebash on Map and territory: Natural structures · 2017-08-03T14:00:52.117Z · LW · GW

It's just an example.

Comment by casebash on Map and territory: Natural structures · 2017-08-03T14:00:21.781Z · LW · GW

Yes, they don't appear in the map, but when you see a mountain you think, "Hmm... this really needs to go in the map."

Comment by casebash on People don't have beliefs any more than they have goals: Beliefs As Body Language · 2017-07-30T14:26:18.696Z · LW · GW

I think it is important to note that there are probably some ways in which this is adaptive. We nerds probably spend far too much time thinking and trying to be consistent when it offers us very little benefit. It's also socially advantageous to be more flexible - people don't like people who follow the rules too strictly, as they are more likely to dob them in. It also makes it much easier to appear sincere, yet still come up with an excuse for avoiding your prior commitments.

Comment by casebash on Models of human relationships - tools to understand people · 2017-07-30T12:58:22.414Z · LW · GW

Interesting post, I'll probably look more into some of these resources at some point. I suppose I'd be curious to know which concepts you really need to read the book for and which ones can be understood more quickly, because reading through all of these books would be a very big project.

Comment by casebash on 90% of problems are recommendation and adaption problems · 2017-07-14T10:07:04.790Z · LW · GW

"I'm assuming you mean "new to you" ideas, not actually novel concepts for humanity as a whole. Both are rare, the latter almost vanishingly so. A lot of things we consider "new ideas" for ourselves are actually "new salience of an existing idea" or "change in relative weighting of previous ideas"." - well that was kind of the point. That if we want to help people coming up with new ideas is somewhat overrated vs. recommending existing resources or adapting existing ideas.

Comment by casebash on 90% of problems are recommendation and adaption problems · 2017-07-14T10:05:05.379Z · LW · GW

Hopefully the new LW has an option to completely delete a thread.

Comment by casebash on 90% of problems are recommendation and adaption problems · 2017-07-13T14:47:45.032Z · LW · GW

I can't see any option to report it :-(

Comment by casebash on 90% of problems are recommendation and adaption problems · 2017-07-12T13:51:50.868Z · LW · GW

I guess what I was saying is that insofar as you require knowledge, what you tend to need is usually a recommendation to read an existing resource or an adaption of ideas in an existing resource, as opposed to new ideas. The balance of knowledge vs. practise is somewhat outside the scope of this article.

In particular, I wrote: "I'm not saying that this will immediately solve your problem - you will still need to put in the hard yards of experiment and practise - just that lack of knowledge will no longer be the limiting factor."

Comment by casebash on In praise of fake frameworks · 2017-07-11T09:24:52.840Z · LW · GW

I wrote a post on a similar idea recently - self-conscious ideologies (http://lesswrong.com/r/discussion/lw/p6s/selfconscious_ideology/) - but I think you did a much better job of explaining the concept. I'm really glad that you did this because I consider it to be very important!

Comment by casebash on Red Teaming Climate Change Research - Should someone be red-teaming Rationality/EA too? · 2017-07-07T02:17:19.246Z · LW · GW

Link doesn't seem to be working: http://reason.com/blog/2017/07/06/red-teaming-climate-chang1

Comment by casebash on Lesswrong Sydney Rationality Dojo on zen koans · 2017-07-04T01:20:58.704Z · LW · GW

What did you do re: Captain Awkward advice?

Comment by casebash on The Use and Abuse of Witchdoctors for Life · 2017-06-29T07:15:55.139Z · LW · GW

Yeah, I have a lot of difficulty understanding Lou's essays as well. Nonetheless, there appear to be enough interesting ideas there that I will probably reread them at some point. I suspect that attempting to write a summary of the point he is making as I go might help clarify things.

Comment by casebash on Self-conscious ideology · 2017-06-29T06:59:25.522Z · LW · GW

"'Rationality gives us a better understanding of the world, except when it does not"

I provided this as an exaggerated example of how aiming for absolute truth can mean that you produce an ideology that is hard to explain. More realistically, someone would write something along the lines of "rationality gives us a better understanding of the world, except in cases a), b), c)...", but if there are enough of these cases, and they are complex enough, then in practise people round it off to "X is true, except when it is not", ie. they don't really understand what is going on, as you've pointed out.

The point was that there are advantages to creating a self-conscious ideology that isn't literally true but has known flaws, such as its being much easier to actually explain, so that people don't end up confused as above.

In other words, as far as I can tell, your comment isn't really responding to what I wrote.

Comment by casebash on Self-conscious ideology · 2017-06-29T06:52:04.545Z · LW · GW

Can you add any more detail about what precisely Continental Rationalism is? Or, even better, if you have time, it's probably worth writing up a post on this.

Comment by casebash on Effective Altruism : An idea repository · 2017-06-25T06:13:24.648Z · LW · GW

Additionally, how come you posted here instead of on the Effective Altruism Forum (http://effective-altruism.com/)?

Comment by casebash on Effective Altruism : An idea repository · 2017-06-25T02:27:55.447Z · LW · GW

If you want casual feedback, probably the best location currently is: https://www.facebook.com/groups/eahangout/.

I definitely think it would be useful; the problem is that building such a platform would probably take significant effort.

There are a huge number of "ideas" startups out there. I would suggest taking a look at them for inspiration.

Comment by casebash on S-risks: Why they are the worst existential risks, and how to prevent them · 2017-06-22T06:57:28.997Z · LW · GW

I think the reason why cousin_it's comment is upvoted so much is that a lot of people (including me) weren't really aware of S-risks or how bad they could be. It's one thing to just make a throwaway line that S-risks could be worse, but it's another thing entirely to put together a convincing argument.

Similar ideas have appeared in other articles, but they framed it in terms of energy-efficiency while invoking unfamiliar terms such as computronium or the two-envelopes problem, which makes it much less clear. I don't think I saw the links for either of those articles before, but if I had, I probably wouldn't have read them.

I also think that the title helps. S-risks is a catchy name, especially if you already know x-risks. I know that this term has been used before, but it wasn't used in the title. Further, while it is quite a good article, you can read the summary, introduction and conclusion without encountering the idea that the author believes that s-risks are much greater than x-risks, as opposed to being just yet another risk to worry about.

I think there's definitely an important lesson to be drawn here. I wonder how many other articles have gotten close to an important truth, but just failed to hit it out of the park for some reason or another.

Comment by casebash on Instrumental Rationality 1: Starting Advice · 2017-06-21T04:56:05.660Z · LW · GW

Thanks for writing this post. Actually, one thing that I really liked about CFAR is that they gave a general introduction at the start of the workshop about how to approach personal development. This meant that everyone could approach the following lectures with an appropriate mindset of how they were supposed to be understood. I like how this post uses the same strategy.

Comment by casebash on Concrete Ways You Can Help Make the Community Better · 2017-06-17T13:17:56.220Z · LW · GW

Part of the problem at the moment is that the community doesn't have a clear direction like it did when Eliezer was in charge. There was talk about starting an organisation in charge of spreading rationality before, but this never actually seems to have happened. I am optimistic about the new site that is being worked on though. Even though content is king and I don't know how much any of the new features will help us increase the amount of content, I think that the psychological effect of having a new site will be massive.

Comment by casebash on The Rationalistsphere and the Less Wrong wiki · 2017-06-13T01:49:32.827Z · LW · GW

I probably don't have time to be involved in this, but I'm just commenting to note my approval for this project and appreciation for anyone who chooses to contribute. One major advantage of this project is that any amount of effort here will provide value - it isn't like a spaceship, which isn't useful half-built.

Comment by casebash on Bet or update: fixing the will-to-wager assumption · 2017-06-09T01:27:55.423Z · LW · GW

The fact that an agent has chosen to offer the bet, as opposed to the universe, is important in this scenario. If they are trying to make money off you, then the way to do that is to offer an unbalanced bet on the expectation that you will take the wrong side - for example, when you think you have inside information, but they know that information is actually unreliable.

The problem is that you have to always play when they want, whilst the other person only has to sometimes play.

So I'm not sure if this works.

Comment by casebash on Bet or update: fixing the will-to-wager assumption · 2017-06-09T01:00:16.148Z · LW · GW

Partial analysis:

Suppose David is willing to stake 100:1 odds against Trump winning the presidency (before the election). Assume that David is considered to be a perfectly rational agent who can utilise his available information to calculate odds optimally, or at least as well as Cameron, so this offer suggests David has some quite significant information.

Now, Cameron might have his own information that he suspects David does not, and Cameron knows that David has no way of knowing that he has this information. Taking this info into account, along with the fact that David offered to stake 100:1 odds, Cameron might calculate 80:1 once his information is incorporated. This would suggest that Cameron should take the bet, as the odds are better than David thinks. Except, perhaps David suspected that Cameron had some inside info and actually thinks the true odds are 200:1 - he only offered 100:1 to fool Cameron into thinking it was better than it was - meaning that the bet is actually bad for Cameron despite his inside info.
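To make those numbers concrete, here is a minimal sketch in Python (the helper name is my own, hypothetical, not from the discussion) of the expected value of backing the long shot at the offered odds, under each assumed set of true odds:

```python
def ev_of_taking(offered_odds, true_odds, stake=1.0):
    # Expected value of backing the long shot at offered_odds-to-1
    # when you believe the true odds against it are true_odds-to-1.
    p = 1.0 / (true_odds + 1)  # your probability that the long shot wins
    return p * offered_odds * stake - (1 - p) * stake

print(ev_of_taking(100, 80))   # ~ +0.25: good bet if the true odds are 80:1
print(ev_of_taking(100, 200))  # ~ -0.50: bad bet if the true odds are 200:1
```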

Hmm... I still can't get my head around this problem.

Comment by casebash on Bet or update: fixing the will-to-wager assumption · 2017-06-08T15:28:06.020Z · LW · GW

Thanks for posting this. I've always been skeptical of the idea that you should offer two-sided bets, but I never broke it down in detail. Honestly, that is such an obvious counter-example in retrospect.

That said, "must either accept the bet or update their beliefs so the bet becomes unprofitable" does not work. The offering agent has an incentive to only ever offer bets that benefit them since only one side of the bet is available for betting.

I'm not certain (without much more consideration), but Oscar_Cunningham's solution of always taking one half of a two-sided bet sounds more plausible.

Comment by casebash on [deleted post] 2017-05-28T06:30:27.750Z

What is Esalen?

Comment by casebash on [deleted post] 2017-05-28T06:03:22.900Z

What's Goodhart's Demon?

Comment by casebash on Notes from the Hufflepuff Unconference · 2017-05-28T02:56:28.491Z · LW · GW

The biggest challenge with getting projects done within the Less Wrong community will always be that people have incredibly different ideas of what should be done. Everyone has their own ideas, and few people want to join in on other people's. I will definitely be interested to see how things turn out after 3 months.

Comment by casebash on How To Build A Community Full Of Lonely People · 2017-05-20T15:37:53.161Z · LW · GW

I like the idea of spreading popularity around when justified, ie. high-status people pointing out when someone has a particular set of knowledge that others may not realise they could benefit from, or giving them credit for interesting ideas. These practices seem important for a strong community and additionally benefit the rest of the community by allowing people to take advantage of each other's skills.

Comment by casebash on Gears in understanding · 2017-05-20T05:41:48.083Z · LW · GW

"Seems fraught with philosophical gobbledygook and circular reasoning to specify what about "because the teacher said so" it is that isn't as "mathematical" as "because you're summing ones and tens separately"."

"Because you're summing ones and tens separately" isn't really a complete gears level explanation, but a pointer or hint to one. In particular, if you are trying to explain the phenomenon formally, you would begin by defining a "One's and ten's representation" of a number n as a tuple (a,b) such that 10a + b = n. We know that at least on such representation exists with a=0 and b=n.

Proof (warning, this is massive, you don't need to read the whole thing)

You can then define a "simple ones and tens representation" as such a representation with 0<=b<=9. We want to show that each number has at least one such representation. It is easy to see that (a, b) = 10a + b = 10a + 10 + b - 10 = 10(a+1) + (b-10) = (a+1, b-10). We can repeat this process x times to get (a+x, b-10x). We know that for some x, b-10x will be negative, eg. if x=b, then b-10x = -9b. So we can look at the last value before it goes negative. Let this representation be (m, n). We have defined that n>=0. We also know that n can't be >=10, otherwise (m+1, n-10) would still have the second element of the tuple >=0. So any number can be written in this simple form.

Suppose that there are two simple representations of a number, (x, y) and (p, q). Then 10x + y = 10p + q, so 10(x-p) = q - y. Now, since y and q are both between 0 and 9 inclusive, q - y is between -9 and 9, and the only multiple of 10 in this range is 0. So 10(x-p) = 0, meaning x=p, and q-y = 0, meaning y=q, ie. both members of the tuple are the same.

It is then trivial to prove that (a1, b1) + (a2, b2) = (a1+a2, b1+b2). It is similarly easy to show that 0<=b1+b2<=18, so either b1+b2 or b1+b2-10 is between 0 and 9 inclusive. It then directly follows that either (a1+a2, b1+b2) or (a1+a2+1, b1+b2-10) is a simple representation (here we haven't put any restriction on the value of the a's).
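As a quick sanity check of the proof, here is a minimal sketch in Python (the function names are my own, hypothetical, not part of the proof) that constructs the simple representation and performs the column-wise addition with at most one carry:

```python
def simple_representation(n):
    # Repeatedly apply (a, b) -> (a+1, b-10), which preserves 10a + b,
    # until 0 <= b <= 9 (the "simple" ones and tens representation).
    a, b = 0, n
    while b > 9:
        a, b = a + 1, b - 10
    return a, b

def add_representations(r1, r2):
    # (a1, b1) + (a2, b2) = (a1+a2, b1+b2); since 0 <= b1+b2 <= 18,
    # at most one carry is needed to make the result simple again.
    a, b = r1[0] + r2[0], r1[1] + r2[1]
    if b > 9:
        a, b = a + 1, b - 10
    return a, b

# Check that the decomposition and single-carry addition agree
# with ordinary arithmetic for all two-digit sums.
for m in range(100):
    for n in range(100):
        a, b = add_representations(simple_representation(m), simple_representation(n))
        assert 10 * a + b == m + n and 0 <= b <= 9
```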

Observations

So a huge amount is actually going on in something so simple. We can make the following observations:

  • "Because you're summing ones and tens separately" will seem obvious to many people because they've been doing it for so long, but I suspect that the majority of the population would not be able to produce the above proof. In fact, I suspect that the majority of the population would not even realise that it was possible to break down the proof to this level of detail - I believe many of them would see the above sentence as unitary. And even when you tell them that there is an additional level of detail, they probably won't have any idea what it is supposed to look like.

  • Part of the reason why it feels more gears-like is that it provides you with the first step of the proof (defining ones and tens tuples). When someone has a high enough level of maths, they are able to get from the "hint" quite quickly to the full proof. Additionally, even if someone does not have the full proof in their head, they can still see that a certain step will be useful towards producing a proof. The hint of "summing the ones and tens separately" allows you to quite quickly construct a formal representation of the problem, which is progress even if you are unable to construct a full proof. Discovering that the sum will be between 0 and 18 lets you know that if you carry, you will only ever have to carry the one. This limits the problem to a more specific case. Any person attempting to solve this will probably have examples from the past where limiting the case in such a way made the proof either easier or possible, so whatever heuristic pattern matching occurs within their brain will suggest that this is progress (though it may of course turn out later that the ability to restrict the situation does not actually make the proof any easier).

  • Another reason why it may feel more gears-like is that it is possible to construct sentences of a similar form and use them as hints for other proofs. So, "Because you're summing ones and tens separately" is linguistically close to "Because you're summing tens and hundreds separately", although I don't want to suggest that people only perform a linguistic comparison. If someone has developed an intuitive understanding of these phenomena, this will also play a role.

I believe that part of the reason why it is so hard to define what is or is not "gears-like" is that it isn't based on any particular statement or model just by itself, but on how that statement interacts with what a person already knows and can derive. Further, it isn't just about producing one perfect gears explanation, but the extent to which a person can produce certain segments of the proof (ie. a formal statement of the problem, or a restriction of the problem to a sub-case as above), or the extent to which it allows the production of various generalisations (ie. we can generalise to tens & hundreds, hundreds & thousands..., or to ones & tens & hundreds, or to binary, or to abstract algebra). Further, what counts as a useful generalisation is not objective, but relative to the other maths someone knows, or the situations in which they know they can apply this maths. For example, imaginary numbers may not seem like a useful generalisation until a person knows the fundamental theorem of algebra or how they can be used to model phases in physics.

I won't claim that I've completely or even almost completely mapped out the space of gears-ness, but I believe that this takes you pretty far towards an understanding of what it might be.

Comment by casebash on Gears in understanding · 2017-05-13T08:00:50.404Z · LW · GW

I'm still confused about what Gear-ness is. I know it is pointing to something, but it isn't clear whether it is pointing to a single thing, or a combination of things. (I've actually been to a CFAR workshop, but I didn't really get it there either).

Is gear-ness:

a) The extent to which a model allows you to predict a singular outcome given a particular situation? (Ideal situation - fully deterministic like Newtonian physics)

b) The extent to which your model includes each specific step in the causation? (I put my foot on the accelerator -> car goes faster. What are the missing steps? Maybe -> Engine allows more fuel in -> Compressions have greater explosive force -> Axels spin faster -> Wheels spin faster ->. This could be broken down even further)

c) The extent to which you understand how the model was abstracted out from reality? (ie. You may understand the causation chain and have a formula for describing the situation, but still be unable to produce the proof)

d) The extent to which your understanding of each sub-step has gears-ness?

Comment by casebash on Soft Skills for Running Meetups for Beginners · 2017-05-09T04:33:28.475Z · LW · GW

Out of:

1) "Hey, sorry to interrupt but this sounds like a tangent, maybe we can come back to this later during the followup conversation?"

and:

2) "Hey, just wanted to make sure some others got a chance to share their thoughts."

I would suggest that number 1) is better as 2) suggests that they are selfishly dominating the conversation.

Comment by casebash on There is No Akrasia · 2017-05-02T05:18:34.322Z · LW · GW

You used the word umbrella, and if I were going for a slightly less catchy but more accurate summary, I would write, "Akrasia is an umbrella term". I think the word is still useful, but only if you remember this. The first step in solving an Akrasia problem is to notice that a problem falls within the Akrasia umbrella; the second step is to then figure out where it falls within that umbrella.

Comment by casebash on Effective altruism is self-recommending · 2017-04-24T22:55:14.111Z · LW · GW

Because the whole point of these funds is that they have the opportunity to invest in newer and riskier ventures. GiveWell, on the other hand, tries to look for interventions with a strong evidence base.

Comment by casebash on Effective altruism is self-recommending · 2017-04-24T00:00:48.734Z · LW · GW

They expect GiveWell to update its recommendations, but they don't necessarily expect GiveWell to evaluate just how wrong a past recommendation was. Not yet anyway, but maybe this post will change this.

Comment by casebash on Effective altruism is self-recommending · 2017-04-23T23:59:22.993Z · LW · GW

A major proportion of the clients will be EAs.

Comment by casebash on Effective altruism is self-recommending · 2017-04-22T22:51:14.461Z · LW · GW

Because people expect this from funds.

Comment by casebash on Effective altruism is self-recommending · 2017-04-22T16:18:37.980Z · LW · GW

To what extent is it expected that EAs will be the primary donors to these funds?

If you want to outsource your donation decisions, it makes sense to outsource to someone with similar values. That is, someone who at least has the same goals as you. For EAs, this is EAs.

Comment by casebash on Effective altruism is self-recommending · 2017-04-22T15:59:12.369Z · LW · GW

No, because the fund managers will report on the success or failure of their investments. If the funds don't perform, then their donations will fall.

Comment by casebash on Effective altruism is self-recommending · 2017-04-22T15:55:59.274Z · LW · GW

Wanting a board seat does not mean assuming that you know better than the current managers - only that you have distinct and worthwhile views that will add to the discussion that takes place in board meetings. This may be true even if you know less than the current managers.

Comment by casebash on Infinite Summations: A Rationality Litmus Test · 2017-01-20T12:42:04.781Z · LW · GW

All I ever covered in university was taking the Schrödinger equation and then seeing that quantum physics did whatever that equation said.

Comment by casebash on Infinite Summations: A Rationality Litmus Test · 2017-01-20T09:45:36.501Z · LW · GW

Infinite sums/sequences are a particular area of interest of mine. I would love to know how these sums appear in string theory - what's the best introduction/way into this? You said these sums appear all over physics. Where do they appear?

Comment by casebash on On Arrogance · 2017-01-20T08:03:30.278Z · LW · GW

"This may also be somewhat pedantic, but in something like quantum physics, because of this gap in knowledge, it'd be very obvious who the professor was to an audience that doesn't know quantum physics, even if it wasn't made explicitely clear beforehand." - I met one guy who was pretty convincing about confabulating quantum physics to some people, even though it was obvious to me he was just stringing random words together. Not that I know even the basics of quantum physics. He could actually speak really fluently and confidently - just everything was a bunch of non-sequitors/new age mysticism. I can imagine a professor not very good at public speaking who would seem less convincing.