Posts

[Link] Faster than Light in Our Model of Physics: Some Preliminary Thoughts—Stephen Wolfram Writings 2020-10-04T20:26:51.611Z · score: 1 (5 votes)
[Link] Where did you get that idea in the first place? | Meaningness 2020-09-25T15:38:00.092Z · score: 7 (4 votes)
Link: Vitamin D Can Likely End the COVID-19 Pandemic - Rootclaim Blog 2020-09-18T17:07:22.953Z · score: 20 (9 votes)
The Peter Attia Drive podcast episode #102: Michael Osterholm, Ph.D.: COVID-19—Lessons learned, challenges ahead, and reasons for optimism and concern 2020-04-04T05:19:38.304Z · score: 7 (3 votes)
"Preparing for a Pandemic: Stage 3: Grow Food if You Can [COVID-19, hort, US, Patreon]" 2020-04-03T17:57:58.826Z · score: 7 (5 votes)
How much do we know about how brains learn? 2020-01-24T14:46:47.185Z · score: 8 (4 votes)
[Link] "Doing being rational: polymerase chain reaction" by David Chapman 2019-12-13T23:54:45.189Z · score: 11 (6 votes)
Link: An exercise: meta-rational phenomena | Meaningness 2019-10-21T16:56:24.443Z · score: 9 (4 votes)
Paper on qualitative types or degrees of knowledge, with examples from medicine? 2019-06-15T00:31:56.912Z · score: 5 (2 votes)
Flagging/reporting spam *posts*? 2018-05-23T16:14:11.515Z · score: 6 (2 votes)

Comments

Comment by kenny on The Treacherous Path to Rationality · 2020-10-19T01:08:53.626Z · score: 2 (2 votes) · LW · GW

The point is, the analogy fails because there is no "music people tribe" with "music meetups" organized at "MoreMusical.com". There is no Elizier Yudkowsky of "music tribe" (at most, everyone who appreciates the Western classical music has heard about Beethoven maybe) ...

Yes, there is no single 'music people tribe' but there are very much tribes for specific music (sub-)genres. (Music is huge!)

But as you point out, there are people of 'similar' stature in music generally – really, of much greater stature overall. And 'music' is much, much older than 'rationality'. (Music is older than history!) And I'd guess it's inherently more interesting to many more people too.

... nor idea that people familiar with main ideas of music have learned them from a small handful of "music sequences" and interconnected resources that reference each other.

I don't consider 'the sequences' or LW to be essential, especially now. The same insights are available from a lot of sources already and this should be more true in the future. It was, and perhaps is, a really good intro to what wasn't previously a particularly coherent subject.

Actual 'rationality' is everywhere. There was just no one persistently pointing at all of the common phenomena, or at least not recently and in a way that's accessible to (some) 'laypeople'.

But I wouldn't be surprised if there is something like 'the music sequences', e.g. a standard music textbook. I'd imagine 'music theory' or music pedagogy are in fact "interconnected resources that reference each other".

Again, if it wasn't already clear, the LW sequences are NOT essential for rationality.

Picking at one particular point in the OP, there are no weird sexual dynamics of music (some localized groups or cultures might have, eg. one could talk about sexual culture in rock music in general, and maybe the dynamics at a particular scene, but they are not central to the pursuit of all of music, and even at the local level the culture is often very diffuse).

There are no weird "sexual dynamics" in rationality – based on MY experience. I don't know why the people who publicly write about that sort of thing must be taken to define everyone else that's part of the overall network. I certainly don't consider any of it central to rationality.

I don't even know that "weird sexual dynamics" is a common feature of LW meetups, let alone other 'rationality'-related associations.

Music is widespread. There are several cultures of music that intersect with the wider society : no particular societal group has any claim of monopoly on teaching appreciation or practice of music. There is so much music that there are economies of music. There are many academies, even more teachers, untold amount of people who have varying expertise in playing instruments who apply them for fun or sometimes profit. Anyone with talent and opportunity can learn to appreciate music or play an instrument from lots of different resources.

Rationality, in the LW sense, could be all of these things. At least give it a few hundred years! Music is old.

And no one has a monopoly on rationality. If anything, LW-style rationality is competing with everything else; almost everything else is implicitly claiming to help you either believe truths or act effectively.

It would be good for rationality to explicitly attempt [to] become like music (or scientific thinking, or mathematics, or such), because then the issue perceived by some of being an insular tribe would simply not exist.

I agree! We should definitely try to become 'background knowledge' or at least as diffuse or widespread as mathematics! I think this is already happening, and I had thought that was more widely known. I may have assumed that anyone reading my comment knew (or believed) that too.

Instead of building a single community, build a culture of several communities. After all, the idea of good, explicit thinking is universally applicable, so there is nothing in it that would necessitate a single community, is there?

I agree! And again, I think this has already happened to an extent. I'm not a part of any rationality 'community'; not in the sense you've described. I think that's true for most of the people interested in this.

But, in case it's still not clear, I do NOT think rationality should or must be 'a single community'.

What I was pointing out is that if there were something named "music club", or you observed someone describe themselves as a 'music lover', it wouldn't be a big deal.

I also wrote that "I'm open to 'joining the tribe' (or some 'band' close by)". I meant 'tribe' in the sense I think you mean 'culture' in "a culture of several communities". I meant 'band' in the sense of some – not the – real-world group of people that at least meet up regularly (and are united by at least a common interest in rationality).

Now I'm wondering where people get the idea that 'rationality' is any kind of IRL organization centered around, or run by, Eliezer Yudkowsky. I think there are way more of us who aren't members of any such organization, beyond being users of this site or readers of 'the diaspora'.

Comment by kenny on Things are allowed to be good and bad at the same time · 2020-10-18T01:19:36.482Z · score: 1 (1 votes) · LW · GW

Similarly, the hardest decisions to make are often those for which the relevant factors are most closely balanced.

If the job would have involved doing cool things and the commute would have been better (or at least no worse than your current one), you'd have just felt bad about not getting it. And vice versa.

But when things are both good and bad, or (as someone else pointed out) when it's hard to sum up all of the goodness and badness of the things, it's harder to feel any one thing in particular, or to feel it consistently.

Comment by kenny on Has Eliezer ever retracted his statements about weight loss? · 2020-10-16T21:15:09.905Z · score: 1 (1 votes) · LW · GW

If there were a practical and efficient way to assign general trust values (and regularly re-compute them too!), and we used them, then yes, that might prevent Gell-Mann amnesia.

I'm not against general trust values – maybe it could be practical and useful. But I don't think there's any current way to do this that's accurate enough to be worth doing.

It seems reasonable to be skeptical about general trust values because it seems strictly better to instead trust individuals on specific topics.

I don't feel like a general trust value is a useful way to think of anyone, even total strangers. I might have something like a general distribution of trust over some set of topics (for arbitrary people), or maybe a few different distributions for different groups, and definitely distributions for specific people. I guess you could consider a distribution to also be a 'value'. I was implicitly considering 'a value' to be more like a single (real) number.

I admit that I'm not sure how true it is that anyone does already use general trust values in some sense.

Comment by kenny on Is Stupidity Expanding? Some Hypotheses. · 2020-10-16T18:25:48.566Z · score: 1 (1 votes) · LW · GW

The SAT does change, so comparisons across decades aren't obviously accurate, but a bigger difficulty is probably that many more people take the SAT now than previously.

Comment by kenny on Police violence: The veil of darkness · 2020-10-16T18:21:07.179Z · score: 1 (1 votes) · LW · GW

The thing you pointed out in your previous comment (the top-level one) is a possible effect and we're open to discussing it, except that we generally avoid 'politically charged' topics.

But you didn't bring that up civilly – you were completely, and unnecessarily, uncharitable towards us, assuming (for some reason) both that we didn't already know the 'results' of your "fun experiment" and that we were unwilling to acknowledge or discuss them at all.

Comment by kenny on Police violence: The veil of darkness · 2020-10-16T18:01:21.818Z · score: 1 (1 votes) · LW · GW

Yes, those are probably much better venues for this.

Tho I think even TheMotte isolates stuff like this in special threads.

Comment by kenny on Police violence: The veil of darkness · 2020-10-16T17:59:09.427Z · score: 1 (1 votes) · LW · GW

Practically, it's pretty unreasonable to demand a discussion, even about something related to whatever is being discussed.

As for liking saying it, a couple of years ago my restraint just dried up overnight. The smart thing to do would be to shut up at the very least, but I literally have a compulsion to wade into situations that I view as unjust. It doesn't matter that I can't change a damn thing, it doesn't matter if every man and his dog hates my guts, it seems that it's all about me voicing my refusal to consent no matter what that costs me or how pointless it is. Beats me why that is. It sure hasn't made my IRL any more fun or peaceful.

The people that frequent this site are going to give you the fairest hearing you're likely to find anywhere. If you want to discuss something, bring it up! Be civil, and reasonable, and rational, but also be prepared for disagreement.

But don't treat us as guilty of something that we haven't done. Beware of distributed hypocrisy!

Comment by kenny on How do I get rid of the ungrounded assumption that evidence exists? · 2020-10-16T01:44:03.295Z · score: 1 (1 votes) · LW · GW

I don't think this helps, but that's because you can't reason without any assumptions (e.g. axioms, prior beliefs, etc.).

Comment by kenny on How do I get rid of the ungrounded assumption that evidence exists? · 2020-10-16T01:43:01.491Z · score: 1 (1 votes) · LW · GW

I would, in plain language, say that 'math needs evidence' is true.

It seems reasonable to think that the study of the natural numbers was the earliest math. I'd imagine that reaching the idea of abstract numbers itself required a lot of evidence.

And mathematical practice since seems to involve a lot of evidence as well. A valid proof seems to exist in the perfect Platonic world of forms, and I'm very sympathetic to the sense that we 'discover' proofs rather than 'invent' them. But finding proofs, or even thinking to search for a particular proof, seems to require evidence, both in the abstract and in practice.

I have been explicitly instructed by math professors to play with new math, e.g. to gather evidence of how those systems 'work', with the understanding that doing so was necessary to develop general understanding and intuition of the material.

Comment by kenny on How do I get rid of the ungrounded assumption that evidence exists? · 2020-10-16T01:22:51.569Z · score: 1 (1 votes) · LW · GW

From that linked post:

Wouldn't it be nice if there were some chain of justifications that neither ended in an unexaminable assumption, nor was forced to examine itself under its own rules, but, instead, could be explained starting from absolute scratch to an ideal philosophy student of perfect emptiness?

Well, I'd certainly be interested, but I don't expect to see it done any time soon. I've argued elsewhere in several places against the idea that you can have a perfectly empty ghost-in-the-machine; there is no argument that you can explain to a rock.

I love the phrase "ideal philosophy student of perfect emptiness" as a shorthand for this idea.

The title of the post linked to in the first two links in the quote above is also a good candidate slogan for this:

[There are] no universally compelling arguments

Comment by kenny on Is Stupidity Expanding? Some Hypotheses. · 2020-10-16T00:59:29.505Z · score: 2 (2 votes) · LW · GW

I think nearly all of the 'effects' you listed exist and many are significant.

Another effect might be an inflated threshold for 'smarter-than-stupid'. I imagine this might be due to 'myopic cost accounting', i.e. a set of purchases or expenditures might all, individually, be sensible and justified, yet in aggregate they exceed the relevant budget (e.g. five subscriptions that each seem worth their $20, against a $60 budget for all of them). There are more and more things we're 'expected' to know, and to remember in the appropriate contexts. Individually, each of those expectations seems sensible, but in aggregate it's impossible to know and remember all of them. And then, via all of the biased 'selection' mechanisms at our disposal, almost everyone is judged poorly against an unfair standard.

[Is there an existing term or phrase for what I named 'myopic cost accounting'?]

Comment by kenny on Is Stupidity Expanding? Some Hypotheses. · 2020-10-16T00:45:12.502Z · score: 3 (3 votes) · LW · GW

I like this question a lot. You cast a wide net in listing possibilities and many of the items are pretty funny by themselves.

Comment by kenny on Is Stupidity Expanding? Some Hypotheses. · 2020-10-16T00:43:35.501Z · score: 3 (3 votes) · LW · GW

There's a problem of distinguishing between stupid people and stupid actions.

That'd be (akin to) the fundamental attribution error. I think this is a very plausible effect, e.g. via examples of 'stupidity' being more salient and available than all the other times someone acted reasonably or intelligently.

I think that on average, there are few brightly stupid people so when we eventually run into even one it makes a lasting impression.

Is "brightly stupid people" something like obviously and generally stupid people?

Stupidity (and intelligence) are or can be incredibly diverse. I can think of 'stupid' people that nevertheless also displayed relatively sophisticated 'cunning'. And even 'not-stupid' people will sometimes invent elaborate and convoluted workarounds to avoid a simpler and cheaper solution.

There's a problem distinguishing stupidity and ignorance too.

Maybe I've become boringly charitable towards too many people, but I don't think 'people are stupid' is particularly accurate in general. I don't think 'people are ignorant' is either.

Comment by kenny on LessWrong FAQ · 2020-10-15T21:58:12.400Z · score: 3 (2 votes) · LW · GW

Thanks!

A direct link to the "Lesswrong.com Privacy Policy and Terms of Use":

Based on that, copyright would be (by default) held by MIRI, but that only includes the site's content, not user-generated content.

From the document:

MIRI may provide you with the ability to upload or transmit User-Generated Content to or through the Website, including, but not limited to, text, comments, photographs, images, videos, audio files, profile information, name, likeness, advertisements, listings, information, and designs (collectively "User-Generated Content"). Except as otherwise provided herein, you own all rights in and to your User-Generated Content.

When you submit User-Generated Content to the Website, you grant MIRI a non-exclusive, irrevocable, worldwide, and perpetual license to use your User-Generated Content for the normal and intended purposes of the Website. These purposes may include providing you or third parties with access to and use of the Website, backing up or archiving the Website, and selling or transferring the Website to a third party. In submitting User-Generated Content to the Website, you agree to waive all moral rights in or to your User-Generated Content across the world, whether you have or have not asserted moral rights. You also agree to waive all rights of publicity or privacy in or to your User-Generated Content.

So, AFAICT – and I am NOT a lawyer or other legal professional (in any jurisdiction) – users retain copyright on all of their content, e.g. posts and comments, and MIRI only insists on a 'license' to use that content.

It would be nice to relicense both the site's content and users' content, e.g. using a Creative Commons license.

It would also be nice to have something like an API endpoint (or publicly accessible download) of the 'site data'. Ideally, such that 'deltas' could be retrieved instead of needing to download 'full snapshots' every time a backup/archive is made.
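To make that concrete, here's a minimal sketch of the kind of delta-based archiving I have in mind. Everything here – the endpoint URL, the `modified_since` parameter, the response shape – is hypothetical, not LessWrong's actual API; it just illustrates fetching only what changed since the last run rather than a full snapshot each time.

```python
# Hypothetical delta-based archiver; the endpoint and its parameters are
# assumptions for illustration, not a real LessWrong API.
import json
import pathlib
import urllib.parse
import urllib.request

ARCHIVE = pathlib.Path("lw-archive")
STATE = ARCHIVE / "last_sync.txt"
BASE_URL = "https://example.com/site-data"  # hypothetical endpoint

def fetch_delta(since):
    """Fetch items created or edited after `since` (an ISO-8601 timestamp)."""
    url = f"{BASE_URL}?{urllib.parse.urlencode({'modified_since': since})}"
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

def sync():
    ARCHIVE.mkdir(exist_ok=True)
    since = STATE.read_text().strip() if STATE.exists() else "1970-01-01T00:00:00Z"
    items = fetch_delta(since)
    for item in items:
        # One file per post/comment; edited items simply overwrite the old copy.
        (ARCHIVE / f"{item['id']}.json").write_text(json.dumps(item, indent=2))
    if items:
        STATE.write_text(max(item["modified_at"] for item in items))

if __name__ == "__main__":
    sync()
```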

But I'm happy with the status quo! Others have been, for some time, using the site content – as you mention. I'm not aware of any problems with that. The LW team is much bigger than I expected (and you have a designated CTO!), so I'm not worried that it'll be abandoned entirely and unexpectedly. I imagine The Internet Archive has pretty good backups/archives of the site as well.

Comment by kenny on The LessWrong Team · 2020-10-15T21:36:06.104Z · score: 1 (1 votes) · LW · GW

The original context in which I started thinking about this was a discussion of a new site feature. I knew there was a small team of developers, I was pretty sure they weren't being paid full-time developer salaries, and was thinking about rules or systems to decide on things like that.

(I'm a big fan of a loose 'whatever the developers are willing to implement and maintain' as the sole filter on what gets done for non-commercial projects.)

Thinking bigger picture, I realized I didn't know how the LW team was being funded, if (or how much) it's being funded, how it's organized 'formally' (e.g. as a non-profit), etc.

I also realized I didn't know how or whether the site data was being backed up, whether those backups were publicly available, etc. (I'd still like to know about this, if only so I can archive my own copy of the site's contents.)

Comment by kenny on Has Eliezer ever retracted his statements about weight loss? · 2020-10-15T21:29:26.057Z · score: 1 (1 votes) · LW · GW

Some very trustworthy people are only trustworthy in a specific and relatively narrow domain.

Consider the news and journalism and popular media more generally. Have you encountered wildly inaccurate claims or descriptions about something you already knew a lot about? That's a very common experience for all kinds of experts!

But almost all experts are only experts – 'trustworthy' – in a specific and relatively narrow domain. But where do they get most of their beliefs about everything else? The same not-that-accurate sources that everyone else uses.

Gell-Mann amnesia is forgetting how inaccurately those same sources handle the subjects on which you are an expert, and failing to infer that they're probably about as inaccurate on everything else too.

Physicists are 'notorious' for acting as if they had resolved all of the important unsolved issues in other sciences. One lesson is that expertise is very much bounded, limited, and generally not particularly 'global'.

In other words, don't (bother) assigning "general 'trust' values to people".

Comment by kenny on [Link] Faster than Light in Our Model of Physics: Some Preliminary Thoughts—Stephen Wolfram Writings · 2020-10-15T21:09:45.339Z · score: 1 (1 votes) · LW · GW

In the second paragraph of the introduction in the review by Aaronson:

As a popularization, A New Kind of Science is an impressive accomplishment.

With regard to Aaronson's criticisms with respect to the content in NKS about quantum mechanics, I'm pretty sure Wolfram has addressed some of them in his newer work, e.g. (previously) ignoring 'multiway systems'.

One thing that jumps out at me, in Aaronson's 'not compatible with both special relativity and Bell inequality violations' argument against Wolfram's (earlier version of his) 'hypergraph physics':

A technicality is that we need to be able to identify which vertices correspond to x_a, y_a, and so on, even as G evolves over time.

Funnily enough, it's Aaronson's 'computational complexity for philosophers' paper that now makes me think such an 'identification' routine is possibly (vastly) far from being "a technicality", especially given that the nodes in the graph G are expected to represent something like a Planck length (or smaller) and x_a and y_a are "input bits", i.e. some two-level quantum mechanical system (?). The idea of identifying the same x_a and y_a as G evolves doesn't seem obvious or trivial from a computational complexity perspective.

Tho, immediately following what I quoted above, Aaronson writes:

We could do this by stipulating that (say) "the x_a vertices are the ones that are roots of complete binary trees of depth 3", and then choosing the rule set to guarantee that, throughout the protocol, exactly two vertices have this property.

That doesn't make sense to me as even a reasonable example of how to identify 'the same' qubits as G evolves. Aaronson seems to be equating a vertex in G with a qubit, but Wolfram's idea is that a qubit is something much, much bigger inside G.

I can't follow the rest of that particular argument with any comprehensive understanding.

I wonder how much 'criticism' of Wolfram is a result of 'independent discovery'. Aaronson points out that a lot of Wolfram's 'hypergraph physics' is covered in work on loop quantum gravity. While Wolfram was a 'professional physicist' at one point, he hasn't been a full-time academic in decades so it's understandable that he isn't familiar with all of the possibly relevant literature.

It's also (still) possible that Wolfram's ideas will revolutionize other sciences as he claims. I'm skeptical of this too tho!

Comment by kenny on The LessWrong Team · 2020-10-15T02:10:27.636Z · score: 5 (3 votes) · LW · GW

It occurred to me yesterday that maybe LessWrong should be 'formalized' a bit. I'm happy to have found this page (and others) that are strong evidence that some parts of civilization are pretty adequate!

Thank you all for your hard work. I love this site. I think you've done an excellent job keeping it going!

Comment by kenny on LessWrong FAQ · 2020-10-15T02:09:03.505Z · score: 1 (1 votes) · LW · GW

I'm curious about this as well.

Specifically, I'd like to (possibly) archive the site (privately).

Comment by kenny on Editor Mini-Guide · 2020-10-15T01:42:20.984Z · score: 1 (1 votes) · LW · GW

I concur.

I also am convinced that serif fonts are probably better for any text that's not just a name by itself or a simple table (e.g. an airport arrival/departure board). Those little hooks and claws make a difference!

Comment by kenny on [Link] Faster than Light in Our Model of Physics: Some Preliminary Thoughts—Stephen Wolfram Writings · 2020-10-15T00:45:33.667Z · score: 1 (1 votes) · LW · GW

Thanks! I just read another Aaronson paper recently – his 'computational complexity for philosophers' – and thought it was fantastic. (I've been following his blog for a while now.)

I definitely appreciate, not even having (yet) read the paper to which you linked, that Wolfram might not be entirely up-to-date with the frontier of computational complexity. (I'm pretty sure he knows some, if not a lot, of the major less-recent results.)

Wolfram's also something of a 'quantum computing' skeptic, which I think satisfyingly explains why he doesn't discuss it much in NKS or elsewhere. (He also does somewhat explain his skepticism, and that he is skeptical of it, in NKS (IIRC).)

I can also understand and sympathize with experts not being impressed with the book, or his work generally. But Robin Hanson has expressed similar complaints about the reception of his own work, and interdisciplinary work more widely, and I think those complaints are valid and (sadly) true.

I don't personally model academia as (effectively) treating the production of truth, or even insight, as a particularly high priority.

Comment by kenny on [Link] Faster than Light in Our Model of Physics: Some Preliminary Thoughts—Stephen Wolfram Writings · 2020-10-15T00:36:20.626Z · score: 1 (1 votes) · LW · GW

I think this is a waste of his time tho. Academia is nice and all (tho not as much as we once thought) but it actively resists its members publishing big accessible books. That seems tragic to me.

Comment by kenny on [Link] Faster than Light in Our Model of Physics: Some Preliminary Thoughts—Stephen Wolfram Writings · 2020-10-15T00:34:10.309Z · score: 1 (1 votes) · LW · GW

With his latest 'hypergraph physics' project, that's exactly what his 'team' is doing.

His company hosts some kind of math/science/computation summer camp (for high school students and older I think) and I'm pretty sure he's mentioned several times that research has been published based on the camp activities. (That's much less directly connected to him or his own personal ideas or research tho.)

Comment by kenny on Has Eliezer ever retracted his statements about weight loss? · 2020-10-15T00:31:25.796Z · score: 1 (1 votes) · LW · GW

There's a subtle point in the second part that is very plausible.

Comment by kenny on Why Boston? · 2020-10-14T16:16:24.320Z · score: 3 (2 votes) · LW · GW

Thanks – that makes sense!

Comment by kenny on [Link] Faster than Light in Our Model of Physics: Some Preliminary Thoughts—Stephen Wolfram Writings · 2020-10-14T16:14:41.233Z · score: 2 (2 votes) · LW · GW

I agree – it seems like perfectly fine research, and, as you mention, novel.

I also think it's not only too early but besides the point to demand rigor at all. Or, it's fine to demand rigor, but no one's obligated to supply it – not even Wolfram or his team or the wider 'community'. It's fine to ignore them too!

But yes, it's unreasonable to expect a lot of rigor given how young this 'field' is.

I also think that – reasonably – our prior should be that the computational, i.e. practical, difficulty of simulating our universe (or something similar) at the level of 'space quanta' is immense. String theory seems to have run into similar problems – and it's been one of the premier fields in physics for decades.

AFAIK, our simulations of the Standard Model, or other quantum mechanical models, are extremely limited too. Why wouldn't we expect an even more fundamental theory to be at least that difficult to compute/simulate/analyze?

I think that, given the extremely young age of this topic of research, the kind of qualitative 'eyeball or ad-hoc program' analysis Wolfram provides in his published work is eminently sensible and reasonable. It should be very much exploratory at such an early stage.

The math is a bit advanced at times but the 'raw' research is much simpler – basically very simple computer programs, but lots of them.

This is Wolfram's big/unique trick (IMO): just enumerate literally all of the possibilities for some class or set of simple programs, look at some visualizations of the programs (e.g. their evolution), and look for patterns – first with your eyes/brain and then, incrementally, with more and more 'search' programs. If possible, one might find good 'mathematical' compressions of the data/info/behavior of the programs, and, more rarely, a good 'mechanistic' understanding as well.
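To make that concrete, here's a minimal sketch – my own toy illustration, not Wolfram's actual code – of what 'enumerate literally all of them and look' means for the simplest such class, the 256 elementary cellular automata:

```python
# Enumerate every elementary cellular automaton rule (0-255), evolve each one
# from a single 'on' cell, and print a crude picture of its behavior. The
# point is the workflow: exhaustive enumeration plus eyeballing, with the
# interesting cases (e.g. rule 30, rule 110) then singled out for more
# systematic, automated searches.

def step(cells, rule):
    """Apply an elementary CA rule (0-255) to one row, with wraparound."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def evolve(rule, width=63, steps=30):
    """Evolve from a single 'on' cell and return all rows."""
    row = [0] * width
    row[width // 2] = 1
    rows = [row]
    for _ in range(steps):
        row = step(row, rule)
        rows.append(row)
    return rows

def render(rows):
    return "\n".join("".join("#" if c else "." for c in row) for row in rows)

for rule in range(256):
    print(f"--- rule {rule} ---")
    print(render(evolve(rule)))
```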

He wrote and published a book – available free online here – that's a massive infodump of basically all of his thoughts and speculations after having performed his trick – and diligently recorded all kinds of interesting findings – on a whole bunch of different kinds of 'simple programs'. (And this apparently happened over decades.) He came up with a bunch of interesting and, to me, very plausible ideas about computation and its implications for a lot of other sciences. I found, and continue to find, it to be a hugely impressive intellectual achievement.

But the book – and now Wolfram as a person – has very much not been received by others the way I received it (and still do). Academics in particular have a number of objections, some (IMO) reasonable – e.g. Wolfram seems to claim originality for some ideas that definitely had been published earlier (in the 'academic literature') – and some (again, IMO) unreasonable – e.g. Wolfram doesn't write in a typical academic style or format.

Wolfram also is widely considered to be generally arrogant and self-centered. I don't find those charges to be that persuasive, or that significant or serious regardless.

(He's certainly not, on any scale, particularly bad along these dimensions. But I also don't have a personal problem with the fact that, e.g., Steve Jobs was also arrogant, self-centered, and seemingly an extreme 'asshole'. People like this do seem over-represented among those who are (relatively, extremely, and publicly) 'successful' or 'important'. And that doesn't seem that unintuitive either.)

And that is my theory/model of the "negativity" that Wolfram elicits. (And the examples here on this post are pretty mild based on what I've found elsewhere.)

Comment by kenny on Police violence: The veil of darkness · 2020-10-14T15:44:52.383Z · score: 3 (2 votes) · LW · GW

What's fun about this?

Comment by kenny on Police violence: The veil of darkness · 2020-10-14T15:43:56.109Z · score: 1 (1 votes) · LW · GW

Yes, that was it – thanks! No worries tho!

I'm not aware of any good and common convention here for handling link posts. I like to post the link and then my own separate commentary. But I've also seen a lot of people go to the opposite extreme and cross-post here.

For this post, it would have been much less confusing had you quoted the entire last paragraph of the intro, and also added something like "Read the rest here". I like putting "[Link] ..." in the title of my link posts here too so that that info is available for people skimming titles. (I don't think that's always necessary or should be required; just a personal preference.)

What's the theory for why "state patrol agencies" are less racist/biased than "municipal police departments"?

This is a hard topic to discuss rationally (or reasonably) because of politics. I also worry there's a large 'mistake theory vs conflict theory' conflict/mistake dynamic here as well.

I like your idea of analyzing a bunch of dimensions, e.g. age, gender, income/wealth, education, and political identification, for things like police traffic stops and vehicle searches. That's something Andrew Gelman suggests a lot:

When you do have multiple comparisons, I think the right way to go is to analyze all of them using a hierarchical model—not to pick one or two or three out of context and then try to adjust the p-values using a multiple comparisons correction. ...

To put it another way, the original sin is selection. The problem with p-hacked work is not that p-values are uncorrected for multiple comparison, it’s that some subset of comparisons is selected for further analysis, which is wasteful of information. It’s better to analyze all the comparisons of interest at once.
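To illustrate (my own toy sketch, not code from Gelman or from the studies in the post): with a hierarchical/partial-pooling analysis, every group's estimate gets shrunk toward the overall mean, with noisier estimates shrunk harder, instead of a few hand-picked comparisons being tested separately and then 'corrected'.

```python
# Toy partial-pooling (empirical Bayes, random-effects) analysis of many
# group-level comparisons at once. The data here are simulated; in the real
# application each group might be an age x gender x jurisdiction cell with an
# estimated stop- or search-rate difference and its standard error.
import numpy as np

rng = np.random.default_rng(0)

n_groups = 40
true_effects = rng.normal(0.0, 0.05, n_groups)   # unknown in real life
std_errors = rng.uniform(0.02, 0.10, n_groups)
estimates = true_effects + rng.normal(0.0, std_errors)

# Method-of-moments estimate of the between-group variance tau^2.
grand_mean = np.average(estimates, weights=1.0 / std_errors**2)
tau2 = max(np.mean((estimates - grand_mean) ** 2 - std_errors**2), 0.0)

# Partial pooling: each estimate is pulled toward the grand mean, and the
# noisier the estimate, the harder it gets pulled.
shrinkage = tau2 / (tau2 + std_errors**2)
pooled = grand_mean + shrinkage * (estimates - grand_mean)

for raw, post in zip(estimates[:5], pooled[:5]):
    print(f"raw {raw:+.3f} -> partially pooled {post:+.3f}")
```

A fully Bayesian multilevel model (the kind Gelman actually advocates) would do the same sort of pooling while also propagating the uncertainty, but this shows the basic idea.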

It'd be nice if the researchers for the studies you reference in your post had also published their data. (Did they? I expect they didn't – but I haven't checked.)

Comment by kenny on The Solomonoff Prior is Malign · 2020-10-14T15:07:49.705Z · score: 8 (7 votes) · LW · GW

I liked this post a lot, but I did read it as something of a scifi short story with a McGuffin called "The Solomonoff Prior".

It probably also seemed really weird because I just read Why Philosophers Should Care About Computational Complexity [PDF] by Scott Aaronson and having read it makes sentences like this seem 'not even' insane:

The combined strategy is thus to take a distribution over all decisions informed by the Solomonoff prior, weight them by how much influence can be gained and the version of the prior being used, and read off a sequence of bits that will cause some of these decisions to result in a preferred outcome.

The Consequentialists are of course the most badass (by construction) alien villains ever, "trying to influence the Solomonoff prior" as they are wont to do!

Given that some very smart people seem to seriously believe in Platonic realism, maybe there are Consequentialists malignly influencing vast infinities of universes! Maybe our universe is one of them.

I'm not sure why, but I feel like the discovery of a proof of P = NP or P ≠ NP is the climax of the heroes' valiant struggle, as the true heirs of the divine right to wield The Solomonoff Prior, against the dreaded (other-universe) Consequentialists.

Comment by kenny on Does anyone worry about A.I. forums like this where they reinforce each other’s biases/ are led by big tech? · 2020-10-13T19:03:52.985Z · score: 1 (1 votes) · LW · GW

I don't think they're dominating the conversation – just some conversations, i.e. the ones they pay for. I don't think them doing this is negatively affecting any other conversations, e.g. by academics or "people I suppose less focused on a mentality of “move fast and break things” when they are breaking people".

(I'm not sure what you mean by "when they are breaking people" – any details or specifics you can share about this?)

If anything, I've been pleasantly surprised at how open those same big tech companies are to 'friendly AI'.

It also doesn't hurt tho that deploying (and maintaining) effective AI (systems) seems fairly difficult.

Comment by kenny on Why Boston? · 2020-10-13T18:54:13.930Z · score: 1 (1 votes) · LW · GW

Ohhh – what's the context of that? A past possibility? Or just a hypothetical?

Comment by kenny on Police violence: The veil of darkness · 2020-10-13T18:52:19.998Z · score: 1 (1 votes) · LW · GW

Instead of augmented reality goggles we use the geometry of the earth and sun.

Huh?

Comment by kenny on The Sun Room · 2020-10-12T22:15:43.912Z · score: 3 (3 votes) · LW · GW

This is a beautiful poetic gem of a post!

I can't think of any keys to 'the sun room' but I definitely know of some doors or barriers that block me from entering, e.g. distractions, discomfort, anxiety, depression.

Being in a literal sunny room is often a good way to enter too!

I am particularly sympathetic to this:

I love to code here, though one must keep an eye for refactoring hubris when sitting in the sun room.

Comment by Kenny on [deleted post] 2020-10-12T20:10:10.774Z

There are some additional and significant details that complicate this kind of analysis:

  1. In some places, renters have something more accurately describable as quasi- or partial ownership.
  2. In a lot (?) of places, owners also have to pay property taxes, which can change (e.g. increase) significantly over short periods.
  3. In the U.S. (and I'd guess some other places), owners are subsidized, e.g. via the mortgage interest deduction.
  4. It is also the case that, (again!) in some places, it's considerably cheaper to rent than own – these seem to be places where housing is generally (and relatively) expensive but renters also have something like quasi- or partial ownership of the housing they're renting.

In my experience, most housing rentals involve a significant time commitment, i.e. a twelve (12) month lease. That's a very different situation than month-to-month or one in which a tenant could leave with a month or two notice. And moving itself is very costly – not just in financial terms, but in time, energy, and considerable stress. Transaction costs are important!

Comment by kenny on Why Boston? · 2020-10-12T19:47:59.795Z · score: 5 (2 votes) · LW · GW

Ahhh – I didn't know MIRI (or similar groups) were allowing people to work remotely.

I think Robin Hanson might be on to something with respect to the looming importance and significance of remote work (e.g. it will effectively create a much larger, more global, labor market), so I'd expect MIRI-like organizations to have to be willing to pay those still-high labor costs regardless of where people live – and rent would be pretty cheap in Montreal (compared to SV or NYC or even Boston).

Comment by kenny on Why Boston? · 2020-10-12T19:44:31.009Z · score: 1 (1 votes) · LW · GW

I'm confused then. "Personal survival" seems like an 'avoid early death' metric, whereas 'personal flourishing' (or something similar) would include typical 'quality of life' measures.

Disaster is a recurring part of "the real world" too and some places are more or less dangerous than others in that respect. That seemed to be what you were getting at.

Comment by kenny on [Link] Faster than Light in Our Model of Physics: Some Preliminary Thoughts—Stephen Wolfram Writings · 2020-10-12T19:41:42.585Z · score: 1 (1 votes) · LW · GW

He in fact did derive (approximately) both special and general relativity for the 'hypergraph physics' project – I think. I'll look for a link but it should be on the same site as the link for this post.

Have you read his previous book "A New Kind of Science"? It's available for free online here. I think the "analogies" he presents are surprisingly good given how simple they are, e.g. the fluid dynamics stuff seems 'right', even if it's not (nearly) as accurate as standard numerical approximations/simulations based on the standard differential equations.

Comment by kenny on Everything I Know About Elite America I Learned From ‘Fresh Prince’ and ‘West Wing’ · 2020-10-12T00:40:19.939Z · score: 7 (4 votes) · LW · GW

It's a good piece; I liked the end a lot:

TV helped me to understand people who were worlds away from how I grew up. It gave me an understanding of the ingredients of social mobility. What I can’t quite disentangle is whether it taught me how to get what I had always wanted or taught me what to want.

I'd imagine art in general does both things to an extent.

Comment by kenny on Everything I Know About Elite America I Learned From ‘Fresh Prince’ and ‘West Wing’ · 2020-10-12T00:31:19.388Z · score: 1 (1 votes) · LW · GW

Answering your clarified questions:

It seems obvious (to me) that being an elite is valuable (good) – for some people. Conversely – for some people – being an elite would be net anti-valuable (bad).

For you specifically (or me, who also doesn't particularly want to be 'elite'), I think we're probably right that we wouldn't enjoy it overall, despite the many advantages. We could be wrong about that tho!

It's something I wrestle with for more specific cases as well. Do I want to be an elite in my profession? I have mixed feelings. For one, the money would be nice! But I also don't want to work the longer hours I'd also expect.

It would seem reasonable to try being an elite, in some cases, if doing so doesn't seem too costly. Elite status does seem to be a particularly costly signal in general tho.

Comment by kenny on Why Boston? · 2020-10-11T23:55:06.659Z · score: 1 (1 votes) · LW · GW

I liked the notes, but they're hard to interpret (for me).

One example: I don't have a good sense of how cheap 400-600 CAD "per person" (in what I'm assuming is shared housing) is relative to plausible incomes by profession. If NYC housing costs are 150% of Montreal's, but so too are salaries, then Montreal isn't really all that "cheap for a big city".

There does seem to be a good bit of AI work tho, and research too; that's interesting!

Comment by kenny on Why Boston? · 2020-10-11T23:48:26.632Z · score: 1 (1 votes) · LW · GW

best city for personal survival

Like for a 'zombie apocalypse'?

Comment by kenny on [Link] Faster than Light in Our Model of Physics: Some Preliminary Thoughts—Stephen Wolfram Writings · 2020-10-11T00:28:46.357Z · score: 1 (1 votes) · LW · GW

Ahh – I can understand and sympathize with that!

I don't think he has literally one trick but you're right that a lot of his recent public work has been exploring his ideas instead of falsifying them.

I'd describe his 'main trick' as trying to find a simple computable system that mostly mimics the 'dynamics' of some other system.

And – or so I think – exploring, at considerable length, the idea that 'everything is space' and '(maybe) space is a hypergraph evolving according to a simple rule' is an extremely interesting endeavor. It doesn't seem particularly crazy compared to other niche 'theories of everything' for one.

And, yes, he talks and writes about 'universal computation', his own phrase, instead of 'Turing-complete' – that's a somewhat lamentable phenomenon, but pretty understandable. We all – as individuals and groups – do that too tho, so I don't really 'ding' him for those 'excesses'. This is an extremely common complaint about him and his work, but it's mostly irrelevant to determining whether his ideas are interesting, let alone true.

(Arguably we – the LessWrong users – have done the same thing repeatedly!)

I think the bigger thing that he has – not demonstrated exactly, but accumulated tantalizing evidence for – is that Turing-completeness ('universal computation') is both easy to achieve and, surprisingly, common. I still think that's an under-appreciated point.

His recent 'hypergraph' work seems promising to me – it seems like a (very mildly or weakly) plausible (tho rough) idea of how one might formulate everything else in terms of 'space quanta', and his ideas about what 'time' and 'causality' could mean, based on an example formulation, seem very interesting. I certainly don't begrudge him, or anyone else, spending their time doing this. And I definitely don't think he, or anyone else, owes me a falsifiable theory! (I might feel a little differently if I was involuntarily supporting his efforts, e.g. via taxes, like I am with string theory.)

The practical obstacles to actually start to test how well his ideas or theories work seem insurmountable, but that's still true of string theory as well – and maybe you feel similarly about it!

Comment by kenny on On Slack - Having room to be excited · 2020-10-10T20:30:40.910Z · score: 2 (2 votes) · LW · GW

This is a great post! Thanks!

Something that this reminded me of is 'top-down versus bottom-up', or, really, the need for both top-down and bottom-up elements in 'healthy' and long-term-effective systems. Another very-much-related concept is the explore-vs-exploit tradeoff (and the general conclusion that both are necessary, to varying degrees in different circumstances).

Comment by kenny on The Treacherous Path to Rationality · 2020-10-10T04:23:23.887Z · score: 7 (5 votes) · LW · GW

Music isn't the sole domain of people that are particularly interested in it either, but it doesn't seem "super toxic" that they might consider themselves to be, let alone refer to themselves as, 'music people'. It seems like a natural shorthand given that that is the topic or subject around which they've organized.

And yes, it is – mostly – about the ideas. I've only been to a few meetups and generally prefer to read along and occasionally comment, but I'm open to 'joining the tribe' (or some 'band' close by) too because it is nice to be able to socialize with people that think similarly and about the same topics.

The examples in the post about people bouncing off the community also seemed to be cases where they were bouncing off the ideas too.

Comment by kenny on Why isn't JS a popular language for deep learning? · 2020-10-10T00:13:20.142Z · score: 1 (1 votes) · LW · GW

This seems like a duplicate answer.

Comment by kenny on Why isn't JS a popular language for deep learning? · 2020-10-10T00:10:45.335Z · score: 3 (2 votes) · LW · GW

Particularly the path-dependence of culture and socialization.

Comment by kenny on Covid 10/8: October Surprise · 2020-10-09T23:30:33.077Z · score: 4 (3 votes) · LW · GW

I'd prefer no censoring – by you – but I don't really mind what you're doing now.

I think your emotions are useful information and profanity is sometimes a perfect vehicle for them.

Comment by kenny on [Link] Faster than Light in Our Model of Physics: Some Preliminary Thoughts—Stephen Wolfram Writings · 2020-10-09T22:21:57.616Z · score: 1 (1 votes) · LW · GW

This is extremely uncharitable.

For one, it discusses other possibilities for FTL beyond 'wormholes'; for another, 'wormhole' is mostly a mysterious label for the possibility of the 'locality' of space being more complicated than our intuitive understanding.

The linked post is part of a larger project exploring the possibility of a 'hypergraph physics' – it's not asserting that the universe is a hypergraph but 'assuming' it for the sake of explication.

Comment by kenny on What reacts would you like to be able to give on posts? (emoticons, cognicons, and more) · 2020-10-09T22:15:00.332Z · score: 1 (1 votes) · LW · GW

I don't mind any number of people replying "Updated". So, yes, I would prefer that over a count of some small number of 'standard reactions'.

But, as I suggested in another comment on this post, external survey tools could be easily used to gather this data or feedback if you or anyone else really think it's valuable or useful.

I would like to see some evidence about how well those work and how useful that gathered data is before I change my mind about this being useful here.

We want to lower the bar to providing feedback so we can get more feedback

I don't want this as I don't think 'more feedback' is particularly useful, valuable, or germane to this site.

I'm also thinking about this request in terms of additional work, and ongoing maintenance, by the site's developers/maintainers.

I'm also unclear why anyone would want to (seemingly) optimize for such incredibly low-density info as the count of 'reacts' on posts. We are mostly – at times explicitly – trying to avoid persuading each other and instead focus on sharing our (detailed) thoughts and feelings so that we can, as a group, reason better. This all seems exactly backwards given that.

Comment by kenny on What reacts would you like to be able to give on posts? (emoticons, cognicons, and more) · 2020-10-09T22:04:16.266Z · score: 0 (2 votes) · LW · GW

I think our (effectively) requiring comments is better than what you're proposing.

I don't think I've published any posts other than link posts, but even with my 'poster' hat on, I'd (personally) much prefer engagement and discussion to a simple 'self-reported understanding' count. I measure understanding relative to engagement and would estimate it based on the specific and particular details in comments, e.g. whether several users have pointed out that something was confusing; what expected, or surprising, connections others make; whether the arguments about, and summaries or paraphrases of, my post match my own understanding of the topic.

I wouldn't trust a simple count of the number of users that report 'understanding' a post and thus I wouldn't find it to be particularly valuable.

But I agree with both of your last points – your proposal very well might result in more feedback and these metrics would be trivially accessible versus manually interpreting some number of text comments.

I'd prefer that LessWrong remain as-is in this way.

But I think you could implement this yourself with external survey tools – and I'd be very interested in reading about any experiments along those lines!