Thomas Kwa's Shortform

post by Thomas Kwa (thomas-kwa) · 2020-03-22T23:19:01.335Z · score: 2 (1 votes) · LW · GW · 14 comments

14 comments

Comments sorted by top scores.

comment by Thomas Kwa (thomas-kwa) · 2020-06-13T04:32:57.946Z · score: 6 (4 votes) · LW(p) · GW(p)

Say I need to publish an anonymous essay. If it's long enough, people could plausibly deduce my authorship based on the writing style; this is called stylometry. The only stylometry-defeating tool I can find is Anonymouth; it hasn't been updated in 7 years and it's unclear if it can defeat modern AI. Is there something better?
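For context on what such a tool would have to defeat, here is a minimal sketch of a basic stylometric attribution attack: character n-gram frequencies plus a linear classifier. The corpus variables are placeholders, and real attacks (including the ones Anonymouth/JStylo were built around) use much richer feature sets.

```python
# Minimal sketch of a stylometric attribution attack: character n-gram
# frequencies + a linear classifier. Placeholder data; real attacks use
# richer features (function words, punctuation habits, sentence lengths).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

known_texts = ["...essay by author A...", "...essay by author B..."]
known_authors = ["A", "B"]
anonymous_essay = "...the essay you want to publish anonymously..."

attributor = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
attributor.fit(known_texts, known_authors)

# If this confidently picks out the true author, the essay's style leaks identity.
print(attributor.predict_proba([anonymous_essay]))
```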

comment by Thomas Kwa (thomas-kwa) · 2020-09-19T00:44:57.788Z · score: 5 (4 votes) · LW(p) · GW(p)

Given that social science research often doesn't replicate [LW · GW], is there a good way to look up a social science finding or paper and see whether it's valid?

Ideally, one would be able to type in e.g. "growth mindset" or a link to Dweck's original research, and see:

  • a statement of the idea e.g. 'When "students believe their basic abilities, their intelligence, their talents, are just fixed traits", they underperform students who "understand that their talents and abilities can be developed through effort, good teaching and persistence." Carol Dweck initially studied this in 2012, measuring 5th graders on IQ tests.'
  • an opinion from someone reputable
  • any attempted replications, or meta-analyses that mention it
  • the Replication Markets predicted replication probability, if no replications have been attempted.
comment by habryka (habryka4) · 2020-09-19T17:17:59.598Z · score: 3 (2 votes) · LW(p) · GW(p)

Alas, the best I have usually been able to do is "<Name of the paper> replication" or "<Name of the author> replication". 

comment by Thomas Kwa (thomas-kwa) · 2020-07-08T22:30:27.358Z · score: 4 (3 votes) · LW(p) · GW(p)

Are there ring species where the first and last populations actually can interbreed? What evolutionary process could feasibly create one?

comment by Thomas Kwa (thomas-kwa) · 2020-10-01T22:41:52.382Z · score: 3 (2 votes) · LW(p) · GW(p)

One of my professors says this often happens with circular island chains; populations from any two adjacent islands can interbreed, but not those from islands farther apart. I don't have a source. Presumably this doesn't require an expanding geographic barrier.

comment by Richard_Kennaway · 2020-07-09T08:34:14.341Z · score: 2 (1 votes) · LW(p) · GW(p)

Wouldn't that just be a species?

comment by Pattern · 2020-07-11T17:14:00.460Z · score: 2 (1 votes) · LW(p) · GW(p)

Ouroboros species.

comment by Thomas Kwa (thomas-kwa) · 2020-07-09T20:54:25.739Z · score: 2 (2 votes) · LW(p) · GW(p)

I'm thinking of a situation where there are subspecies A through (say) H; A can interbreed with B, B with C, etc., and H with A, but no non-adjacent subspecies can produce fertile offspring.

comment by Pongo · 2020-07-09T06:34:19.102Z · score: 2 (2 votes) · LW(p) · GW(p)

A population distributed around a small geographic barrier that grew over time could produce what you want.

comment by Thomas Kwa (thomas-kwa) · 2020-06-10T00:56:00.308Z · score: 4 (3 votes) · LW(p) · GW(p)

2.5 million jobs were created in May 2020, according to the jobs report. Metaculus was something like [99.5% or 99.7% confident](https://www.metaculus.com/questions/4184/what-will-the-may-2020-us-nonfarm-payrolls-figure-be/) that the number would be smaller, with the community median at -11.0 million and the 99th percentile at -2.8 million. This seems like an obvious sign Metaculus is miscalibrated, but we have to consider both tails: a miss this extreme in either direction makes this merely a 1-in-100 or 1-in-150 event, which doesn't seem too bad.
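A quick sketch of that arithmetic, assuming the 99.5% figure (the 99.7% figure gives the 1-in-150 number the same way):

```python
# Surprise calculation for the May 2020 jobs report vs. the Metaculus
# community distribution, using the 99.5% figure from the comment above.
p_lower = 0.995              # community probability the figure would be smaller
p_upper_tail = 1 - p_lower   # one-tailed probability of a result at least this high

one_tailed = 1 / p_upper_tail          # 1 in 200
two_tailed = 1 / (2 * p_upper_tail)    # 1 in 100: a miss this bad in *either* tail

print(f"1 in {one_tailed:.0f} one-tailed, 1 in {two_tailed:.0f} counting both tails")
# With 99.7% the same arithmetic gives about 1 in 333 and 1 in 167 ("1 in 150").
```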

comment by Thomas Kwa (thomas-kwa) · 2020-03-25T08:36:31.741Z · score: 3 (2 votes) · LW(p) · GW(p)

The most efficient form of practice is generally to address one's weaknesses. Why, then, don't chess/Go players train by playing against engines optimized for this? I can imagine three types of engines:

  1. Trained to play more human-like sound moves (soundness as measured by stronger engines like Stockfish, AlphaZero).
  2. Trained to play less human-like sound moves.
  3. Trained to win against (real or simulated) humans while making unsound moves.

The first tool would simply be an opponent when humans are inconvenient or not available. The second and third tools would highlight weaknesses in one's game more efficiently than playing against humans or computers. I'm confused about why I can't find any attempts at engines of type 1 that apply modern deep learning techniques, or any attempts whatsoever at engines of type 2 or 3.
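As a concrete illustration, here is a rough sketch of what a type-2 move selector might look like, using python-chess with Stockfish for soundness and a hypothetical `human_move_probability` model (something Maia-like) for human-likeness. The scoring rule, thresholds, and the human model are assumptions for illustration, not an existing tool.

```python
import chess
import chess.engine

# Hypothetical model returning P(a human plays this move | position),
# e.g. a Maia-style policy network. Not an existing library call.
def human_move_probability(board: chess.Board, move: chess.Move) -> float:
    raise NotImplementedError("plug in a human move-prediction model here")

def least_humanlike_sound_move(board: chess.Board,
                               engine: chess.engine.SimpleEngine,
                               eval_margin_cp: int = 30) -> chess.Move:
    """Pick a move that is still sound (within eval_margin_cp centipawns of
    Stockfish's best line) but least likely to be played by a human (type 2)."""
    limit = chess.engine.Limit(depth=16)
    infos = engine.analyse(board, limit, multipv=len(list(board.legal_moves)))
    best_cp = infos[0]["score"].relative.score(mate_score=100000)
    sound_moves = [
        info["pv"][0]
        for info in infos
        if best_cp - info["score"].relative.score(mate_score=100000) <= eval_margin_cp
    ]
    # Among sound moves, choose the one a human is least likely to find.
    return min(sound_moves, key=lambda m: human_move_probability(board, m))

# Usage sketch:
# engine = chess.engine.SimpleEngine.popen_uci("stockfish")
# move = least_humanlike_sound_move(chess.Board(), engine)
# engine.quit()
```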

comment by Thomas Kwa (thomas-kwa) · 2020-05-07T02:54:10.155Z · score: 1 (1 votes) · LW(p) · GW(p)

Someone happened to ask a question on Stack Exchange about engines trained to play less human-like sound moves. The question is here, but most of the answerers don't seem to understand it.

comment by Thomas Kwa (thomas-kwa) · 2020-03-22T23:51:18.114Z · score: 3 (2 votes) · LW(p) · GW(p)

Eliezer Yudkowsky wrote in 2016:

At an early singularity summit, Jürgen Schmidhuber, who did some of the pioneering work on self-modifying agents that preserve their own utility functions with his Gödel machine, also solved the friendly AI problem. Yes, he came up with the one true utility function that is all you need to program into AGIs!

(For God’s sake, don’t try doing this yourselves. Everyone does it. They all come up with different utility functions. It’s always horrible.)

His one true utility function was “increasing the compression of environmental data.” Because science increases the compression of environmental data: if you understand science better, you can better compress what you see in the environment. Art, according to him, also involves compressing the environment better. I went up in Q&A and said, “Yes, science does let you compress the environment better, but you know what really maxes out your utility function? Building something that encrypts streams of 1s and 0s using a cryptographic key, and then reveals the cryptographic key to you.”
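To make the counterexample concrete, here is a toy sketch of my own (not from the talk): a pseudorandom generator seeded with a short key stands in for the encryption scheme.

```python
import random
import zlib

# Toy version of EY's counterexample: a long "environmental" byte stream
# generated from a short secret key (a seeded PRNG stands in for a cipher).
secret_key = 42
rng = random.Random(secret_key)
stream = bytes(rng.randrange(256) for _ in range(100_000))

# Without the key the stream looks structureless: zlib barely shrinks it.
print(len(zlib.compress(stream)))  # roughly 100,000 bytes

# Once the key is revealed, the entire stream is reproducible from a few
# bytes (the key plus the generating rule), so "compression of environmental
# data" jumps enormously without anything science- or art-like happening.
```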

At first it seemed to me that EY was refuting the entire idea that "increasing the compression of environmental data" is intrinsically valuable. This surprised me because my intuition says it is intrinsically valuable, though less so than other things I value.

But EY's larger point was just that it's highly nontrivial for people to imagine the global maximum of a function. In this specific case, building a machine that encrypts random data seems like a failure of embedded agency [LW · GW] rather than a flaw in the idea behind the utility function. What's going on here?

comment by Viliam · 2020-03-24T00:46:12.470Z · score: 3 (2 votes) · LW(p) · GW(p)

Something like Goodhart's Law, I suppose. There are natural situations where X is associated with something good, but literally maximizing X is actually quite bad. (Having more gold would be nice. Converting the entire universe into atoms of gold, not necessarily so.)

EY has practiced the skill of trying to see things like a machine. When people talk about "maximizing X", they usually mean "trying to increase X in a way that proves my point"; i.e. they use motivated thinking.

Whatever X you take, the priors are almost 100% that literally maximizing X would be horrible. That includes the usual applause lights, whether they appeal to normies or nerds.