Posts

Book review of "Mengzi" 2022-03-12T04:54:01.801Z

Comments

Comment by Anonymous (currymj) on Linkpost: They Studied Dishonesty. Was Their Work a Lie? · 2023-10-05T21:30:48.847Z · LW · GW

I don't think this is a sufficiently complete way of looking at things. It could make sense when the problem was thought to be a "replication crisis via p-hacking", but it turns out things are worse than that.

  • The research methodology in biology doesn't necessarily have room for statistical funny business, but there are all these cases of influential Science/Nature papers that contained fraud via Photoshop.
  • Gino and Ariely's papers might have been statistically impeccable; the problem is that they were just making up data points.
  • There is fraud in experimental physics and the applied sciences too, from time to time.

I don't know much about what opportunities there are for bad research practices in the humanities. The only thing I can think of is citing a source that doesn't say what is claimed. This seems like a particular risk when history or historical claims are involved, or when a humanist wants to refer to the scientific literature. The spectacular claim that Victorian doctors treated "hysteria" using vibrators turns out to have resulted from something like this.

Outside cases like that, I think the humanities are mostly "safe" like math in that they just need some kind of internal consistency, whether that is presenting a sound argument, or a set of concepts and descriptions that people find to be harmonious or fruitful.

Comment by Anonymous (currymj) on We Should Prepare for a Larger Representation of Academia in AI Safety · 2023-08-16T09:34:48.081Z · LW · GW

I think the biggest difference is that this will mean more people with a wider range of personality types, socially interacting in a more arms-length/professionalized way, according to the social norms of academia.

Especially in CS, you can be accepted among academics as a legitimate researcher even without a formal degree, but it would require being able and willing to follow these existing social norms.

And in order to welcome and integrate new AI safety researchers from academia, the existing AI safety scene would have to make some spaces to facilitate this style of interaction, rather than the existing informal/intense/low-social-distance style.

Comment by Anonymous (currymj) on Why was the AI Alignment community so unprepared for this moment? · 2023-07-17T09:43:04.999Z · LW · GW

This community is doing way better than it has any right to for a bunch of contrarian weirdos with below-average social skills. It's actually astounding.

The US government and the broader military-industrial complex are taking existential AI risk somewhat seriously. The head of the RAND Corporation is an existential risk guy who used to work for FHI.

Apparently the Prime Minister of the UK and various European institutions are concerned as well.

There are x-risk-concerned people at most top universities for AI research and within many of the top commercial labs.

In my experience "normies" are mostly open to simple, robust arguments that AI could be very dangerous if sufficiently capable, so I think the outreach has been sufficiently good on that front.

There is a much more specific set of arguments about advanced AI (exotic decision theories, theories of agency and preferences, computationalism about consciousness) that is harder to explain and defend than the basic AI risk case, and so would rhetorically weaken it. But people who like those ideas get very excited about them. Thus I think having a lot more popular materials by LessWrong-ish people would do more harm than good, so it was a good move, whether intentional or not, to avoid this. (On the other hand, if you think these ideas are absolutely crucial considerations without which sensible discussion is impossible, then it is not good.)

Comment by Anonymous (currymj) on What I Think About When I Think About History · 2023-07-06T08:53:30.542Z · LW · GW

This is the case for me as well, and I don't remember when it developed. I have a timeline that starts with the present day on the right, and goes left and slightly up. It gets blurry around 500 BC. I can somewhat zoom in and recenter it if I'm thinking about individual historical periods. I can roughly place some historical events in the correct spots on the timeline, but since I have never needed to formally memorize many historical dates, this is very rough.

You might be interested in reading about experiences in the broad category of synesthesia, and about the really fascinating history of "memory palace" techniques. Also in the linguistic details of how different languages spatially talk about the past and future (e.g. in English the past is behind/future is ahead; in Chinese, past is above/future is below).

Comment by Anonymous (currymj) on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2023-01-27T15:47:46.287Z · LW · GW

Normal, standard causal decision theory is probably it. You can make a case that people sometimes intuitively use evidential decision theory ("Do it. You'll be glad you did."), but if asked to spell out their decision-making process, most would probably describe causal decision theory.
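
As a toy illustration (my own, not anything from the question): the two rules come apart on Newcomb-style problems, where evidential decision theory treats your choice as evidence about the predictor, while causal decision theory treats the prediction as already fixed.

```python
# Toy Newcomb-style calculation, purely illustrative.
# Box A always holds $1,000; box B holds $1,000,000 only if the
# predictor expected you to take box B alone ("one-box").

ACCURACY = 0.99  # assumed predictor accuracy (a made-up number)

def edt_value(action):
    """Evidential: condition on your own choice as evidence about the prediction."""
    p_million = ACCURACY if action == "one-box" else 1 - ACCURACY
    return p_million * 1_000_000 + (1_000 if action == "two-box" else 0)

def cdt_value(action, p_million):
    """Causal: the prediction is fixed, so your choice cannot change p_million."""
    return p_million * 1_000_000 + (1_000 if action == "two-box" else 0)

for action in ("one-box", "two-box"):
    print(action, "EDT:", edt_value(action), "CDT (fixed belief 0.5):", cdt_value(action, 0.5))
# EDT recommends one-boxing; CDT recommends two-boxing, which dominates
# for every fixed value of p_million.
```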

Comment by Anonymous (currymj) on Where's the economic incentive for wokism coming from? · 2022-12-10T08:00:11.133Z · LW · GW

Fandom people on Tumblr, AO3, etc. really responded to The Last Jedi (because it was targeted to them). Huge phenomenon. There are now bestselling romance novels that started life as TLJ fanfiction. Everything worked just like it does for the Marvel movies, very profitably.

However there was an additional group of Star Wars superfans outside of fandom, who wanted something very different, hence the backlash. This group is somewhat more male and conservative, and then everything polarized on social media so this somehow became a real culture war issue. Of course, Disney did not like the backlash, and tried to make the 3rd movie more palatable to this group.

That kind of fan doesn't organically exist for most things outside of Star Wars though. For most things, you only get superfans in this network of fan communities which skew towards social justice. And for any new genre story without a pre-existing fanbase, there's an opportunity to get fandom people excited about it, which is very valuable.

Comment by Anonymous (currymj) on Where's the economic incentive for wokism coming from? · 2022-12-09T13:12:35.396Z · LW · GW

As far as running a media company goes, fandom is extremely profitable, increasingly so in an age where enormous sci-fi/fantasy franchises drive everything. And there's been huge overlap between fandom communities and social justice politics for a long time.

It's definitely in Disney's interest to appeal to Marvel superfans who write fanfiction and cosplay and buy tons of merchandise, and those people tend to also be supporters of social justice politics.

Like, nothing is being forced on this audience -- there are large numbers of people who get sincerely excited when a new character is introduced that gives representation for the first time to a new minority group, or something like that.

As with so many businesses, the superfans are worth quite a few normies who might be put off by this. I think this is the main explanation.

Comment by Anonymous (currymj) on Where to be an AI Safety Professor · 2022-12-08T18:54:46.147Z · LW · GW

The “canonical” rankings that CS academics care about would be csrankings.org (also not without problems but the least bad).

Comment by Anonymous (currymj) on Adversarial Policies Beat Professional-Level Go AIs · 2022-11-03T17:11:38.851Z · LW · GW

The KataGo paper says of its training, "Self-play games used Tromp-Taylor rules modified to not require capturing stones within pass-alive territory".

It sounds to me like this is the same scoring system as used in the adversarial attack paper, but I don't know enough about Go to be sure.

Comment by Anonymous (currymj) on The Last Year - is there an existing novel about the last year before AI doom? · 2022-10-23T09:41:32.143Z · LW · GW

The Sprawl trilogy by William Gibson (starting with Neuromancer) is basically about this, and is a classic for a reason. It's not exactly hard sci-fi though.

Comment by Anonymous (currymj) on Signaling Guilt · 2022-10-10T17:04:13.372Z · LW · GW

If you don’t signal the expected way then you are, if not being dishonest, at least misleading people; in many cases not signaling is the less honest option.

Everyone knows your job application is written to puff you up, and they price it in. If you don’t have the correct amount of puffery, you’re misleading people into thinking you’re worse than you are.

It’s a bad way to communicate and a bad race-to-the-bottom equilibrium but not actually dishonest.

You can write “Dear X” on a letter to a person you don’t know. People used to sign off letters “Your obedient servant”. It evolves for weird signaling reasons but is not taken literally.

Comment by Anonymous (currymj) on The Teacup Test · 2022-10-10T13:32:16.956Z · LW · GW

"Systems that would adapt their policy if their actions would influence the world in a different way"

Does the teacup pass this test? It doesn't necessarily seem like it.

We might want to model the system as "Heat bath of Air -> teacup -> Socrates' tea". The teacup "listens to" the temperature of the air on its outside, and according to some equation transmits some heat to the inside. In turn the tea listens to this transmitted heat and determines its temperature.

You can consider the counterfactual world where the air is cold instead of hot. Or the counterfactual world where you replace "Socrates' tea" with "Meletus' tea", or with a frog that will jump out of the cup, or whatever. But in all cases the teacup does not actually change its "policy", which is just to transmit heat to the inside of the cup according to the laws of physics.

To put it in the terminology of "Discovering Agents", one can add mechanism variables going into the object-level variables. But there are no arrows between these, so there's no agent.

Of course, my model here is bad and wrong physically speaking, even if it does capture crude cause-effect intuition about the effect of air temperature on beverages. However I'd be somewhat surprised if a more physically correct model would introduce an agent to the system where there is none.
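
For concreteness, here is a very rough sketch of the graph I have in mind (node names and the yes/no check are mine, not the actual formalism from the paper): mechanism variables feed into the object-level variables, but there are no edges among the mechanisms themselves, so the criterion finds no agent.

```python
# Crude causal-graph sketch of the teacup system; all names are made up.
# Object-level chain: air temperature -> heat transmitted by the cup -> tea temperature.
# Each object-level variable gets a "mechanism" parent fixing how it
# responds to its parents (the cup's heat-transfer law, etc.).

edges = {
    ("air_temp", "cup_heat_flow"),
    ("cup_heat_flow", "tea_temp"),
    ("mech_air", "air_temp"),
    ("mech_cup", "cup_heat_flow"),
    ("mech_tea", "tea_temp"),
}
mechanisms = {"mech_air", "mech_cup", "mech_tea"}

# Rough version of the criterion: is there any edge between mechanism
# variables, i.e. does some mechanism adapt when another mechanism changes?
has_agent = any(src in mechanisms and dst in mechanisms for src, dst in edges)
print("agent detected:", has_agent)  # False: the cup never changes its policy
```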

Comment by Anonymous (currymj) on Funding is All You Need: Getting into Grad School by Hacking the NSF GRFP Fellowship · 2022-09-25T10:16:47.489Z · LW · GW

There are industry places that will, at least as stated, take you seriously with no PhD as long as you have some publications (many job postings don't require a PhD, or say "or equivalent research experience"), and it's unusual but not unheard of for people to do this.

The thing is, a PhD program is a reliable way to build a research track record. And you don't see too many PhD dropouts who want to be scientists, because if you've got a research track record, the extra cost of just finishing your dissertation and graduating is pretty low.

Comment by Anonymous (currymj) on A Bias Against Altruism · 2022-07-24T03:54:35.003Z · LW · GW

People sometimes seem to act like unsolved problems are exasperating, aesthetically offensive, or somehow unappealing, so they have no choice but to roll up their sleeves and try to help fix them, because it's just so irritating to see the problem go unsolved. So one can do purely altruistic stuff, but with this selfish posture (which also shifts focus away from motivation and psychology) it won't trip the hypocrisy alarms. It may also genuinely be a better attitude to cultivate, if it helps deflate one's ego a little bit -- I'm not quite sure.

Comment by Anonymous (currymj) on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-07T23:34:41.248Z · LW · GW

A lot of the AI risk arguments seem to come mixed together with assumptions about a particular type of utilitarianism, and with a very particular transhumanist aesthetic about the future (nanotech, von Neumann probes, Dyson spheres, tiling the universe with matter in fixed configurations, simulated minds, etc.).

I find these things (especially the transhumanist stuff) not very convincing relative to the confidence people seem to express about them, but they also don't seem to be essential to the problem of AI risk. Is there a minimal version of the AI risk arguments that is disentangled from these things?

Comment by Anonymous (currymj) on Reshaping the AI Industry · 2022-06-02T02:24:08.303Z · LW · GW

Most academic research work is done by grad students, and grad students need incremental, legible wins to put on their CV so they can prove they are capable of doing research. This has to happen pretty fast: an ML grad student who hasn't contributed to any top conference papers by their second or third year in grad school might get pulled aside for a talk about their future.

Ideally you want a topic where you can go from zero to paper in less than a year, with multiple opportunities for follow-up work. Get a few such projects going and you have a very strong chance of getting at least one through in time to not get managed out of your program -- and of course, usually more will succeed and you'll be doing great.

I don't think there's anything like this in AI safety research. Section 3.4 seems to acknowledge this a little bit. If you want AI safety to become more popular, you'd hope that an incoming PhD student could say "I want to work on AI Safety" and be confident that in a year or two, they'll have a finished research project that they can claim as a success and submit to a top venue. Otherwise, they are taking a pretty huge career risk, and most people won't take it.

Comment by Anonymous (currymj) on What's up with the recent monkeypox cases? · 2022-05-20T01:43:08.851Z · LW · GW

“Does the disease heavily affect career-age people (age 25-65), or frequently leave survivors with lasting disability?”

This is rightly ticked off as “No”, but I think it morally counts as “Yes” if there is more danger to young children. That’s scarier in itself, and from COVID it seems people are also more likely to accept very extreme NPIs to protect children, meaning there might well be a large economic impact.

Comment by Anonymous (currymj) on Is there a convenient way to make "sealed" predictions? · 2022-05-07T18:08:00.239Z · LW · GW

Historically, scientists would use anagrams to do this. Galileo famously circulated the anagram "Smaismrmilmepoetaleumibunenugttauiras". Later he revealed that it could be unscrambled into "Altissimum planetam tergeminum observavi", which per Wikipedia is Latin for "I have observed the most distant planet to have a triple form", establishing his priority in observing what would later be recognized as the rings of Saturn.

Obviously hashing and salting is better, nowadays.
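
For instance, a minimal sketch of the hash-and-salt approach (the exact workflow and tools are up to you):

```python
import hashlib
import secrets

prediction = "My sealed prediction goes here."
salt = secrets.token_hex(16)  # keep the salt private until reveal time

# Publish only the commitment now...
commitment = hashlib.sha256((salt + prediction).encode()).hexdigest()
print("publish now:", commitment)

# ...then later reveal both salt and prediction so anyone can check the hash.
assert hashlib.sha256((salt + prediction).encode()).hexdigest() == commitment
print("reveal later:", salt, "|", prediction)
```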

Comment by Anonymous (currymj) on Book review of "Mengzi" · 2022-03-13T14:26:14.571Z · LW · GW

From my limited knowledge, that's definitely one of the purposes Ruism/Confucianism was put to -- especially once the civil service exams were instituted.

In one way, "philosophy of the establishment" seems mostly correct to me, as the Mengzi seemingly makes a core assumption that the current social order is legitimate. But it mostly isn't making excuses for that social order (as philosophy and social science often does), it's challenging rulers to live up to an ideal and serve the people. At one point, Mengzi says that any king who "mutilates benevolence" is a "mere fellow" who can be rightfully executed by the people.

And I don't know enough about history, but it seems like nearly every Chinese philosopher of any school -- even maybe Zhuangzi (??) -- was involved in some kind of government position. Maybe that's what every literate person did. So it's hard to draw a clear "establishment/anti-establishment" line.

Comment by Anonymous (currymj) on Introducing myself: Henry Lieberman, MIT CSAIL, whycantwe.org · 2022-03-04T01:41:48.746Z · LW · GW

Schizophrenia is the wrong metaphor here -- it's not the same disease as split personalities (i.e. dissociative identity disorder). I think it would be clearer and more accurate to rewrite that paragraph without it. I don't intend this as an attack or harsh criticism, it's just that I have decided to be a pedant about this point whenever I encounter it, as I think it would be good for the general public to develop a more accurate and realistic understanding of schizophrenia.

Comment by Anonymous (currymj) on 12 interesting things I learned studying the discovery of nature's laws · 2022-02-21T02:00:32.114Z · LW · GW

Rubin's framework says, basically: suppose all our observations are in a big data table. Now consider the counterfactual observations that didn't happen (e.g. people in the control group getting the treatment); these are called "potential outcomes", and we treat them like missing cells in the data table. Then causal inference is just filling in the potential outcomes using missing-data imputation techniques, although to be valid this requires some assumptions about conditional independence.
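
In symbols (a standard textbook presentation, not something specific to this comment): each unit has two potential outcomes, only one of which is ever observed, and an "ignorability" assumption about conditional independence is what licenses filling in the missing one.

```latex
% Unit i has potential outcomes Y_i(1) and Y_i(0); only Y_i = Y_i(T_i) is observed.
\text{ATE} = \mathbb{E}[\,Y(1) - Y(0)\,],
\quad \text{identified if } \{Y(0), Y(1)\} \perp\!\!\!\perp T \mid X
\text{ and } 0 < P(T = 1 \mid X) < 1 .
```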

Pearl's framework and Rubin's are isomorphic in the sense that any set of causal assumptions in Pearl's framework (a structural causal model, which has a DAG structure), can be translated into a set of causal assumptions in Rubin's framework (a bunch of conditional independence assumptions about potential outcomes), and vice versa. This is touched on somewhat in Ch. 7 of "Causality".

Pearl argues that despite this equivalence, his framework is superior because it's a better tool for thinking. In other words, writing down your assumptions as DAG/SCM is intuitive and can be explained and argued about, while he claims the Rubin model independence assumptions are opaque and hard to understand.

Comment by Anonymous (currymj) on 12 interesting things I learned studying the discovery of nature's laws · 2022-02-20T21:17:37.404Z · LW · GW

I will give a potted history of Pearl's discovery as I understand it.

In the late 70s/early 80s, people wanted to deal with uncertainty in logic-based AI. The obvious thing to use is probability, but doing a Bayesian update to compute a posterior is exponentially expensive.

Pearl wanted to come up with a good data structure for doing computations over probability distributions in less-than-exponential time.

He introduced the idea of Bayesian networks in his paper "Reverend Bayes on Inference Engines", where he represents factorized probability distributions using DAGs. Here, the direction of the arrows is arbitrary, and there are many DAGs corresponding to one probability distribution.
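
A concrete instance (my example): the same joint distribution factorizes in either direction, so the arrows by themselves carry no causal information.

```latex
P(A, B) = P(A)\,P(B \mid A)  % DAG: A -> B
        = P(B)\,P(A \mid B)  % DAG: B -> A
```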

He was not thinking about causality at all; it was just a problem in data structures. The idea was that this would be used for the same sort of thing as an "expert system" or other logic-based AI systems, but taking into account uncertainty expressed probabilistically.

Later, people including Pearl noticed that you can, and often should, interpret the arrows as causal; this amounts to choosing one DAG from the many. The fact that there are many possible DAGs is related to the fact that there are seemingly always multiple incompatible causal stories that can explain the observations, absent additional assumptions about the world. But if you pick one, you can start using it to see whether your causal question can be answered from observational data alone.

Finally, he realized that the assumptions encoded in a DAG alone aren't sufficient for fully general counterfactuals: in full generality, you have to specify exactly what functional relationship goes along each edge of the graph.
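
A minimal structural causal model sketch (generic textbook form, nothing specific to Pearl's history): every variable is a function of its parents plus an exogenous noise term, and it is these functions, not just the DAG, that pin down counterfactuals.

```latex
% SCM over the DAG X -> Y:
X := f_X(U_X), \qquad Y := f_Y(X, U_Y)
% do(X = x) replaces the equation for X, leaving f_Y intact.
% The counterfactual Y_{X=x}(u) re-evaluates Y with X set to x but with the
% same exogenous values u inferred from the observed data.
```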

As someone originally concerned with AI, not with problems in the natural sciences, Pearl is probably unusual. Pearl himself looks back on Sewall Wright, who was working in genetics, as his progenitor for coming up with path diagrams. If you are interested in this, you should also look at Don Rubin's experience; his causal framework is isomorphic to Pearl's, but he was a 100 percent classic statistician, motivated by looking at medical studies.