Comment by larks on Financial engineering for funding drug research · 2019-05-17T03:04:28.949Z · score: 5 (3 votes) · LW · GW

Is this very different from founding a pharmaceutical company?

Comment by larks on Strategic implications of AIs' ability to coordinate at low cost, for example by merging · 2019-05-02T20:51:24.945Z · score: 10 (2 votes) · LW · GW

Critch wrote a related paper:

Existing multi-objective reinforcement learning (MORL) algorithms do not account for objectives that arise from players with differing beliefs. Concretely, consider two players with different beliefs and utility functions who may cooperate to build a machine that takes actions on their behalf. A representation is needed for how much the machine’s policy will prioritize each player’s interests over time. Assuming the players have reached common knowledge of their situation, this paper derives a recursion that any Pareto optimal policy must satisfy. Two qualitative observations can be made from the recursion: the machine must (1) use each player’s own beliefs in evaluating how well an action will serve that player’s utility function, and (2) shift the relative priority it assigns to each player’s expected utilities over time, by a factor proportional to how well that player’s beliefs predict the machine’s inputs. Observation (2) represents a substantial divergence from naive linear utility aggregation (as in Harsanyi’s utilitarian theorem, and existing MORL algorithms), which is shown here to be inadequate for Pareto optimal sequential decision-making on behalf of players with different beliefs.

Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making
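For intuition, observation (2) works like a Bayesian mixture update: each player's priority weight gets multiplied by the likelihood that player's beliefs assigned to the latest observation. A minimal sketch of that recursion, with made-up names and a two-player toy example (my paraphrase, not the paper's formalism):

```python
def update_weights(weights, beliefs, observation):
    # Scale each player's weight by how well that player's beliefs
    # predicted the machine's latest input, then renormalise.
    # beliefs[i] maps an observation to player i's predicted probability.
    scaled = [w * beliefs[i](observation) for i, w in enumerate(weights)]
    total = sum(scaled)
    return [w / total for w in scaled]

# Player 0 assigns "rain" probability 0.9; player 1 assigns it 0.2.
beliefs = [lambda o: 0.9 if o == "rain" else 0.1,
           lambda o: 0.2 if o == "rain" else 0.8]
print(update_weights([0.5, 0.5], beliefs, "rain"))
# -> [0.818..., 0.181...]: the better predictor's utility gets more priority.
```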

Comment by larks on Strategic implications of AIs' ability to coordinate at low cost, for example by merging · 2019-05-02T20:44:47.325Z · score: 2 (1 votes) · LW · GW

War only happens if two agents don’t have common knowledge about who would win (otherwise they’d agree to skip the costs of war).

They might also have poorly aligned incentives, like a war between two countries that allows both governments to gain power and prestige, at the cost of destruction that is borne by the ordinary people of both countries. But this sort of principal-agent problem also seems like something AIs should be better at dealing with.

Comment by larks on Literature Review: Distributed Teams · 2019-04-16T16:06:48.856Z · score: 12 (6 votes) · LW · GW

In light of this:

Build over-communication into the process.
In particular, don’t let silence carry information. Silence can be interpreted a million different ways (Cramton 2001).

Thanks for writing this! I found it very interesting, and I like the style. In particular, I hadn't properly appreciated how semi-distributed teams are worse than either extreme. It's disappointing to hear, but seemingly obvious in retrospect and good to know.

Comment by larks on 2018 AI Alignment Literature Review and Charity Comparison · 2019-01-05T18:45:23.520Z · score: 2 (1 votes) · LW · GW

Thanks for sharing, seems like a reasonable take to me.

Comment by larks on 2018 AI Alignment Literature Review and Charity Comparison · 2019-01-05T18:43:10.417Z · score: 2 (1 votes) · LW · GW

I definitely think being near AI hubs is helpful, and I'd be interested in supporting any credible new groups that started in other hubs.

Thanks for that extra info on CHAI staff. In general my objections to the bay area are partly about the EA/LW culture there, and partly about the broader culture. I did end up donating to CHAI despite this!

Comment by larks on 2018 AI Alignment Literature Review and Charity Comparison · 2019-01-05T18:22:33.116Z · score: 4 (2 votes) · LW · GW

Thanks! I have fixed most of the typos.

Comment by larks on In Defense of Finance · 2018-12-19T00:37:01.560Z · score: 3 (2 votes) · LW · GW

Especially as in recent years investor preferences for lower-risk, low-beta, low-vol, higher-yield stocks (like utilities and staples) have been well documented as strong.

One related fact is that (at least some of) the banks feel like they haven't gotten credit for the risk reduction they have achieved. Given their lower leverage now, they feel they should trade at higher multiples, whereas their multiples relative to the market have actually compressed versus pre-crisis. Of course, this may be because their risk was under-estimated pre-crisis, so their relative multiples were too high.

Comment by larks on In Defense of Finance · 2018-12-19T00:33:25.226Z · score: 18 (6 votes) · LW · GW

At least in public equity markets the size premium is very weak. My team looked at it and decided not to use it as a factor to guide our investing. It's closely correlated with the illiquidity premium, which I do believe in.

2018 AI Alignment Literature Review and Charity Comparison

2018-12-18T04:46:55.445Z · score: 185 (61 votes)
Comment by larks on Get genotyped for free ( If your IQ is high enough) · 2018-12-15T19:04:15.139Z · score: 2 (1 votes) · LW · GW

Update: Jeff says they only ever sent him part of his data.

Comment by larks on The Vulnerable World Hypothesis (by Bostrom) · 2018-11-17T14:48:14.663Z · score: 2 (1 votes) · LW · GW

Nick's space papers are largely about how to harvest large amounts of utility from the galaxy, not about how to increase humanity's robustness. And yes, there are some x-risks (including the one I am focused on) that space colonies do not help with, but the reader may not be convinced of these, so it is surely worth mentioning that some risks would be guarded against by interstellar diversification. If nothing else you should probably argue that space colonization is not an adequate solution for these reasons.

Comment by larks on Some cruxes on impactful alternatives to AI policy work · 2018-11-17T03:02:37.769Z · score: 28 (10 votes) · LW · GW
A study by the investment-research firm Strategas which was cited in The Economist and the Washington Post compared the 50 firms that spent the most on lobbying relative to their assets, and compared their financial performance against that of the S&P 500 in the stock market; the study concluded that spending on lobbying was a "spectacular investment" yielding "blistering" returns comparable to a high-flying hedge fund, even despite the financial downturn of the past few years.

I think I read this research while I was a Strategas client; if I'm remembering it correctly, it was extremely poorly done: a short backtest (just a few years), a garden of forking paths, etc. Most sell-side research is not epistemically rigorous, and Strategas is not one of the better firms. I would not put much weight on this research.

There is widespread agreement that a key ingredient in effective lobbying is money. This view is shared by players in the lobbying industry.

Well of course lobbyists would say they're worth the money!

Comment by larks on Values determined by "stopping" properties · 2018-03-22T23:33:37.355Z · score: 8 (2 votes) · LW · GW
This, by the way, explains my intuitive dislike for some types of moral realism. If there are true objective moral facts that humans can access, then whatever process counts as "accessing them" becomes... a local stopping condition for defining value.

I'm not sure I understand what you're getting at here. Yes, they are both local stopping conditions, but there seems to be a clear dis-analogy. The other local stopping conditions seem to be bad not because they are stopping conditions, but because most contemporary people don't want to end up as Lotus-Eaters, or as mindless outsourcers. We would oppose such a development even if it wasn't stable! For example, a future where we oscillate between lotus-eaters and mindless outsourcers seems about as bad as either individual scenario. So it's not really the stability we object to.

But in that case, it's not clear why we should be opposed to the moral-realist stopping condition. After all, many people would like to go there, even if we are presently a long way away.

Comment by larks on Open Thread, February 1-14, 2012 · 2018-03-06T04:01:16.507Z · score: 0 (0 votes) · LW · GW

the one academic doing good work in the area is Sheffer, who is running a longitudinal survey which may or may not have enough statistical power to rule out particularly dramatic variances in outcomes. (Sheffer mentions the selection bias problem but seems to have the attitude that it's not a problem for her work.)

Was there any follow-up here?

Comment by larks on Experiences in applying "The Biodeterminist's Guide to Parenting" · 2018-01-08T02:13:07.247Z · score: 0 (0 votes) · LW · GW

The Biodeterminist's Guide is now 5 years old. Does anyone know of an updated version?

Comment by larks on 2017 AI Safety Literature Review and Charity Comparison · 2017-12-30T20:21:35.693Z · score: 8 (2 votes) · LW · GW

Thanks, I'm honoured! I've sent you a private message.

2017 AI Safety Literature Review and Charity Comparison

2017-12-24T18:52:31.816Z · score: 75 (25 votes)

2018 AI Safety Literature Review and Charity Comparison

2017-12-20T22:04:47.174Z · score: 2 (2 votes)
Comment by larks on More Dakka · 2017-12-09T16:43:28.394Z · score: 5 (3 votes) · LW · GW

In our house we started a tradition of holding hands and taking turns saying something we're grateful for before dinner each night. We then soft-evangelise this by having guests over and including them - most notably with hundreds of people at our wedding.

Comment by larks on Inadequacy and Modesty · 2017-11-01T02:00:03.086Z · score: 3 (1 votes) · LW · GW

Yup, just logged back in to make that guess. Would also explain the Japan commentary.

Comment by larks on Inadequacy and Modesty · 2017-10-29T23:58:10.382Z · score: 9 (3 votes) · LW · GW

Great article, and I'm glad to see you've returned to Less(er)wrong.

One very very small question: speaking as one of the hedge fund guys you mention, who happened to be long MSFT into a very successful quarter on Friday, why did your Microsoft example use a share price of $37.70? We're at $83.81 now!

Comment by larks on Multidimensional signaling · 2017-10-17T00:08:55.941Z · score: 3 (2 votes) · LW · GW

Presumably it would also lead us to think that having lots of free time, or being very concerned about [clothes/wit/grades], was better - but this does not seem to be obviously the case.

Comment by larks on Beta - First Impressions · 2017-10-14T03:32:24.726Z · score: 1 (1 votes) · LW · GW

First of all, thanks for making this all! :)

One suggestiong: coudl we hvae a sepll-ckecher for the cmmonent box?

Comment by larks on There's No Fire Alarm for Artificial General Intelligence · 2017-10-14T03:21:35.921Z · score: 26 (16 votes) · LW · GW

Occasionally we run surveys of ML people. Would it be worth asking them what their personal fire alarm would be, or what they are confident will not be achieved in the next N years? This would force them to take a mental stance that might help produce some cognitive dissonance later, and would also allow us to follow up with them.

Comment by larks on June 2017 Media Thread · 2017-06-03T22:11:15.683Z · score: 0 (0 votes) · LW · GW

Why is there no way to downvote, report or otherwise punish this comment?

Comment by Larks on [deleted post] 2017-06-02T02:03:03.335Z

I think Plato fans would probably argue I'm being somewhat unfair. If nothing else, the society described was intended as a metaphor for the virtuous person, not necessarily as an actually good society in itself.

More relevantly, I didn't intend for this to be a major criticism of your endeavor. I think if you can avoid sexual conflict (for which I recommend celibacy on your part) this could be worthwhile for (some) people.

Comment by Larks on [deleted post] 2017-06-01T03:07:24.362Z

The section on goals reminded me a little of Plato's Republic. The perfect society involves sacrificing all wealth, art, free expression, and what does it offer in return?

Victory in war against similar-sized enemies.

Comment by Larks on [deleted post] 2017-05-28T21:46:11.295Z

One idea that is probably necessary but not sufficient is for the Commander (and anyone else with any authority in the house) to have an absolute commitment not to sleep with anyone else in the house.

Edit: with this rule, a different/earlier version of me might have been interested. Without it I would never be.

Comment by larks on Thoughts on civilization collapse · 2017-05-07T18:51:11.552Z · score: 3 (3 votes) · LW · GW

Cities, with their large and varied-skill workforce, will suffer less than the countryside.

Cities have a large and varied workforce, but many of their skills rely on civilisation remaining intact. Tax lawyers, bartenders, yoga instructors, investment bankers etc. all seem like they would be more of a liability than an asset in such a scenario. The countryside, by contrast, has skills more focused on food production, and its lower population density reduces the risk of food riots.

Comment by larks on survey about biases in the investment context · 2017-03-25T18:16:18.753Z · score: 0 (0 votes) · LW · GW

One of the pages crashed on me. When I refreshed the page I got the next question (5) I think, and this error message.

Comment by larks on 80,000 Hours: EA and Highly Political Causes · 2017-01-27T02:59:55.919Z · score: 5 (5 votes) · LW · GW

You should post this on the EA forum

http://effective-altruism.com/

2016 AI Risk Literature Review and Charity Comparison

2016-12-15T00:19:21.966Z · score: 7 (8 votes)
Comment by larks on CFAR’s new focus, and AI Safety · 2016-12-03T04:12:03.324Z · score: 11 (11 votes) · LW · GW

headline: CFAR considering colonizing Antarctica.

Comment by larks on Google Deepmind and FHI collaborate to present research at UAI 2016 · 2016-11-23T02:14:37.633Z · score: 1 (1 votes) · LW · GW

Hey Stuart,

It seems like much of the press around this paper discussed it as a 'big red button' to turn off a rogue AI. This would be somewhat in line with your previous work on limited-impact AIs that are indifferent to being turned off, but it doesn't seem to really describe this paper. My interpretation is that it doesn't make the AI indifferent to interruption, or prevent the AI from learning about the button - it just helps the AI avoid a particular kind of distraction during the training phase. Being able to implement the interruption is a separate issue - but it seems that designing a form of interruption that the AI won't try to avoid is the tough problem. Is this reading right, or am I missing something?

Comment by larks on Google Deepmind and FHI collaborate to present research at UAI 2016 · 2016-07-12T00:18:07.817Z · score: 0 (0 votes) · LW · GW

Yup, I think I understand that, and agree it needs to at least tend to one. I'm just wondering why you initially use the looser definition of theta (where it doesn't need to tend to one, and can instead be just 0).

Comment by larks on Google Deepmind and FHI collaborate to present research at UAI 2016 · 2016-07-10T03:33:57.592Z · score: 0 (0 votes) · LW · GW

Very interesting paper, congratulations on the collaboration.

I have a question about theta. When you initially introduce it, theta lies in [0,1]. But it seems that if you choose theta_n = 0 for all n, just a sequence of 0s, all policies are interruptible. Is there much reason to initially allow such a wide-ranging theta - why not restrict it to converge to 1 from the very beginning? (Or have I just totally missed the point?)
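To make my confusion concrete, here is a minimal sketch of how I'm reading the interruption scheme (the function names and the particular schedules are mine, not the paper's):

```python
import random

def interrupted_step(agent_action, safe_action, theta_t):
    # With probability theta_t, override the agent's chosen action
    # with the designated interruption/safe action.
    return safe_action if random.random() < theta_t else agent_action

# theta_n = 0 for all n: the override never fires, so the
# interruptibility condition holds vacuously for any policy.
theta_zero = lambda n: 0.0

# theta_n -> 1: interruption eventually succeeds almost surely,
# e.g. theta_n = 1 - 1/(n + 2).
theta_to_one = lambda n: 1.0 - 1.0 / (n + 2)
```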

Comment by larks on [Link] Mutual fund fees · 2016-04-29T01:35:52.369Z · score: 0 (0 votes) · LW · GW

Yes, I agree it's possible to do them correctly. But few people do, and finding positive results is so much more likely if you do them wrong that poor methodology should be the default explanation for any such positive result.

Comment by larks on [Link] Mutual fund fees · 2016-04-27T23:42:37.943Z · score: 1 (1 votes) · LW · GW

They don't mention being survivorship-bias free, which I would expect them to if they were.

Comment by larks on [Link] Mutual fund fees · 2016-04-27T00:47:58.885Z · score: 1 (1 votes) · LW · GW

I think this is very likely. When going to label funds, naturally currently existing ones come to mind - but these are the survivors. Failed activist funds don't leave much of a track record.
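To illustrate the mechanism with a toy simulation (entirely made-up parameters, not real data): funds with zero true edge look profitable once the blow-ups drop out of the sample.

```python
import random

random.seed(0)

# 1,000 funds, 10 years each, true mean annual return of 0%.
funds = [[random.gauss(0.0, 0.15) for _ in range(10)] for _ in range(1000)]

# A fund "fails" and vanishes from the database if it ever loses 20%+ in a year.
survivors = [f for f in funds if min(f) > -0.20]

avg = lambda xs: sum(xs) / len(xs)
print(f"all funds: {avg([avg(f) for f in funds]):+.2%}")     # ~0%
print(f"survivors: {avg([avg(f) for f in survivors]):+.2%}") # noticeably positive
```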

Comment by larks on Open Thread April 11 - April 17, 2016 · 2016-04-13T23:48:19.552Z · score: 3 (3 votes) · LW · GW

Did it work? While I would give him full credit, I can easily imagine many teachers not approving.

Comment by larks on AlphaGo versus Lee Sedol · 2016-03-10T03:10:01.386Z · score: 1 (1 votes) · LW · GW

What do you mean by "one handshake"?

Comment by larks on What is the future of nootropic drugs? Why can't there be ones more effective than ones that have existed for 15+ years? · 2016-03-07T03:06:39.214Z · score: 0 (0 votes) · LW · GW

The FDA will not give you a patent

The issue isn't patents (which are not awarded by the FDA anyway) but whether they will give your drug approval to market.

Comment by larks on Open thread, Oct. 26 - Nov. 01, 2015 · 2015-10-27T00:06:30.149Z · score: 5 (5 votes) · LW · GW

Has anyone been to the Young Cryonicists Gathering? Is it worth going to? Anyone planning on attending the one in California in April?

Previous coverage on LW: positive and negative.

Comment by larks on Median utility rather than mean? · 2015-09-09T01:11:31.310Z · score: 0 (0 votes) · LW · GW

In finance we use medians a lot more than means.

Comment by larks on Experiences in applying "The Biodeterminist's Guide to Parenting" · 2015-09-05T21:16:42.193Z · score: 0 (0 votes) · LW · GW

I'm looking to buy a sofa without flame retardants. The Center for Environmental Health suggests that all IKEA products are fine, but at least as of 2012 it seems that they had instead substituted another chemical flame retardant, TRIS. Does anyone know if IKEA furniture is now free of chemical flame retardants, or if there are any other good options below $1,000?

Comment by larks on Stupid Questions September 2015 · 2015-09-05T21:09:12.954Z · score: 1 (1 votes) · LW · GW

US firms? Your main China exposure is going to come from your Aussie mining exposure.

Comment by larks on A list of apps that are useful to me. (And other phone details) · 2015-08-22T16:40:15.764Z · score: 1 (1 votes) · LW · GW

I just installed 6 apps.

  • CPU Thermometer
  • PowerCalc
  • Advanced Signal Status
  • Compass
  • First Aid (US Red Cross)
  • Heart Rate

Thanks for writing this.

Comment by larks on Yvain's most important articles · 2015-08-16T14:25:57.194Z · score: 12 (14 votes) · LW · GW

I thought the Biodeterminist's Guide was one of the most useful things I've ever read. I'd love it if Yvain would write the same for longevity, general fitness, IQ, etc.

Comment by larks on Ideological Turing Test Domains · 2015-08-08T10:08:30.106Z · score: 2 (2 votes) · LW · GW
  • Eugenics
  • Foreign Intervention
  • Capital Gains taxation
  • Cryonics
  • Temporal Discounting
Comment by larks on Experiences in applying "The Biodeterminist's Guide to Parenting" · 2015-07-18T02:12:49.316Z · score: 4 (6 votes) · LW · GW

Excellent post! Thanks for sharing.

Comment by larks on Crazy Ideas Thread · 2015-07-09T22:49:32.228Z · score: 1 (1 votes) · LW · GW

Me too.

Comment by larks on Effective Altruism vs Missionaries? Advice Requested from a Newly-Built Crowdfunding Platform. · 2015-07-01T01:00:36.778Z · score: 8 (8 votes) · LW · GW

saving lives and saving souls are nearly equally important.

If souls actually exist (and could go to heaven and hell) then saving souls is far more important than saving lives! Your disagreement with them is surely not about relative importance, it is about ontology.

Comment by larks on Open Thread, Jun. 22 - Jun. 28, 2015 · 2015-06-23T23:26:54.503Z · score: 0 (0 votes) · LW · GW

fbpvny pbafreingvirf

Contrarian LW views and their economic implications

2014-10-08T23:48:04.250Z · score: 21 (19 votes)

Confirmation Bias Presentation

2014-06-05T21:07:31.918Z · score: 2 (3 votes)

Meetup : Princeton NJ Meetup

2014-03-23T00:22:06.409Z · score: 1 (2 votes)

Meetup : Princeton NJ Meetup

2014-02-02T22:32:10.247Z · score: 0 (1 votes)

Meetup : Princeton NJ Meetup

2013-10-22T02:10:25.174Z · score: 2 (3 votes)

Giving What We Can September Internship

2013-02-18T20:03:56.667Z · score: 4 (7 votes)

[minor] Separate Upvotes and Downvotes Implimented

2013-01-29T10:31:21.726Z · score: 29 (30 votes)

[Link]: 80,000 hours blog

2012-02-26T14:34:58.457Z · score: 20 (23 votes)

Counterfactual Coalitions

2012-02-16T21:42:52.639Z · score: 23 (23 votes)

Best Intro to LW article for transhumanists

2011-10-28T02:30:35.471Z · score: 6 (7 votes)

In Defense of Objective Bayesianism: MaxEnt Puzzle.

2011-01-06T00:56:50.739Z · score: 6 (7 votes)

Link: Facing the Mind-Killer

2010-12-18T00:57:08.114Z · score: 8 (15 votes)

Oxford (UK) Rationality & AI Risks Discussion Group

2010-11-02T19:10:30.494Z · score: 2 (5 votes)

A Player of Games

2010-09-23T22:52:38.849Z · score: 15 (24 votes)

Burning Man Meetup: Bayes Camp

2010-08-25T06:14:54.005Z · score: 16 (19 votes)