Comments

Comment by Edward Kmett (edward-kmett) on The Waluigi Effect (mega-post) · 2023-03-27T11:28:57.739Z · LW · GW

The problem is the lack of narrative heel-face turns for truly deceptive characters. Once a character reveals they've been secretly a racist, evil, whatever, they rarely flip to good and honest spontaneously without a huge character arc.

Comment by Edward Kmett (edward-kmett) on Why I'm joining Anthropic · 2023-01-05T04:58:26.750Z · LW · GW

Time to update my position on 

Comment by Edward Kmett (edward-kmett) on Discovering Language Model Behaviors with Model-Written Evaluations · 2022-12-21T14:49:11.919Z · LW · GW

Great work!

One of these days I hope Evan collaborates on a paper that gives me more reason to expect a brighter future -- beyond surfacing latent issues that we really need to pay attention to now for said future to be realizable!

Today is not one of those days.

That said, seeing all the emergent power-seeking behaviors laid out is quite depressing.

Comment by Edward Kmett (edward-kmett) on Theses on Sleep · 2022-03-22T12:22:58.791Z · LW · GW

I heavily sympathize with a lot of the views from this post. 

I used to sleep much more (~9 hours), but as I've aged, I now tend to sleep between 3-5 hours a night. This was a rather conscious choice on my part, but now I find it hard to revert to my previous behavior. I switched to various forms of polyphasic sleep during my bender through academia from 2004-2006, and while I eventually abandoned polyphasic sleep, I haven't gone back to a "regular" sleep schedule since.

I do find that at my most acute stage of sleep deprivation I become much more mono-focused. I have to stay interested to stay awake, so I dive into everything my ADHD wants me to focus on to stay locked in. As long as I can ride this mode of operation, I can stay awake for a truly ridiculous number of hours without dipping into modafinil/adrafinil or other anti-sleep agents.

At the risk of dipping into anecdote territory, my biggest concern with having "lived this" for the last decade or two is the apparent sharp impact on cortisol levels. 

I gained a ton of productivity for the first decade or so of this practice, but then failed to adequately manage my weight. Now I have a number of health issues that correspond to chronic inflammation.

I still consider myself, on net, to have gained from 'front-loading' my conscious time on earth, but I do feel the need to consider these longer-term impacts on cortisol and core body fat. Recently I've been trying to find the right middle ground that gives my body enough time to recover from exercise while still retaining high focus, e.g. using semaglutide off-label to bring my weight under control.

If I had it to do over again, I would probably try to be a lot more careful about monitoring such things BEFORE I reached the point of having to take more drastic steps to bring things back into line.

Comment by Edward Kmett (edward-kmett) on What are good election betting opportunities? · 2020-11-03T19:54:04.502Z · LW · GW

I was able to complete the transactions on the "What will be the Electoral College margin in the 2020 presidential election?" side, but not on the side betting on the election itself.

Comment by Edward Kmett (edward-kmett) on The rationalist community's location problem · 2020-10-11T07:49:13.465Z · LW · GW

True. Sorry. My baseline for that passing tax comment was the previous clause about New Hampshire, as it seems to be a significant part of the argument trotted out in favor of New Hampshire over all the other spots scattered around Boston, e.g. northern or western MA, New Haven, Providence, etc.

I do agree that it is, as you point out, almost as strong a strike against my Ann Arbor narrative.

Comment by Edward Kmett (edward-kmett) on The rationalist community's location problem · 2020-10-09T23:03:13.846Z · LW · GW

Madison checks most of the same cultural boxes, but it loses out on the ease of international air travel.

Comment by Edward Kmett (edward-kmett) on The rationalist community's location problem · 2020-10-09T22:20:28.442Z · LW · GW

My working model of a good location is either in or around Ann Arbor.

Travel is going to be a concern for any location, I think. Why? You want visiting scholars, the ability to reach out to other organizations, and the ability for folks who have become sort of part of the rationalist diaspora to physically reach out and connect. You may not want to be in the major city, but ready access to an international airport seems like a good filter, as the farther away the nearest one is, the steeper the gradient to get anyone to come visit.

If you run through a list of hub airports and rule out the west coast for fires and much of the south due to hurricanes, you're left with a pretty short list of cities and very few with good nearby colleges that might be cultural fits:

American Airlines:

  • New York LaGuardia Airport (LGA)
  • New York John F. Kennedy International Airport (JFK)
  • Philadelphia International Airport (PHL)
  • Washington Ronald Reagan National Airport (DCA)
  • Charlotte Douglas International Airport (CLT)
  • Miami International Airport (MIA)
  • Chicago O'Hare International Airport (ORD)
  • Dallas-Ft. Worth International Airport (DFW)
  • Phoenix Sky Harbor International Airport (PHX)
  • Los Angeles International Airport (LAX)

United:

  • Newark Liberty International Airport (EWR)
  • Washington Dulles International Airport (IAD)
  • Chicago O'Hare International Airport (ORD)
  • Houston George Bush Intercontinental Airport (IAH)
  • Denver International Airport (DEN)
  • San Francisco International Airport (SFO)
  • Los Angeles International Airport (LAX)

Delta:

  • Boston Logan International Airport (BOS)
  • New York LaGuardia Airport (LGA)
  • New York John F. Kennedy International Airport (JFK)
  • Detroit Metropolitan Airport (DTW)
  • Cincinnati/Northern Kentucky International Airport (CVG)
  • Hartsfield-Jackson Atlanta International Airport (ATL)
  • Minneapolis-St. Paul International Airport (MSP)
  • Salt Lake City International Airport (SLC)
  • Seattle-Tacoma International Airport (SEA)
  • Los Angeles International Airport (LAX)

I'm ignoring Southwest as they don't have "hubs" per se, and smaller regional airlines.

If you rule out most of the west coast due to ongoing fire troubles, and choose not to go south to avoid hurricane country, you're left with mostly DTW, ORD, PHL, BOS, CVG, or IAD. ORD comes with some serious unrest and governance issues that give me pause. PHL is also a bit of a hotbed. BOS means you probably wind up an hour and a half plus out in New Hampshire, or dealing with Massachusetts taxes.

DTW seems to be the only one where you get a good college town (Ann Arbor) within a short drive, if you want folks to have a social life and access to a local talent pool but also don't want to be _in_ the college part of the city over unrest concerns. So with that in mind, my working model of a good location is either in or around Ann Arbor.

It gets you a half hour from an international airport, DTW, which is a hub airport for Delta, meaning travel for visiting scholars and for folks on the fringe of the community is easy, and stays single airline, covering the US, Europe, and Australia (2 hops, but same airline).

If you want to be able to isolate away from people in anticipation of either ongoing COVID concerns or another COVID-like problem or unrest, 5 minutes out of Ann Arbor, either east into Superior Township or west, you hit farm country, and can buy lots of space. Go a bit north and you get some nicer lakefront places.

There is a sharp political gradient which gives me pause about flashpoint concerns; however, most of that ire is directed at Lansing, not Ann Arbor. It does mean that you can pretty much pick the politics of your neighbors based on where you plant your flag, though.

Downsides:

There isn't any good public transit link to DTW. (Yes, there is a bus or something, but I've never seen anyone make it work.) But Uber still exists for now, and cabs will likely exist after.

There are some anti-mask whack-jobs in the state legislature. On the other hand, alternatives discussed here so far aren't any better on that front. e.g. New Hampshire has no state mask mandate either.

All of the above travel analysis is contingent on airlines still being a going concern, of course, which will probably be a function of how long it takes for things to approximate "normal".

Thoughts on other locations:

Vancouver: Has all the breathing problems of California, and adds Canadian immigration.

Hamilton or Waterloo: Canadian immigration, no easy travel.

Seattle: Still trouble breathing, a bit cheaper than the Bay, but if you're trying to escape feeling like the world is on fire it doesn't seem to check that box. Also, CHAZ, for good or ill.

New Hampshire: Gives access to MIT instead of the University of Michigan, which is admittedly a better cultural fit, but is significantly further away. Though Dartmouth and UNH might fill the gap somewhat. Politically it seems a bit more stable, which I confess may be a strong consideration.

Austin checks a lot of the same boxes, except for the hub airport one, and is arguably a better cultural fit. There was some talk in 2018 of Delta making it a "mini-hub", but who knows where that went. I don't have enough travel experience in/out of Austin to compare.

Comment by Edward Kmett (edward-kmett) on Are we in an AI overhang? · 2020-08-09T20:04:43.638Z · LW · GW

Networking 500 V100s together is one challenge, but networking 500k V100s is another entirely.

Even if you might have trouble networking a 100x larger system together for training, you can train the smaller network 100 times over and stitch the answers together using ensemble methods, making decent use of the extra compute. It may not be as good as growing the network by that full factor, but if you have extra compute beyond the cap of whatever connected-enough training system you can muster, there are worse ways to spend it.
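
As a minimal illustrative sketch (my own toy example, not anything from the original post): averaging the outputs of many independently trained small models is about the simplest way to turn "train the same network 100 times" into a single better predictor.

```python
import numpy as np

# Toy sketch (assumed setup): ensemble 100 small models by averaging their
# predicted class probabilities. Each "model" here is just a stand-in callable
# returning a fixed distribution over 3 classes.

def ensemble_predict(models, x):
    """Average predicted probabilities across independently trained models."""
    probs = np.stack([model(x) for model in models])  # (n_models, n_classes)
    return probs.mean(axis=0)

rng = np.random.default_rng(0)
models = [lambda x, w=rng.dirichlet(np.ones(3)): w for _ in range(100)]

print(ensemble_predict(models, x=None))  # the averaged distribution
```

In practice you might average logits, vote, or train a small combiner on top, but this is the basic shape of spending surplus compute on breadth rather than depth.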

I am somewhat more prone to think that more selective attention (e.g. Big Bird's block-random attention model) could bring down the quadratic cost in the window size quickly enough to be a factor here. Replacing a quadratic term with a linear, n log n, or heck, even an n^1.85 term goes a long way when billions are on the table.
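
A rough back-of-the-envelope sketch of why that matters (my own numbers; the block size and blocks-per-token are assumptions, loosely in the spirit of Big Bird's block-sparse pattern, not figures from the post):

```python
# Compare the size of the attention score computation per layer/head:
# full attention is O(n^2); block-sparse attention, where each token attends
# to a handful of fixed-size blocks (local + random + global), is roughly O(n).

def full_attention_cost(n):
    return n * n  # every token attends to every token

def block_sparse_cost(n, block_size=64, blocks_per_token=3):
    return n * block_size * blocks_per_token  # each token sees ~3 blocks

for n in (4_096, 65_536, 1_048_576):
    full = full_attention_cost(n)
    sparse = block_sparse_cost(n)
    print(f"n={n:>9,}: full={full:.2e}  sparse={sparse:.2e}  ratio={full / sparse:,.0f}x")
```

At a million-token window the quadratic term is several thousand times larger than the block-sparse one, which is exactly the kind of constant that moves budgets when billions are on the table.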

Comment by Edward Kmett (edward-kmett) on AIRCS Workshop: How I failed to be recruited at MIRI. · 2020-01-07T19:05:07.816Z · LW · GW

Congratulations on ending my long-time LW lurker status and prompting me to comment for once. =)

I think Ben's comment hits pretty close to the state of affairs. I have been internalizing MIRI's goals and looking for obstacles in the surrounding research space that I can knock down to make their (our?) work go more smoothly, either in the form of subgoals or by backwards-chaining from required capabilities to get a sense of how to proceed.

Why do I work around the edges? Mostly because if I take the vector along which I'm trying to push the world and the direction MIRI is trying to push the world, and project one onto the other, this currently seems to be the approach that maximizes the dot product. Some of what I want to do doesn't seem to work behind closed doors, because I need a lot more outside feedback, or have projects that just need to bake more before being turned in a more product-like direction.

Sometimes I lend a hand behind the curtain where one of their projects comes close to something I have experience to offer; other times, it's trying to offer a reframing of the problem, or offering libraries or connections to folks that they can build atop.

I can say that I've very much enjoyed working with MIRI over the last 16 months or so, and I think the relationship has been mutually beneficial. Nate has given me a lot of latitude in choosing what to work on and how I can contribute, and I have to admit he couldn't have come up with a more effective way to leash me to his cause had he tried.

I've attended so many of these AIRCS workshops in part to better understand how to talk about AI safety. (I think I've been to something like 8-10 of them so far?) For better or worse, I have a reputation in the outside functional programming community, and I'd hate to give folks the wrong impression of MIRI simply by dint of my having done insufficient homework, so I've been using this as a way to sharpen my arguments and gather a more nuanced understanding of which ways of talking about AI safety, rationality, 80,000 Hours, EA, x-risks, etc. work for my kind of audience, and which seem to fall short, especially when talking to the kind of hardcore computer scientists and mathematicians I tend to talk to.

Arthur, I greatly enjoyed interacting with you at the workshop -- who knew that an expert on logic and automata theory was exactly what I needed at that moment!? -- and I'm sorry that the MIRI recruitment attempt didn't move forward.