Posts

GPT-3, belief, and consistency 2020-08-16T23:12:10.659Z
Where do people discuss doing things with GPT-3? 2020-07-26T14:31:11.721Z
Replicating the replication crisis with GPT-3? 2020-07-22T21:20:34.865Z
Charity to help people get US stimulus payments? 2020-03-27T03:16:38.130Z
Frivolous speculation about the long-term effects of coronavirus 2020-03-15T19:12:39.444Z
Coronavirus tests and probability 2020-03-11T23:09:47.701Z
Remove Intercom? 2017-11-07T04:35:26.399Z

Comments

Comment by skybrian on No Causation without Reification · 2020-10-24T22:50:23.942Z · LW · GW

What’s an example of a misconception someone might have due to having a mistaken understanding of causality, as you describe here?

Comment by skybrian on The bads of ads · 2020-10-24T22:18:03.079Z · LW · GW

This is a bizarre example, sort of like using Bill Gates to show why nobody needs to work for a living. It ignores the extreme inequality of fame.

Tesla doesn’t need advertising because they get huge amounts of free publicity already, partly due to having interesting, newsworthy products, partly due to having a compelling story, and partly due to publicity stunts.

However, this free publicity is mostly unavailable for products that are merely useful without being newsworthy. There are millions of products like this. An exciting product might not need advertising, but exciting isn't the same as useful.

So it seems like the confidence to advertise a boring product might be a signal of sorts? However, given that people in business are often unreasonably optimistic, it doesn't seem like a particularly strong one. Faking confidence happens quite a lot.

Comment by skybrian on Babble & Prune Thoughts · 2020-10-15T21:25:54.965Z · LW · GW

It seems like some writers have habits to combat this, like writing every day or writing so many words a day. As long as you meet your quota, it’s okay to try harder.

Some do this in public, by publishing on a regular schedule.

If you write more than you need, you can prune more to get better quality.

Comment by skybrian on Exposure or Contacts? · 2020-08-22T19:42:12.898Z · LW · GW

One aspect that might be worth thinking about is the speed of spread. If infection arises at a random point between meetings, seeing someone once a week delays transmission to you by 3.5 days on average, while seeing them once a month delays it by 15 days on average. It also seems like they are more likely to find out they have it before they spread it to you?
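
A minimal sketch of the arithmetic, assuming infection arises at a uniformly random moment within the contact interval (the function and trial count here are just for illustration):

```python
import random

def average_delay(interval_days: float, trials: int = 100_000) -> float:
    """Average wait until the next meeting, assuming infection occurs
    at a uniformly random moment within the contact interval."""
    return sum(random.uniform(0, interval_days) for _ in range(trials)) / trials

print(average_delay(7))   # ~3.5 days for weekly contact
print(average_delay(30))  # ~15 days for monthly contact
```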

Comment by skybrian on GPT-3, belief, and consistency · 2020-08-17T03:21:58.289Z · LW · GW

Yes, sometimes we don't notice. We miss a lot. But there are also ordinary clarifications like "did I hear you correctly?" and "what did you mean by that?" Noticing that you didn't understand something isn't rare. If we didn't notice when something seemed absurd, jokes wouldn't work.

Comment by skybrian on GPT-3, belief, and consistency · 2020-08-17T00:52:50.162Z · LW · GW

It's not quite the same, because if you're confused and you notice you're confused, you can ask. "Is this in American or European date format?" For GPT-3 to do the same, you might need to give it some specific examples of resolving ambiguity this way, and it might only do so when imitating certain styles.

It doesn't seem as good as a more built-in preference for noticing and wanting to resolve inconsistency? Choosing based on context is built in using attention, and choosing randomly is built in as part of the text generator.

It's also worth noticing that the GPT-3 world is the corpus, and a web corpus is an inconsistent place.

Comment by skybrian on 10/50/90% chance of GPT-N Transformative AI? · 2020-08-11T18:39:53.628Z · LW · GW

Having demoable technology is very different from having reliable technology. Take the history of driverless cars. Five teams completed the second DARPA Grand Challenge in 2005. Google started development secretly in 2009 and announced the project in October 2010. Waymo started testing without a safety driver on public roads in 2017. So we've had driverless cars for a decade, sort of, but we are much more cautious about allowing them on public roads.

Unreliable technologies can be widely used. GPT-3 is a successor to autocomplete, which everyone already has on their cell phones. Search engines don't guarantee results and neither does Google Translate, but they are widely used. Machine learning also works well for optimization, where safety is guaranteed by the design but you want to improve efficiency.

I think when people talk about a "revolution" it goes beyond the unreliable use cases, though?

Comment by skybrian on Where do people discuss doing things with GPT-3? · 2020-07-26T19:03:30.848Z · LW · GW

In that case, I'm looking for people sharing interesting prompts to use on AI Dungeon.

Comment by skybrian on Where do people discuss doing things with GPT-3? · 2020-07-26T18:09:36.894Z · LW · GW

Where is this? Is it open to people who don't have access to the API?

Comment by skybrian on GPT-3 Gems · 2020-07-24T20:04:46.687Z · LW · GW

I'm suggesting something a little more complex than copying. GPT-3 can give you a random remix of several different clichés found on the Internet, and the patchwork isn't necessarily at the surface level where it would come up in a search. Readers can be inspired by evocative nonsense. A new form of randomness can be part of a creative process. It's a generate-and-test algorithm where the user does some of the testing. Or, alternately, an exploration of Internet-adjacent story-space.

It's an unreliable narrator and I suspect it will be an unreliable search engine, but yeah, that too.

Comment by skybrian on Replicating the replication crisis with GPT-3? · 2020-07-24T15:49:38.331Z · LW · GW

I was making a different point, which is that if you use "best of" ranking then you are testing a different algorithm than if you're not using "best of" ranking. Similarly for other settings. It shouldn't be surprising that we see different results if we're doing different things.

It seems like a better UI would help us casual explorers share results in a way that makes trying the same settings again easier; one could hit a "share" button to create a linkable output page with all relevant settings.

It could also save the alternate responses that either the user or the "best-of" ranking chose not to use. Generate-and-test is a legitimate approach, if you do it consistently, but saving the alternate takes would give us a better idea of how good the generator alone is.
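
As a hypothetical sketch of what "best-of" ranking plus saved alternate takes could look like (the `generate` and `score` callables here are stand-ins, not OpenAI's actual, undocumented API):

```python
import random
from typing import Callable, List, Tuple

def best_of(generate: Callable[[], str],
            score: Callable[[str], float],
            n: int) -> Tuple[str, List[str]]:
    """Generate n candidates and return the top-scoring one along with
    all candidates, so the alternate takes can be saved and shared."""
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=score), candidates

# Toy stand-ins for a real sampler and ranking function.
takes = ["a short take", "a somewhat longer take", "the longest take of all"]
pick, all_takes = best_of(lambda: random.choice(takes), len, n=4)
print(pick)       # highest-scoring candidate
print(all_takes)  # everything generated, including the unused alternates
```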

Comment by skybrian on Replicating the replication crisis with GPT-3? · 2020-07-24T01:29:42.036Z · LW · GW

I don't see documentation for the GPT-3 API on OpenAI's website. Is it available to the public? Are they doing their own ranking or are you doing it yourself? What do you know about the ranking algorithm?

It seems like another source of confusion might be people investigating the performance of different algorithms and calling them all GPT-3?

Comment by skybrian on Replicating the replication crisis with GPT-3? · 2020-07-23T17:55:03.755Z · LW · GW

How do you do ranking? I'm guessing this is because you have access to the actual API, while most of us don't?

On the bright side, this could be a fun project where many of us amateurs learn how to do science better, but the knowledge of how to do that isn't well distributed yet.

Comment by skybrian on GPT-3 Gems · 2020-07-23T04:15:40.976Z · LW · GW

We take the web for granted, but maybe we shouldn't. It's very large and nobody can read it all. There are many places we haven't been that probably have some pretty good writing. I wonder about the extent to which GPT-3 can be considered a remix of the web that makes it seem magical again, revealing aspects of it that we don't normally see? When I see writing like this, I wonder what GPT-3 saw in the web corpus. Is there an archive of Tolkien fanfic that was included in the corpus? An undergrad physics forum? Conversations about math and computer science?

Comment by skybrian on To what extent is GPT-3 capable of reasoning? · 2020-07-22T21:00:13.361Z · LW · GW

Rather than putting this in binary terms (capable of reason or not), maybe we should think about what kinds of computation could result in a response like this?

Some kinds of reasoning would let you generate plausible answers based on similar questions you've already seen. People who are good at taking tests can get reasonably high scores on subjects they don't fully comprehend, basically through good bluffing and a bit of luck. Perhaps something like that is going on here?

In the language of "Thinking, Fast and Slow", this might be "System 1" style reasoning.

Narrowing down what's really going on probably isn't going to be done in one session or by trying things casually, particularly if you have randomness turned on; you'd want to get a variety of answers to understand the distribution.

Comment by skybrian on To what extent is GPT-3 capable of reasoning? · 2020-07-21T07:13:48.539Z · LW · GW

GPT-3 has partially memorized a web corpus that probably includes a lot of basic physics questions and answers. Some of the physics answers in your interview might be the result of web search, pattern match, and context-sensitive paraphrasing. This is still an impressive task but is perhaps not the kind of reasoning you are hoping for?

From basic Q&A it's pretty easy to see that GPT-3 sometimes memorizes not only words but short phrases like proper names, song titles, and popular movie quotes, and probably longer phrases if they are common enough.

Google's Q&A might seem more magical too if they didn't link to the source, which gives away the trick.

Comment by skybrian on What will the economic effects of COVID-19 be? · 2020-03-25T03:25:48.586Z · LW · GW

This is more about expanding on the question with some slightly more specific ones:

Currently it seems like there are many people who are not scared enough, but I wonder if sentiment could quickly go the other way?

A worst-case scenario for societal collapse is that some "essential" workers are infected and others decide that it is too risky to keep working, and there are not enough people to replace them. Figuring out which sectors might be most likely to have critical labor shortages seems important.

An example of a "labor" shortage might be a lack of volunteers for blood donations.

Other than that, logistical supply bottlenecks seem more of an issue?

It seems likely that supply will be more important than demand until the recovery phase, and then a big question will be to what extent people make persistent changes in their preferences. Going without stuff for a while might cause some reconsideration about how important it actually is. An example might be that more people learn to cook and decide they like it, or maybe they try Soylent or whatever. Or, perhaps exercising in a gym is less important for people who get into an exercise routine at home or outside?

Maybe private ownership of cars and suburban living (enforcing social distance) get a boost, along with increased remote work making it more practical. The costs of lower density living might not seem so pressing?

Comment by skybrian on Frivolous speculation about the long-term effects of coronavirus · 2020-03-16T23:48:41.490Z · LW · GW

Yeah, I don't see it changing that drastically; more likely it will be a lot of smaller and yet significant changes that make old movies look dated. Something like how the airports changed after 9/11, or more trivially, that time when all the men in America stopped wearing hats.

Comment by skybrian on Crisis and opportunity during coronavirus · 2020-03-13T19:04:10.174Z · LW · GW

I'm wondering what's a good way to keep better tabs on what people are talking about in the rationalist community without reading everything? There is a lot of speculation, but sometimes very useful signal.

I feel like I'm reasonably in touch from reading Slate Star Codex and occasionally checking in here, and yet the first thing I saw that really got my attention was "Seeing the Smoke" getting posted on Hacker News. I guess I'm not following the right people yet?

Comment by skybrian on Assembling Sets for Contra · 2020-02-20T06:34:57.325Z · LW · GW

I'm wondering if anyone can recommend some recordings that they like on YouTube or Spotify of this sort of music? I don't know if I've heard it before.

Comment by skybrian on [Meta] New moderation tools and moderation guidelines · 2018-02-18T05:30:30.162Z · LW · GW

I'm just a lurker, but as an FYI, on The Well, hidden comments were marked <hidden> (and clickable) and deleted comments were marked <scribbled> and it seemed to work out fine. I suppose with more noise, this could be collapsed to one line: <5 scribbled>.

Comment by skybrian on Security Mindset and the Logistic Success Curve · 2017-12-01T06:32:01.954Z · LW · GW

I mean things like using mathematical proofs to ensure that Internet-exposed services have no bugs that a hostile agent might exploit. We don't need to be able to build an AI to improve defences.

Comment by skybrian on Security Mindset and the Logistic Success Curve · 2017-11-28T21:33:59.544Z · LW · GW

I think odds are good that, assuming general AI happens at all, someone will build a hostile AI and connect it to the Internet. I think a proper understanding of the security mindset is that the assumption "nobody will connect a hostile AI to the Internet" is something we should stop relying on. (In particular, maintaining secrecy and international cooperation seems unlikely. We shouldn't assume they will work.)

We should be looking for defenses that aren't dependent on the IQ level of the attacker, similar to how mathematical proofs are independent of IQ. AI alignment is an important research problem, but it doesn't seem directly relevant for this.

In particular, I don't see why you think "routing through alignment" is important for making sound mathematical proofs. Narrow AI should be sufficient for making advances in mathematics.

Comment by skybrian on Security Mindset and the Logistic Success Curve · 2017-11-28T21:04:02.756Z · LW · GW

Even if there's no "friendly part," it seems unlikely that learning the basic principles behind building a friendly AI would prevent someone from building an unfriendly AI by accident. I'm happy that we're making progress with safe languages, but there is no practical programming language in which it's the least bit difficult to write a bad program.

It would make more sense to assume that at some point, a hostile AI will get an Internet connection, and figure out what needs to be done about that.

Comment by skybrian on Security Mindset and the Logistic Success Curve · 2017-11-27T02:29:57.228Z · LW · GW

I'm happy to see a demonstration that Eliezer has a good understanding of the top-level issues involving computer security.

One thing I wonder though, is why making Internet security better across the board isn't a more important goal in the rationality community? Although very difficult (for reasons illustrated here), it seems immediately useful and also a good prerequisite for any sort of AI security. If we can't secure the Internet against nation-state level attacks, what hope is there against an AI that falls into the wrong hands?

In particular, building "friendly AI" and assuming it will remain friendly seems naive at best, since it will be copied and then the friendly part will be modified by hostile actors.

It seems like someone with a security mindset will want to avoid making any assumption of friendliness and instead work on making critical systems that are simple enough to be mathematically proven secure. I wonder why this quote (from the previous post) isn't treated as a serious plan: "If your system literally has no misbehavior modes at all, it doesn't matter if you have IQ 140 and the enemy has IQ 160—it's not an arm-wrestling contest."

We are far from being able to build these systems but it still seems like a more plausible research project than ensuring that nobody in the world makes unfriendly AI.

Comment by skybrian on Open thread, November 13 - November 20, 2017 · 2017-11-14T22:22:27.122Z · LW · GW

Thanks! Bug filed. Regarding the Intercom chat bubble, I did post one comment a while back (accidentally in the wrong chat room for LessWrong), but got no response, and I don't see any other responses in either chat room. Also, the indicator always says "away". To the naive user it looks abandoned. Are you sure it's working? Maybe the old chat room should be deleted?

Comment by skybrian on Open thread, November 13 - November 20, 2017 · 2017-11-12T22:41:29.140Z · LW · GW

Where do we report bugs? For example, I was unable to leave a comment here using Chrome on an Android tablet. (Desktop is okay.)

Also, is source available? I might be able to make suggestions.

Comment by skybrian on Living in an Inadequate World · 2017-11-11T04:14:32.821Z · LW · GW

I'd like to see citations for the claims about manganese and selenium.

Comment by skybrian on The Journal of High Standards · 2017-11-10T04:47:46.863Z · LW · GW

Prize money helps, but you'd also need to find relevant experts who know enough about each sub-field to tell whether the standards are indeed high. (Usually they are called "judges," but perhaps we could call them "peers?")

It might help to narrow the question: instead of looking for "high standards" (which is vague), the prizes could be awarded based on whether papers already published elsewhere appear to use good statistics. Then you'd only need reviewers who are experts in statistics.

Comment by skybrian on The Craft & The Community - A Post-Mortem & Resurrection · 2017-11-07T03:55:58.303Z · LW · GW

From an outside (but sympathetic) perspective, it seems like this post would have been better if you had started out with "Why we're starting a new rationalist community in Manchester" and taken it from there? As it is, I wonder how many people made it to the end.

Comment by skybrian on Against naming things, and so on · 2017-11-06T04:56:23.376Z · LW · GW

I'm reminded of the Oblique Strategies playing cards. Obviously the cards don't provide any sort of rigor. But having them around might be useful for thinking creatively. Might the same apply for Less Wrong jargon?

Comment by skybrian on Moloch's Toolbox (1/2) · 2017-11-05T03:21:34.867Z · LW · GW

Looks like there is a detailed Wiki page about this.

Comment by skybrian on Moloch's Toolbox (1/2) · 2017-11-05T03:20:05.604Z · LW · GW

Yes, everything is terrible. But it seems like, if you're writing a book and discover something like the Omegaven story, it might be worth writing a blog post just about that and seeing if it can get some publicity via social media? (I settled for resharing the 2013 NBC article.)

Comment by skybrian on Inadequacy and Modesty · 2017-10-30T06:01:14.772Z · LW · GW

Maybe compare with epistemic learned helplessness?

http://squid314.livejournal.com/350090.html