Posts

Comments

Comment by han on April '17 I Care About Thread · 2017-04-21T00:52:07.934Z · score: 1 (1 votes) · LW · GW

There's a blogger you might enjoy reading whose name is Ramin Shokrizade: http://www.gamasutra.com/blogs/author/RaminShokrizade/914048/ . He's some kind of consultant for video game monetization schemes. I think he's a little bit hyperbolic and overwrought sometimes, but he has a lot of direct experience and textual evidence collected from other designers at companies like Zynga.

I think there are a lot of psych topics that are relevant for freemium games but not normal gambling, which means they're a great zone for research. Normal gambling games like poker, blackjack, slots, and lotteries tend to play the same way from round to round, which means they get a lot of addiction potential from normal variable reinforcement-type stuff. But freemium games are allowed to exhibit significant differences in play as time goes on, which means they can give the user some free wins to start with, some powerups that will eventually deplete, stuff like that. (There was a great wrestling game I ran into where the first three bosses could be beaten by mashing buttons and the fourth one was literally impossible without cheating -- too greedy, perhaps?)

Maybe the big important thing is that a lot of people are really loss-averse, and having some kind of state between rounds means the game can threaten to take things away from you if it wants to. In a lot of normal gambling situations, you can cut your losses and walk away. The sunk cost fallacy means many people are bad at deciding to do that, but a force even stronger than the sunk cost fallacy is loss aversion. Common example: "you've won an item, but your inventory is full -- delete something or you'll lose it forever."

Pachinko machines are more loosely regulated, and if I remember right, some even implement F2P-like loss-aversion schemes. Remember Mann Co. keys in Team Fortress 2, where the game presents you with the opportunity for a reward but disguises that opportunity as a reward in itself? At one point it was in vogue for pachinko machines to tell you you'd earned a jackpot, but make you play a ton of extra rounds (by getting lucky again) to "unlock" it -- effectively the same scheme, and not that evil by itself. What made it evil was that if you stopped playing the machine, anyone else could sit down at it and steal your jackpot.

Comment by han on Open thread, Apr. 17 - Apr. 23, 2017 · 2017-04-21T00:33:04.736Z · score: 0 (0 votes) · LW · GW

I think you're right. I'm badly overlooking a subtlety because I'm narrowing "describe" down to "is a suffix of." But you're right that "describe" can be extended to include a lot of other relationships between parts of the big sentence and little sentences, and you're also right that this argument doesn't necessarily apply if you unconstrain "describe" that way. (I haven't formalized exactly what you can constrain "describe" to mean -- only that there are definitions that obviously make our sledgehammer argument break.)

I think "a sentence can be countably infinite" is implicit from the problem description because the problem implies that our "giant block of descriptions" sentence probably has countably infinite size. (it can't exactly be uncountably infinite)

Comment by han on Open thread, Apr. 17 - Apr. 23, 2017 · 2017-04-20T15:49:21.323Z · score: 0 (0 votes) · LW · GW

I thought Gurkenglas' solution was a lovely discrete math sledgehammer approach. There are a lot of subtly different problems that Thomas could have meant, and I think Gurkenglas' approach would probably be enough to tackle most of them.

(Attempting to summarize his proof: Some English sentences, like the one this problem is asking you to dig around in, are countably infinite in length. If some English sentences are countably infinite in length, and any two of them have different infinite suffixes, then there's no way the text of this sentence contains both of them.)
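Slightly more formally (this is my own paraphrase of the suffix argument, not Gurkenglas' exact wording):

```latex
% Two infinite suffixes of the same string must be suffixes of each other.
Let $S = s_1 s_2 s_3 \ldots$ be a countably infinite string. Every infinite
suffix of $S$ has the form $S_n = s_n s_{n+1} s_{n+2} \ldots$ for some
$n \geq 1$. Given two such suffixes $S_m$ and $S_n$ with $m < n$, the string
$S_n$ is itself a suffix of $S_m$. So if two infinite sentences $A$ and $B$
are such that neither is a suffix of the other, at most one of them can
occur as a suffix of $S$.
```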

Comment by han on Thoughts on Automoderation · 2017-04-20T05:19:34.526Z · score: 0 (0 votes) · LW · GW

Not a long note or a detailed dissection, but just a reminder: whenever you take single-dimensional data and make it multidimensional, it becomes harder and more subjective to analyze. (EDIT: To clarify, you can represent multidimensional data multidimensionally. But mapping multidimensional data to a lower-dimensional space usually involves finding a fit, which can introduce error, and mapping it to a lower-dimensional space is usually an important step in explaining it.) I suspect that if you give people this many dimensions to respond along, you'll get lots of different-looking representations of the same underlying signal.
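A minimal sketch of the "mapping down involves finding a fit" point, using a first principal component as the fit. The data and the reaction-dimension names are entirely hypothetical:

```python
import numpy as np

# Hypothetical data: 6 comments rated along 3 reaction dimensions
# (e.g. "agree", "well-written", "important" -- names are illustrative).
X = np.array([
    [ 2.0,  1.0,  1.5],
    [-1.0, -0.5, -1.0],
    [ 1.5,  1.0,  1.0],
    [-2.0, -1.5, -1.5],
    [ 0.5,  0.5,  0.0],
    [ 1.0,  0.5,  1.0],
])

# Center the data, then find the best one-dimensional fit
# (the first principal component, via SVD).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[0]          # each comment's position on the 1-D axis

# The fit is lossy: variance not captured by the first component is the
# error you accept when you collapse the dimensions back down.
explained = s[0]**2 / (s**2).sum()
print(f"1-D projection explains {explained:.1%} of the variance")
```

The point of the sketch: the collapsed score is only as trustworthy as the fit, and different choices of fit would rank the same comments differently.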

Maybe that's not bad: the default sort order is newest-to-oldest -- basically arbitrary -- and for most cases, "generally positive" and "generally negative" signal will be sorted in the correct order. But I still feel some suspicion because it's just one UI feature and it took you about two pages of words to pitch it.

Comment by han on Holy Ghost in the Cloud (review article about christian transhumanism) · 2017-04-20T04:48:39.188Z · score: 5 (5 votes) · LW · GW

I think there's a rule-of-thumby reading of this that makes a little bit more sense. It's still prejudiced, though.

A lot of religions have a narrative that ends in true believers being saved from death and pain, after which people aren't going to struggle over petty issues like scarcity of goods. I run into transhumanists every so often who have bolted these ideas onto their narratives. According to some of these people, the robots are going to try hard to end suffering and poverty, and they're going to make sure most of the humans live forever. In practice, that goal is dubious from a thermodynamics perspective, and even if it weren't, some of our smarter robots are currently doing high-frequency trading and winning ad revenue for Google employees. That alone has probably increased net human suffering -- and they're not even superintelligent.

I imagine some transhumanism fans must have good reasons to put these things in the narrative, but I think it's extremely worth pointing out that these are ideas humans love aesthetically. If it's true, great for us, but it's a very pretty version of the truth. Even if I'm wrong, I'm skeptical of people who try to make definite assertions about what superintelligences will do, because if we knew what superintelligences would do then we wouldn't need superintelligences. It would really surprise me if it looked just like one of our salvation narratives.

(obligatory nitpick disclaimer: a superintelligence can be surprising in some domains and predictable in others, but I don't think this defeats my point, because for the conditions of these peoples' narrative to be met, we need the superintelligence to do things we wouldn't have thought of in most of the domains relevant to creating a utopia)

Comment by han on April '17 I Care About Thread · 2017-04-20T04:33:20.141Z · score: 2 (2 votes) · LW · GW

I think it's a little bit worse than this.

A lot of people who gamble compulsively don't do it because the odds are beyond their understanding. (It's really easy to play slots a bunch of times, lose a lot of money, and realize you lost a lot of money.) There's something neurologically strange about people who gamble frequently even though they lose, and it's hard to pinpoint, but it seems like variable reinforcement is winning out over logic.

If you buy a large number of lottery tickets, you're pretty likely to win some sort of prize. Related example: slot machines are designed to generate a bonus round or a jackpot about once per ~$100 wagered, and that's a pretty normal level of play for someone who does it compulsively. Also, like casino games such as slots and blackjack, lottery tickets are pretty good at generating near misses and losses disguised as wins, particularly scratcher and instant-ticket lotteries, because those tend to involve a small pool of symbols and elaborate presentation.

There's also a giant sunk cost fallacy problem -- the problem is that understanding the sunk cost fallacy isn't enough to defeat it for a lot of people.

I would be willing to guess that a significant proportion of the people who play the lottery a lot probably have an accurate picture of the odds, but due to mental health problems they're going to continue to waste far too much money on it. I'd also be willing to guess that they generate most of the lottery proceeds just because even though they're numerically few, they buy more tickets than anyone else.

Comment by han on Open thread, Feb. 06 - Feb. 12, 2017 · 2017-02-10T22:44:17.319Z · score: 0 (0 votes) · LW · GW

Thank you for the information! My brain does something weird when I see the word "actually," so I don't think I was charitable when I read your post.

Comment by han on Are we running out of new music/movies/art from a metaphysical perspective? · 2017-02-10T22:43:19.603Z · score: 0 (0 votes) · LW · GW

Oh, absolutely! It's misleading for me to talk about it like this because there's a couple of different workflows:

  • train for a while to understand existing data, then optimize for a long time to try to impress the activation layer that knows the most about what the data means. (AlphaGo's evaluation network, Deep Dream) Under this process you spend a long time optimizing for one thing (the network's ability to recognize) and then a long time optimizing for another thing (how much the network likes your current input)
  • train a neural network to minimize a loss function based on another neural network's evaluation, then sample its output. (DCGAN) Under this process you spend a long time optimizing for one thing (the neural network's loss function) but a short time sampling another thing. (outputs from the neural net)
  • train a neural network to approximate existing data and then just sample its output. (seq2seq, char-rnn, PixelRNN, WaveNet, AlphaGo's policy network) Under this process you spend a long time optimizing for one thing (the loss function again) but a short time sampling another thing. (outputs from the neural net)

It's kind of an important distinction because like with humans, neural networks that can improvise in linear time can be sampled really cheaply (taking deterministic time!), while neural networks that need you to do an optimization task are expensive to sample even though you've trained them.
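A toy sketch of that distinction. The "network" here is just a stand-in quadratic scorer with an analytic gradient, not a real model -- the point is only to contrast the two sampling regimes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: scores an input x by how close
# it is to a learned "prototype" w. A real network would be deeper;
# this keeps the two regimes easy to compare.
w = rng.normal(size=8)
score = lambda x: -np.sum((x - w) ** 2)
grad = lambda x: -2 * (x - w)        # analytic gradient of the score

# Regime 1 (Deep Dream / evaluation-network style): sampling is itself
# an optimization loop -- many gradient steps per generated output.
x = rng.normal(size=8)
for _ in range(500):
    x += 0.05 * grad(x)              # gradient *ascent* on the score

# Regime 2 (char-rnn / policy-network style): the model is a function
# you just run forward -- one cheap pass per generated output.
generate = lambda z: w + 0.01 * z    # stand-in for a trained generator
sample = generate(rng.normal(size=8))

print("optimized score:", score(x))
print("feed-forward score:", score(sample))
```

Regime 1 pays the optimization cost at every sample; regime 2 paid it all up front during training, which is why the feed-forward families are so much cheaper to sample.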

Comment by han on Open thread, Feb. 06 - Feb. 12, 2017 · 2017-02-10T22:32:28.910Z · score: 0 (0 votes) · LW · GW

I'm confused. Isn't it evident from the rest of my comment that I agree with you?

(On an unrelated note: I think my upvote button has vanished. Otherwise I would have clicked it for your post!)

Comment by han on Open thread, Feb. 06 - Feb. 12, 2017 · 2017-02-08T15:41:54.653Z · score: 2 (2 votes) · LW · GW

You're probably right! (At least some of the time.)

In music, I know a lot of people who think about things the same way you do, and they sensibly learn to use versatile tools like FM synthesis because FM synthesis covers a wide range of sounds really broadly. A lot of them even know how to make human voice-like sounds using these tools.

On average if you stick to those tools you'll do pretty well. They still fall back on using physical instruments for a lot of techniques, because you can do elaborate expressive things with physical instruments a lot more easily than with the machine.

In music, machines have been getting better, but they aren't perfect yet. A lot of input devices, even well-regarded ones, don't have the build quality of instruments made for professionals. It's really hard to simulate the physical feel of an acoustic instrument without actually building an acoustic instrument -- don't ask me why, but I've shopped around a lot and I've only found a couple input devices that really feel great for me after long-term use.

In art, there are a lot of hardware limitations. It's hard to make a tablet that looks great and feels great, and talking to an art program means you're subject to a lot of latency, and -- if your tablet doesn't have a display -- you're going to see your drawing appear on a different plane than you made it on. A lot of digital artists struggle with line quality and width variation because those things can be awkward on tablet input devices -- and depending on medium, those are often super fundamental (1) to how you pick out parts and subparts of an image and (2) to how you read its form.

You will notice there are a lot of really good digital painters and a lot of really bad digital line artists. That's a part of why!

Don't get me wrong, though. I think your point totally holds for parts of art that can be rehearsed and repeated an indefinite number of times until they look right. I also think that for planning and prototyping, you need to be able to iterate really fast and it needs to be fun, or at least unobtrusive. This is another one of those things that's also true for musicians: the really good musicians spend nine hours a day in the studio, and there has to be something about it that motivates them to get up in the morning.

Comment by han on Decision Theory subreddit · 2017-02-08T07:16:11.900Z · score: 1 (1 votes) · LW · GW

Thanks for the trouble of posting this!

Comment by han on Open thread, Feb. 06 - Feb. 12, 2017 · 2017-02-08T07:11:38.831Z · score: 1 (1 votes) · LW · GW

I don't see the point of exploring many different kinds of 2D painting. I would expect that a digital pen beats most other tools. Especially in the future as technology advances.

There are a lot of people who say that piano is the most versatile instrument, and they're right about that on a superficial level. You can do polyphonic things with a piano that you can't do with a clarinet or a trumpet. And like a digital pen, a digital piano can simulate a lot of other instruments, especially if you hook it up to flashy synthesis software that knows all the different articulations for those instruments.

But using a digital piano doesn't feel very much like using those instruments, and you won't express yourself the same way you would if you had one.

A calligraphy brush is really fun and you can't replicate the feeling of using it without the physical tools. Many of them have nice texture and you can feel their shape when you rotate them in your hands -- they're also lightweight, so if you're not holding one to the page it feels more like a pencil than like a paintbrush.

A lot of my friends do art, and I do art too when they ask me to try it with them. Different art tools feel different, and for some people, some tools are more fun than others. I think it's really important to try these things out before you make a decision about them.

Comment by han on Hacking humans · 2017-02-08T06:58:47.761Z · score: 0 (0 votes) · LW · GW

The risk with an AI is that it would be capable of changing humans in ways similar to the more dubious methods, while only using the "safe" methods.

I think what you're saying makes sense, but I'm still on Dagon's side. I'm not convinced this is uniquely an AI thing. It's not like being a computer gives you charisma powers or makes you psychic -- I think that basically comes down to breeding and exposure to toxic waste.

I'm not totally sure it's an AI thing at all. When a lot of people talk about an AI, they seem to act as if they're talking about "a being that can do tons of human things, but better." It's possible it could, but I don't know if we have good evidence to assume AI would work like that.

A lot of parts of being human don't seem to be visible from the outside, and current AI systems get caught in pretty superficial local minima when they try to analyze human behavior. If you think an AI could do the charisma schtick better than mere humans, it seems like you'd also have to assume the AI understands our social feelings better than we understand them.

We don't know what the AI would be optimizing for and we don't know how lumpy the gradient is, so I don't think we have a foothold for solving this problem -- and since finding that kind of foothold is probably an instance of the same intractable problem, I'm not convinced a really smart AI would have an advantage against us on solving us.

Comment by han on Are we running out of new music/movies/art from a metaphysical perspective? · 2017-02-08T06:35:23.720Z · score: 0 (0 votes) · LW · GW

Oh, thanks for the link!

I think you misunderstood me, or maybe I wasn't clear. I meant "of the strategies we used to search for musical ideas, none involved solving NP-complete problems, and some of them have dried up." I think what neural nets do to learn about music is pretty close to what humans do -- once a learning tool finds a local minimum, it keeps attacking that local minimum until it refines it into something neat. I think a lot of strategies for producing music work like that.

I definitely don't think most humans intentionally sit down and try to solve NP-complete problems when they write music, and I don't think humans should do that either.

Comment by han on Are we running out of new music/movies/art from a metaphysical perspective? · 2017-02-07T20:54:55.759Z · score: 2 (2 votes) · LW · GW

I really like your thread: thank you for writing me back!

I think you have good intuitions about how sound works. I don't think I can determine whether there's a consensus on what is good: I'd venture to guess that any audio humans can perceive sounds good to someone. A friend of mine sent me an album that was entirely industrial shrieking.

But I agree with you that there's a limit to the distinctness -- humans can only divide the frequency spectrum so many times before they can't hear gradation any more, they can only slice the time domain so finely before they can't hear transitions any more, and they can only slice the loudness domain so finely before they can't hear the difference between slightly louder and slightly quieter.

We can make basically any human-perceivable sound by sampling at 32 bits and 44.1kHz. Many of those sounds won't be interesting and they'll sound the same as other sounds, of course. But if nothing else, that puts an upper limit on how much variation you can have. Ten minutes of stereo audio at 32 bits and 44.1kHz comes to about 210MB of raw data, so you could probably express any human-perceivable song in a couple hundred megabytes, and in practice, using psychoacoustic compression like MP3, it would take a lot less space to do the interesting ones.
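Working through the arithmetic (keeping in mind that 32 bits is 4 bytes):

```python
# Uncompressed PCM audio size:
#   sample_rate * bytes_per_sample * channels * seconds
sample_rate = 44_100          # samples per second
bits_per_sample = 32          # 4 bytes per sample
channels = 2                  # stereo
seconds = 10 * 60             # ten minutes

size_bytes = sample_rate * (bits_per_sample // 8) * channels * seconds
print(f"{size_bytes / 1e6:.0f} MB")   # 212 MB stereo; half that for mono
```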

I think that for us to run out of music, the domain of things that sound good has to be pretty small. Humans probably haven't produced more than a billion pieces of music, but if we pretend all music is monophonic, that there are four possible note lengths, and twelve possible pitches (note: each of these assumptions is too small, based on what we hear in real music), then you only need to string six notes together before you get something that probably nobody has tried.
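The six-note figure checks out under those (deliberately undersized) assumptions:

```python
# Monophonic note sequences under the stated assumptions:
# 4 note lengths x 12 pitches = 48 choices per note.
choices_per_note = 4 * 12
songs_written = 1_000_000_000     # generous estimate of all music ever made

# Find the shortest sequence length with more combinations than songs.
n = 1
while choices_per_note ** n <= songs_written:
    n += 1
print(n, choices_per_note ** n)   # 6 notes give ~12.2 billion sequences
```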

What I was really responding to were these ideas that I thought were implicit in what you were saying (but I don't think you thought they were implicit):

  • if you try every human-perceptible sound, most of them will sound bad. (we don't know if they'll sound bad because there's a ton of variation in what sounds good)

  • if you try every human-perceptible sound, most of them won't be distinguishable. (The search space is so big that it doesn't matter if 99.99% of them aren't distinguishable. We don't know, in general, what makes music ideas distinguishable, so we don't know how big that is as a portion of the search space. If you think that this comes down to Complex Brain Things, which I imagine most composers do, then figuring out what makes them distinguishable might reduce to SAT. see all the things neural network researchers hate doing)

  • we are good enough at searching for combinations that we have probably tried all the ones that sound good. (there are so many combinations that exhaustively searching for them would take forever. If the problem reduces to SAT, we can't do that much better than exhaustively searching them)

I think that some of the strategies we use to search for musical ideas without having to solve any NP-complete problems have dried up. Minimalism is one technique we used to generate music ideas for a while, and it was easy enough to execute that a lot of people generated good songs very fast. But it only lasted about a decade before composers in that genre brought in elements of other genres to fight the staleness.

After a couple hundred years, Bach-style chorales have dried up (even though older kinds of polyphony haven't). The well of 1950s-style pop chord progressions appears to have dried up, but the orchestration style doesn't seem to have. (If we think "nothing new under the sun" comes down to Complex Brain Things, then we can't know for sure -- we can just guess by looking around and figuring out whether people are having trouble being creative in them.) A lot of conventional classical genres don't appear to have dried up -- new composers release surprising pieces in them all the time. (See e.g. Romantic-style piano. Google even did some really cool work on computer-generating original pieces that sound like that.)

When these search strategies die, a lot of composers are good at coming up with new search strategies for good songs. We don't know exactly how they do that, but modern pop music contains a lot of variation that's yet to filter into concert music, and my gut tells me that means the future is pretty bright.

Thanks!

Comment by han on Are we running out of new music/movies/art from a metaphysical perspective? · 2017-02-07T16:48:34.511Z · score: 3 (3 votes) · LW · GW

I think two of your premises aren't necessarily true:

So if I hit random piano keys with my hands a few times and call it a song, the consensus of music listeners would be that Beethoven's Für Elise is a better song.

Probably, but I think your example is a little bit too extreme to demonstrate your point. There are a lot of genres, like taarab, that won't sound like good music to you because of your cultural background. Acid house probably wouldn't sound good to people who were raised in the 1800s, either. There are commonalities between how people appreciate music, but people come up with new ways to introduce musicality to a piece really often, which means that it's hard to enumerate all the songs there could be.

If atonal or microtonal music suddenly got trendy, you'd come up with all kinds of new tone patterns we didn't have before. If people started thinking about timbre differently, we could come up with instruments we don't know how to listen to now. Both of these things happened after the first synthesizers came out. I don't think you can predict in advance what will make people think "this sounds good."

the general consensus is that the best classical artists are from over 50-100 years ago

The great classical artists of the time of Debussy and Ravel were musicians like Chopin and Beethoven. The great classical artists of the time of Stravinsky and Schoenberg were musicians like Debussy and Ravel. Reich and Glass had Stravinsky and Schoenberg (and maybe Gershwin), and now we're venerating Reich and Glass. Arvo Pärt is probably going to get canonized real soon now.

I think that when you're talking about "classical music" you're talking about music that most people are only exposed to in curated form. It seems like when that happens, curators stick to examples that are really broadly accessible, which isn't a good way to get a picture of the whole genre. The last trends of really broadly accessible music were 1800s romanticism and 1960s minimalism, and 1960s minimalism doesn't seem old enough for curators to put it on the classical music shelf.

It's not like painting ended with Da Vinci, but today's public doesn't particularly like Lichtenstein, Warhol, Rothko, Picasso, and so on.

This doesn't undermine your point, but I think you might want to investigate modern concert music a little more before you make some of these assertions.