Posts

Comments

Comment by mathiaskirkbonde on Should we write more about social life? · 2020-08-20T12:32:57.635Z · score: 5 (3 votes) · LW · GW

I would strongly prefer a LessWrong that is completely devoid of this.

Half the time it ends up in spiritual vagueness, of which there's already too much on LessWrong. The other half ends up being toxic male-centric dating advice.

Comment by mathiaskirkbonde on Inner Alignment: Explain like I'm 12 Edition · 2020-08-01T17:57:44.362Z · score: 15 (9 votes) · LW · GW

For those who, like me, have the attention span and intelligence of a door hinge, the ELI5 edition is:

Outer alignment is trying to find a reward function that is aligned with our values (making the AI produce good stuff rather than paperclips).

Inner alignment is the act of ensuring our AI actually optimizes the reward function we specify.

An example of poor inner alignment would be us humans in the eyes of evolution. Instead of doing what evolution intended, we use contraceptives so we can have sex without procreation. If evolution had gotten its inner alignment right, we would care as much about spreading our genes as evolution does!
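A minimal toy sketch of the distinction, under my own framing of the evolution example; the function names and numbers are purely illustrative and not from the post:

```python
# Purely illustrative sketch of the outer/inner alignment distinction.
# These toy functions are stand-ins, not anyone's actual formalism.

def outer_reward(outcome):
    # Outer alignment question: did we pick the right thing to reward at all?
    # Here "evolution" rewards the number of descendants.
    return outcome["descendants"]

def learned_objective(outcome):
    # Inner alignment failure: the trained agent ends up pursuing a proxy
    # that correlated with reward during "training" (sex), not the reward itself.
    return outcome["sex"]

# Once contraceptives exist, the proxy and the true reward come apart:
outcome = {"sex": 10, "descendants": 0}
print(outer_reward(outcome))       # 0  -> what the base objective wanted
print(learned_objective(outcome))  # 10 -> what the agent actually optimizes
```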

Comment by mathiaskirkbonde on To what extent is GPT-3 capable of reasoning? · 2020-07-22T20:35:40.253Z · score: 7 (6 votes) · LW · GW

GPT-3's goal is to accurately predict a text sequence. Whether GPT-3 is capable of reasoning, and whether we can get it to explicitly reason, are two different questions.

If I had you read Randall Munroe's book "What If?" but tore out one page and asked you to predict what would be written as the answer, there are a few good strategies that come to mind.

One strategy would be to pick random verbs and nouns from previous questions and hope some of them will be relevant for this question as well. This strategy will certainly do better than if you picked your verbs and nouns from a dictionary.

Another, much better, strategy would be to think about the question and actually work out the answer. Your answer will most likely have many verbs and nouns in common with the real one, and the numbers you supply will certainly be closer than if they were picked at random! The problem is that this requires actual intelligence, whereas the former strategy can be accomplished with very simple pattern matching.

To accurately predict certain sequences of text, you get better performance if you're actually capable of reasoning. So the best version of GPT needs to develop intelligence to get the best results.

I think it has, and that it uses varying degrees of reasoning to answer any given question, depending on how useful it expects the intelligent answer to be for predicting the sequence. This is why it's difficult to wrangle reason out of GPT-3: it doesn't always think using reason will help it!

Similarly, it can be difficult to wrangle intelligent reasoning out of humans, because that isn't what we're optimized to output. Echoing many of the critiques I see of GPT-3, I could criticize humans in a similar manner:

"I keep asking them for an intelligent answer to the dollar value of life, but they just keep telling me how all life has infinite value to signal their compassion."

Obviously humans are capable of answering the question: we behave every day as if life has a dollar value, but good luck getting us to explicitly admit that! Our intelligence is optimized towards all manner of things other than explicitly generating a correct answer.

So is GPT-3's. And just as most humans are debatably intelligent, so is GPT-3.

Comment by mathiaskirkbonde on Six economics misconceptions of mine which I've resolved over the last few years · 2020-07-13T11:19:32.879Z · score: 7 (4 votes) · LW · GW

I don't get the divestment argument; please help me understand why I'm wrong.

Here's how I understand it:

If Bob offers to pay Alice whatever Evil-Corp™ would have paid her in stock dividends, in exchange for what Alice would have paid for an Evil-Corp™ stock, then Evil-Corp™ has to find another buyer. Since Alice was the buyer willing to pay the most, Evil-Corp™ now loses the difference between what Alice was willing to pay and what the next-most-willing buyer, Eve, is willing to pay.
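For concreteness, here is a toy calculation of my reading of the argument; the names and numbers are entirely made up, not from the original post:

```python
# Toy numbers, purely illustrative of the divestment mechanism as I understand it.
alice_bid = 100.0  # the most the keenest buyer (Alice) would pay per share
eve_bid = 95.0     # what the next-most-willing buyer (Eve) would pay

# If Alice divests (because Bob replicates the dividends for her privately),
# the share must be sold to Eve instead, so the seller loses the spread:
loss_per_share = alice_bid - eve_bid
print(loss_per_share)  # 5.0
```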

Is that understanding correct, or am I missing something crucial?

If my understanding is right, then I don't understand why divestment works.

Let's assume I know Bob is doing this and I have the same risk profile as Alice. I know the market price to be distorted: Evil-Corp™ stocks are being sold for less than what they're worth! After all, Alice deemed the stock to be worth more than what it sold for. If it were not worth the price Alice was willing to pay for it, she wouldn't have offered that price.

Why wouldn't I just buy the stock from Eve, offering to pay the price set by Alice?

Comment by mathiaskirkbonde on Self-Predicting Markets · 2020-06-11T14:18:18.426Z · score: 5 (3 votes) · LW · GW

As Benjamin Graham put it:

In the short run, the market is a voting machine; in the long run, the market is a weighing machine.

Comment by mathiaskirkbonde on The unexpected difficulty of comparing AlphaStar to humans · 2019-09-23T18:25:19.777Z · score: 3 (3 votes) · LW · GW

I think that's a very fair way to put it, yes. One way this becomes very apparent is that you can have a conversation with a starcraft player while he's playing. It will be clear the player is not paying you his full attention at particularly demanding moments, however.

Novel strategies are thought up in between games and refined through dozens of practice games. In the end you have a mental decision tree of how to respond to most situations that could arise. Without having played much chess, I imagine this is how people approach chess openings as well.

I considered using System 1 and System 2 analogies, but because of certain reservations I have with the dichotomy, I opted not to. Basically, I don't think you can cleanly divide human intelligence into those two categories.

Ask a starcraft player why they made a certain maneuver and they will, for the most part, be able to tell you why they did it, despite never having thought the reason out loud until you asked. There is some deep strategic thinking being done at the instinctual level. This intelligence is just as real as System 2 intelligence and should not be dismissed as being merely reflexes.

My central critique is essentially of seeing starcraft 'mechanics' as unintelligent. Every small maneuver has a (most often implicit) reason for being made. Starcraft players are not limited by their physical capabilities nearly as much as they are limited by their ability to think fast enough. If we are interested in something other than what it looks like when someone can think at much higher speeds than humans, we should be picking a game other than starcraft.

Comment by mathiaskirkbonde on The unexpected difficulty of comparing AlphaStar to humans · 2019-09-20T23:33:48.545Z · score: 33 (17 votes) · LW · GW

Before doing the whole EA thing, I played starcraft semi-professionally. I was consistently ranked grandmaster, primarily making money from coaching players of all skill levels. I also co-authored an ML paper on starcraft II win prediction.

TL;DR: Alphastar shows us what it will look like when humans are beaten in a completely fair fight.

I feel fundamentally confused about a lot of the discussion surrounding alphastar. The entire APM debate feels completely misguided to me and seems to be born out of fundamental misunderstandings of what it means to be good at starcraft.

Being skillful at starcraft is the ability to compute which set of actions needs to be made, and to do so very fast. A low-skilled player has to spend seconds figuring out their next move, whereas a pro player will determine it in milliseconds. This skill takes years to build, through mental caching of game states, so that the right moves become instinct and can be quickly computed without much mental effort.

As you showed clearly in the blog post, Mana (or any other player) can reach a much higher APM by mindlessly tabbing between control groups. You can click predetermined spots on the screen more than fast enough to control individual units.

We are physically capable of playing this fast, yet we do not.

The reason for this is that in a real game my actions are limited by the speed at which I can figure them out. Likewise, if you were to play speed chess against AlphaZero you would get creamed, not because you can't move the pieces fast enough, but because AlphaZero can calculate much better moves much faster than you can.

I am convinced a theoretical AI playing with a mouse and keyboard, with motor controls equivalent to a human's, would largely be making the same 'inhuman' plays we are seeing currently. Difficulty of input is simply not the bottleneck.

Alphastar can only do its 'inhuman' moves because it's capable of calculating starcraft equations MUCH faster than humans are. Likewise, I can only do 'pro' moves because I'm capable of calculating starcraft equations much faster than an amateur.

You could argue that it's not showcasing the skills we're interested in, as it doesn't need to put the same emphasis on long-term planning and outsmarting its opponent that equally matched human players have to. But that would also be the case if you put me against someone who's never played the game.

If what we really care about is proving that it can do long-term thinking and planning in a game with a large action space and imperfect information, why choose starcraft? Why not select something like Frozen Synapse, where the only way to win is to fundamentally understand these concepts?

The entire debate about 'fairness' seems somewhat misguided to me. Even if we found an APM measure that looks fair, I could move the goalposts and point out that it makes selections and commands with perfect precision, whereas a human has to do it through a mouse and keyboard. There are moves that are extremely risky to pull off due to the difficulty of precisely clicking things. If we supplied it a virtual mouse to move around, I could move the goalposts again and complain that my eyes cannot take in the entire screen at once.

It's clear alphastar is not a fair fight, yet I think we got a very good look at what the fair fight will eventually look like. Alphastar fundamentally is what superhuman starcraft intelligence looks like (or at least it will be with more training), and it's abusing the exact skill set that makes pro players stand out from amateurs in the first place.

Comment by mathiaskirkbonde on Sunny's Shortform · 2019-08-18T07:56:11.720Z · score: 1 (1 votes) · LW · GW

"Science confirms video games are good" is essentially the same statement as "The bible confirms video games are bad" just with the authority changed. Luckily there remains a closer link between the authroity "Science" and truth than the authority "The bible" and truth so it's still an improvement.

Most people still update their worldview based upon whatever their tribe has agreed upon as their central authority. I'm having a hard time criticizing people for doing this, however. This is something we all do! If I see Nick Bostrom writing something slightly crazy that I don't fully understand, I will still give credence to his view simply for being an authority in my worldview.

I feel like my criticism of people blindly believing anything labeled "science" is essentially criticizing people for not being smart enough to choose better authorities, but that's a criticism that applies to everyone who doesn't have the smartest authority (who just so happens to be Nick Bostrom, so we're safe).

Maybe there's a point to be made about not blindly trusting any authority, but I'm not smart enough to make that point, so I'll default to someone who is.

Comment by mathiaskirkbonde on Announcing the Center for Applied Postrationality · 2019-04-02T16:23:31.544Z · score: 3 (2 votes) · LW · GW

GPT2, turned post-rationalist maybe!

Comment by mathiaskirkbonde on Experimental Open Thread April 2019: Socratic method · 2019-04-02T16:11:43.258Z · score: 1 (1 votes) · LW · GW

I really like this line of thinking. I don't think it is necessarily opposed to the typical map-territory model, however.

You could in theory explain all there is to know about the territory with a single map; however, that map would become really dense and hard to decipher. Instead, having multiple maps, one with altitude and another with temperature, is instrumentally useful for best understanding the territory.

We cannot comprehend the entire territory at once, so it's instrumentally useful to view the world through different lenses and see what new information about the world the lens allows us to see.

You could then go a step further, which I think is what you're doing, and say that all that is meaningful to talk about are the different maps. But then I start becoming a bit confused about how you would evaluate any map's usefulness, because if you answered me 'whether it's instrumentally useful or not', I'd question how you would evaluate whether something is instrumentally useful when you can only judge it in terms of other maps.

Comment by mathiaskirkbonde on Announcing the Center for Applied Postrationality · 2019-04-02T14:45:08.169Z · score: 3 (3 votes) · LW · GW

Believing the notion that one can 'deconfuse' themself on any topic is an archetypal mistake of the rationalist. Only in the spirit of all things that are essential to our personal understanding can we expect our beliefs to conform to the normality of our existence. Asserting that one can know anything certain of the physical world is, by its definition, a foolhardy pursuit only someone with a narrow and immature understanding of physicality would consider meaningful.

Believing that any 'technique' could be used to train one's mind in the post-rationalistic school of thought is to entirely miss the purpose of its existence. By its very nature, any technique applied would cease to have an effect, as any individual applying the technique would become aware of the totality of the situation, and in doing so reject the message of the technique. The best any foundation could do is to become aware of its collective shadow and integrate it into the eternal subconscious that is its culture.

Only then can we achieve the post-rationalistic zeitgeist and rid ourselves of the collective cloak of delusion 'rationalism' has allowed us to wear!

EDIT: I may have nailed the impression too well