Comments

Comment by karl_smith on Open Thread: March 2010, part 2 · 2010-03-12T17:04:02.743Z · score: 1 (1 votes) · LW · GW

You are at the state flagship. 82% at College Park is roughly equal to Urbana-Champaign's 80%. The point is that top schools pick students who can get through and/or do a better job of getting students through.

Comment by karl_smith on Open Thread: March 2010, part 2 · 2010-03-12T16:52:18.345Z · score: 0 (0 votes) · LW · GW

Tim,

Thanks, input like this helps me try to think about the economic issues involved.

Can you talk a little about the depth of recursion already possible? How much assistance are these refactoring programs providing? Can the results be used to speed up other programs, or can it only improve its own development, etc.?

Comment by karl_smith on Open Thread: March 2010, part 2 · 2010-03-11T18:40:19.093Z · score: 1 (1 votes) · LW · GW

I'd appreciate some feedback on a brain dump I did on economics and technology. Nothing revolutionary here. Just want people with more experience on the tech side to check my thinking.

Thanks in advance

http://modeledbehavior.com/2010/03/11/the-economics-of-really-big-ideas/

Comment by karl_smith on Open Thread: March 2010 · 2010-03-11T17:15:01.211Z · score: 1 (1 votes) · LW · GW

I have a 2000+ word brain dump on economics and technology that I'd appreciate feedback on. What would be the protocol? Should I link to it? Copy it into a comment? Start a top-level article about it?

I am not promising any deep insights here, just my own synthesis of some big ideas that are out there.

Comment by karl_smith on Priors and Surprise · 2010-03-03T18:52:36.832Z · score: 5 (5 votes) · LW · GW

Perhaps I am missing something, but it seems to me that a world in which Godzilla was common knowledge would have a completely different history of biology. For one thing, it's hard to imagine that explaining Godzilla would not have been a major goal of philosophers and scientists since the earliest days.

I imagine one of the basic questions would be whether Godzilla was a beast or a god, and answering it would be a high priority. What does Godzilla want? Where did he come from? Has he always existed? Are there more? Do they mate?

These seem like really big-deal questions when confronted by a sea monster that occasionally destroys towns.

Comment by karl_smith on For progress to be by accumulation and not by random walk, read great books · 2010-03-03T18:20:12.321Z · score: 4 (4 votes) · LW · GW

So the easy answers might be:

Ben Bernanke

Mark Gertler

Michael Woodford

Greg Mankiw

It's not clear to me why macro-economists are rightly subject to such criticism. To me it's like asking a mathematician, "If you're so good at logical reasoning, why didn't you create the next killer app?"

Understanding how the economy works and applying that knowledge to a particular task are completely different.

Comment by karl_smith on For progress to be by accumulation and not by random walk, read great books · 2010-03-03T03:13:04.366Z · score: 0 (0 votes) · LW · GW

So clearly adapting the new idea is useful.

However, it may also be the case that there is an old idea which if re-examined will be seen to be useful in and of itself.

The problem with the Austrians is that their ideas are being considered and they are being rejected. See Bryan Caplan's "Why I Am Not an Austrian Economist." (The link seems not to be working.)

Comment by karl_smith on For progress to be by accumulation and not by random walk, read great books · 2010-03-02T20:54:24.080Z · score: 4 (4 votes) · LW · GW

I think this post overstates the case a bit. My general impression is that the scientific method "wins" even in economics and that later works are better than earlier works.

Now it might be true that the average macro-economist of today understands less than Keynes did, but I'd be hard pressed to say that the best don't understand more. Moreover, there are really great distillers. In macro, for example, Hicks distilled Keynes into something that I would consider more useful than the original.

Nonetheless, I think it is correct that someone should be reading the originals. If not, there is the propensity for a particular distiller to miss an important insight and then for everyone else to go on missing it.

What this says to me is that there should be rewards to re-discovery. Suppose that I read Adam Smith and rediscover something great. I should be rewarded for that just as much as if I had come up with the idea myself. After all, it has the same effect on the current state of knowledge. However, that will not happen.

Rediscovering is not as prestigious as discovering, because it is not as difficult and does not signal intellectual greatness.

Comment by karl_smith on For progress to be by accumulation and not by random walk, read great books · 2010-03-02T20:27:45.577Z · score: 3 (5 votes) · LW · GW

I remember reading that one of the most g-loaded tests was recognition time. I think the experiment involved flashing letters and timing how long it took to press the letter on a keyboard. The key correlate was "time until finger left the home keys," which the authors interpreted as the moment you realized what the letter was.

I also heard a case that sensory memory lasts for a short and relatively constant time among humans, and that differences in cognitive ability were strongly related to the speed of pushing information into sensory memory. The greater the speed, the larger the concept that could be pushed in before key elements started to leak out.

Comment by karl_smith on Open Thread: March 2010 · 2010-03-02T02:35:51.854Z · score: 0 (0 votes) · LW · GW

I had conceived of something like the Turing test but for intelligence period, not just general intelligence.

I wonder if general intelligence is about the domains under which a control system can perform.

I also wonder whether "minds" is too limiting a criterion for the goals of FAI.

Perhaps the goal could be stated as an IUCS. However, we don't know how to build an IUCS. So perhaps we can build a control system whose reference point is an IUCS. But we don't know how to build that either, so we build a control system whose reference point is a control system whose reference point . . . until we get to something that we can build. Then we press start.

Maybe this is a more general formulation?

Comment by karl_smith on Rationality quotes: March 2010 · 2010-03-02T00:59:41.758Z · score: 0 (0 votes) · LW · GW

This was my original thought until I realized that of course it cancels or else the earth would crack into pieces.

Comment by karl_smith on Open Thread: March 2010 · 2010-03-02T00:30:16.173Z · score: 0 (0 votes) · LW · GW

Richard, do you believe that the quest for FAI could be framed as a special case of the quest for the Ideal Ultimate Control System (IUCS)? That is, intelligence in and of itself is not what we are after, but control. Perhaps FAI is the only route to IUCS, but perhaps not.

Note: Originally I wrote Friendly Ultimate Control System but the acronym was unfortunate.

Comment by karl_smith on Open Thread: March 2010 · 2010-03-01T20:15:06.087Z · score: 1 (1 votes) · LW · GW

Well I would consider the Pencil-MrHen system as intelligent. I think further investigation would be required to determine that the pencil is not intelligent when it is not connected to MrHen, but that MrHen is intelligent when not connected to the pencil. It then makes sense to say that the intelligence originates from MrHen.

The problem with the self-referential definition, from my perspective, is that it presumes a self.

It seems to me that ideas like "I" and "want" graft humanness onto other objects.

So, I want to see what happens if I try to divorce all of my anthropocentric assumptions about self, desires, wants, etc. I want to measure a thing and then, by a set of criteria, declare that thing to be intelligent.

Comment by karl_smith on Rationality quotes: March 2010 · 2010-03-01T20:07:23.033Z · score: 4 (4 votes) · LW · GW

It doesn't. My thought process was too silly to even bother explaining.

Comment by karl_smith on Open Thread: March 2010 · 2010-03-01T18:17:03.736Z · score: 0 (0 votes) · LW · GW

Thoughts about intelligence.

My hope is that some altruistic person will read this comment, see where I am wrong and point me to the literature I need to read. Thanks in advance.

I've been thinking about the problem of general intelligence. Before going too deeply, I wanted to see if I had a handle on what intelligence is, period.

It seems to me that the people sitting in the library with me now are intelligent and that my pencil is not. So what is the minimum my pencil would have to do before I suddenly thought that it was intelligent?

Moving alone doesn't count. If I drop the pencil it will fall towards the table. You could say that I caused the pencil to move, but I am not sure this isn't begging the question.

Now suppose the first time I dropped the pencil, it fell to the floor. Now I go to drop it a second time, but I do it over the table. However, the pencil flies around the table and hits the same spot on the floor.

Now it's got my attention. But maybe it's something about the table. So I drop the pencil but put my hand in the way. Still the pencil goes around my hand.

I put my foot over the spot on the floor and drop the pencil. It flies around my foot and then into the crevice between my foot and the floor and gets stuck. As soon as I lift my foot the pencil goes to the same spot.

I believe I should now conclude that my pencil is intelligent. This has something to do with the following facts.

1) The pencil kept going to the same spot, as if it had a "goal."

2) The pencil was able to respond to "obstacles" in ways not predicted by my original simple theory of pencil behavior.

I believe that I would say the pencil is more intelligent if it could pass through more "complicated" obstacles.

Here are some of my basic problems:

1) What is a "goal" beyond what my intuition says?

2) Similarly, what is an "obstacle"?

3) And what is "complicated"?

I have some sense that an "obstacle" is related to reducing the probability that the goal will be reached.

I have some sense that "complicated" has to do with the degree to which the probability is reduced.

Thoughts? Suggestions for readings?

Comment by karl_smith on Rationality quotes: March 2010 · 2010-03-01T17:57:44.141Z · score: 1 (1 votes) · LW · GW

I just read their website.

It's embarrassing, but I have to say that honestly the centripetal force argument never occurred to me before. Rough calculations seem to indicate that a large (100 kg) man should be almost half a pound heavier in the daytime than at night. Kinda cool.

Now I am dying to get something big and stable enough to see if my home scale can pick it up.
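The earlier comment above (2010-03-02) notes the catch: the effect cancels, because a body in free fall around the Sun feels essentially no net solar pull. That cancellation can be checked with rough numbers. The following is only a sanity-check sketch using standard approximate astronomical constants; the variable names and the final milligram-scale figure are mine, not from the original comments:

```python
import math

# Approximate standard constants (assumed values)
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
AU = 1.496e11          # mean Earth-Sun distance, m
R_EARTH = 6.371e6      # Earth radius, m
YEAR = 365.25 * 86400  # orbital period, s

# The Sun's gravitational acceleration at Earth's distance ...
g_sun = G * M_SUN / AU**2

# ... is almost exactly balanced by the centrifugal term of Earth's orbit.
omega = 2 * math.pi / YEAR
a_orbit = omega**2 * AU

print(f"Sun's pull:       {g_sun:.3e} m/s^2")
print(f"Centrifugal term: {a_orbit:.3e} m/s^2")

# Only the tidal residual (roughly 2 * g_sun * R_EARTH / AU) survives
# at the Earth's surface.
tidal = 2 * g_sun * R_EARTH / AU
print(f"Tidal force on a 100 kg man: {100 * tidal:.1e} N")
```

The two accelerations agree to better than one part in a thousand, and the leftover tidal force on a 100 kg person is on the order of 10^-5 N (milligrams of apparent weight), far below what a home scale could detect.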

Comment by karl_smith on Welcome to Less Wrong! · 2010-02-24T17:25:50.145Z · score: 1 (1 votes) · LW · GW

Yes,

I could try to say that my work focuses only on understanding how growth and development take place, for example, but in practice it doesn't work that way.

A conversation with students, policy makers, or even fellow economists will not go more than 5-10 minutes without taking a normative tack. Virtually everyone is in favor of more growth, and so the question is invariably, "What should we DO to achieve it?"

Comment by karl_smith on Welcome to Less Wrong! · 2010-02-21T21:45:58.623Z · score: 0 (0 votes) · LW · GW

I don't have any connection to BIAC.

My specialty is human capital (education) and economic growth and development.

Comment by karl_smith on Welcome to Less Wrong! · 2010-02-19T00:23:20.489Z · score: 6 (6 votes) · LW · GW

Name: Karl Smith

Location: Raleigh, North Carolina

Born: 1978

Education: PhD, Economics

Occupation: Professor - UNC Chapel Hill

I've always been interested in rationality and logic but was sidetracked for many (12+) years after becoming convinced that economics was the best way to improve the lives of ordinary humans.

I made it to Less Wrong completely by accident. I was into libertarianism, which led me to Bryan Caplan, which led me to Robin Hanson (just recently). Some of Robin's stuff convinced me that cryonics was a good idea. I searched for cryonics and found Less Wrong. I have been hooked ever since. About 2 weeks now, I think.

Also, skimming this I see there is a 14-year-old on this board. I cannot tell you how that makes me burn with jealousy. To have found something like this at 14! Soak it in, Ellen. Soak it in.

Comment by karl_smith on Open Thread: February 2010, part 2 · 2010-02-19T00:04:02.575Z · score: 0 (0 votes) · LW · GW

I am nowhere near caught up on FAI readings, but here is a humble thought.

What I have read so far seems to assume a single-jump FAI. That is, once the FAI is set, it must take us to where we ultimately want to go without further human input. Please correct me if I am wrong.

What about a multistage approach?

The problem that people might immediately bring up is that a multistage approach might lead to elevating subgoals to goals. We say, "Take us to mastery of nanotech," and the AI decides to rip us apart and organize all existing ribosomes under a coherent command.

However, perhaps what we need to do is verify that any intermediate goal state is better than the current state.

So what if we have the AI guess a goal state, then simulate that goal state and expose some subset of humans to the simulation? The AI then asks, "Proceed to this stage or not?" The humans answer.

Once in the next stage we can reassess.

To give a sense of motivation: it seems that verifying the goodness of a future state is easier than trying to construct the basic rules of good-statedness.

Comment by karl_smith on Open Thread: February 2010, part 2 · 2010-02-18T22:10:38.939Z · score: 3 (3 votes) · LW · GW

Yes, I am working my way through the sequences now. Hearing these ideas makes one want to comment, but so frequently it's only a day or two before I read something that renders my previous thoughts utterly stupid.

It would be nice to have a "read this and you won't be a total moron on subject X" guide.

Also, it would be good to encourage the readings about Eliezer's intellectual journey. Though it's at the bottom of the sequences page, I used it as "rest reading" between the harder sequences.

It did a lot to convince me that I wasn't inherently stupid. Knowing that Eliezer has held foolish beliefs in the past is helpful.

Comment by karl_smith on You're Entitled to Arguments, But Not (That Particular) Proof · 2010-02-17T15:10:02.108Z · score: 2 (2 votes) · LW · GW

Well, that's of course not right. The primary loss in dropping an H-bomb on NYC is the loss of human life, both in a moral and an economic sense.

Here is a point to consider. Over the last 100 years the population of the earth has increased by 5 billion. We have created new places for all of those people to live and work, and that was done with a population much smaller than we have today. Over the next 100 years we may add 3 billion more, and we will need places for those people to live and work.

It's not immediately clear that the cost of building all of this in a new location is that large, relatively speaking.

Comment by karl_smith on Open Thread: February 2010, part 2 · 2010-02-17T15:03:04.724Z · score: 3 (3 votes) · LW · GW

"Probably good enough" doesn't engender a lot of confidence. It would seem a tragedy to go through all of this and then not be reanimated because you carelessly chose the wrong org.

On the other hand spending too much time trying to pick the right org does seem like raw material for cryocrastination.

Does anyone have thoughts / links on whole-body vitrification? ALCOR claims that this is less effective than going neuro, but CI doesn't seem to offer a neuro option anymore.

Comment by karl_smith on Open Thread: February 2010, part 2 · 2010-02-16T20:37:05.376Z · score: 7 (7 votes) · LW · GW

Could someone discuss the pluses and minuses of ALCOR vs. the Cryonics Institute?

I think Eliezer mentioned that he is with CI because he is young. My reading of the websites seems to indicate that CI leaves a lot of work to be potentially done by loved ones or local medical professionals who might not be in the best state of mind, or might not see fit to cooperate with a cryonics contract.

Thoughts?

Comment by karl_smith on You're Entitled to Arguments, But Not (That Particular) Proof · 2010-02-15T18:47:43.657Z · score: 3 (5 votes) · LW · GW

Eliezer:

Don't you realize that I have work to do and a personal life to engage in, without you posting things that I must obviously drop everything to read and think about, like the Bostrom paper? Have a heart, man. Have a heart.

Comment by karl_smith on Epistemic Luck · 2010-02-12T19:23:32.955Z · score: 5 (5 votes) · LW · GW

I see some problems here but it doesn't seem quite as intractable as Alicorn suggests.

If your beliefs are highly correlated with those of your teachers then you need to immerse yourself in the best arguments of the opposing side. If you notice that you are not changing your mind very often then you have a deeper problem.

To give a few related examples: one of the things that gives me confidence in my major belief structure is that I am an Atheist Capitalist. But as a child I was raised and immersed in Atheist Communism. I rejected the Communism but not the Atheism. At least in this small sample, my parents/early teachers were only 50% right in their basic belief structure, and that doesn't sound too unlikely.

On the other hand I have been troubled by the extent to which I have become more sensitive to liberal arguments over the past 2 years. My social and professional circle is overwhelmingly liberal. It is unlikely that this does not have an effect on my beliefs.

To compensate, I am attempting to immerse myself in more conservative blogs.

Now of course there is no way to be sure that the balancing act is working. However, if we take as a starting point that errors among well informed people are randomly distributed then as a rough approximation your adherence to the beliefs of your community should be proportional to the number of intellectuals who hold those same beliefs.