Mathisco's Shortform
post by Mathisco · 2021-04-04T08:09:53.671Z · LW · GW
comment by Mathisco · 2021-04-04T08:09:53.886Z · LW(p) · GW(p)
I once read a comment somewhere that Paul Graham is not a rationalist, though he shares some traits, like writing a lot of self-improvement advice. From what I can tell, Graham considers himself a builder: a builder of code and companies. But there is some overlap with rationalists, since Paul Graham mostly builds information systems. (He is somewhat disdainful of hardware, which I consider the real engineering, but then I am a physicist.) Rationalists are focused on improving their own intelligence and other forms of intelligence. So both spend a great deal of time building and improving intelligent information systems, as well as improving their own minds, but for different reasons. For one, the goal is simply to build, and self-improvement is a method; for the other, self-improvement is the goal and building is a method. Well, and for some the goal is to build a self-improving intelligence (that doesn't wipe us out).
Builders and rationalists. Experimentalists and theoretical empiricists. I suppose they work well together.
comment by eigen · 2021-04-14T21:22:42.514Z · LW(p) · GW(p)
This is a really good comment. If you care to know more about his thinking, he has a book called "Hackers & Painters" which I think sums up his views very well. But yes, it's a redistribution of wealth and power from strong people and bureaucrats to what he calls "nerds": people who know technology deeply and actually build things.
The idea of instrumental rationality touches the edges of the builders' territory, and you need it if you ever desire to act in the world.
comment by [deleted] · 2021-04-05T00:46:23.086Z · LW(p) · GW(p)
Note that for hardware, the problem is well-specified: you need a minimum instruction set to make a computer work. So long as you implement at least that minimum instruction set, and every supported instruction (which all fall into similar classes of functions) executes bit-for-bit correctly, you're done. It does keep getting harder to build a faster computer at similar manufacturing cost and power consumption, because the physics keeps getting harder.
But it is in some ways a "solved problem". Whether a given computer is "better" is a measurable parameter: the hardware people try things, and the systems and software engineers adopt the next chip if it meets their definition of "better".
So yes, if we want to see new classes of applications beyond what we have already seen, that's a software and math problem.
comment by Mathisco · 2021-04-05T11:40:13.747Z · LW(p) · GW(p)
I'm not sure I follow. Whether it's an evolving configuration of atoms or of bits, both can lead to new applications. The main difference, to me, seems to be that today it is typically harder to configure atoms than bits, but perhaps that's just by our own design of the atoms underlying the bits? If some desired information system required a specific atomic configuration, you'd be hardware-constrained again.
Let's say that in order to build AGI we find out you actually need super-power-efficient computronium, and silicon can't do that; you need carbon. Now it's no longer a solved hardware problem: you are going to have to invest massively in carbon-based computing, and Paul and the rationalists are stuck waiting for the hardware engineers.
comment by [deleted] · 2021-04-05T17:39:08.253Z · LW(p) · GW(p)
I am saying that below a certain level of abstraction it becomes a solved problem, in the sense that you have precisely defined what correctness is and have fully represented your system. You can then trivially check any output and validate it against a model.
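To make the "validate it against a model" point concrete, here is a toy sketch (my own illustration, not from the thread): a gate-level-style 4-bit ripple-carry adder checked exhaustively, bit for bit, against the reference model of plain integer addition. Because the input space is tiny (16 × 16 cases), correctness can be verified completely, which is the sense in which this kind of hardware problem is "solved".

```python
def adder_4bit(a: int, b: int) -> int:
    """Ripple-carry adder over two 4-bit inputs, built from bitwise gates."""
    result, carry = 0, 0
    for i in range(4):
        x = (a >> i) & 1
        y = (b >> i) & 1
        s = x ^ y ^ carry                    # sum bit at position i
        carry = (x & y) | (carry & (x ^ y))  # carry out of position i
        result |= s << i
    return result | (carry << 4)  # final carry becomes bit 4

# The reference model is just integer addition; check every case.
for a in range(16):
    for b in range(16):
        assert adder_4bit(a, b) == a + b, (a, b)
print("all 256 cases match the model")
```

Real hardware verification works on vastly larger state spaces and uses formal methods rather than brute force, but the principle is the same: the specification is complete enough that any output can be mechanically compared against a model.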
The reason software fails constantly is that we don't have a machine-checkable definition of what correctness means. Unit tests help, but they are nowhere near as reliable as tests for silicon correctness. Moreover, software just ends up being absurdly more complex than hardware, and AI systems are worse still.
Part of it is "unique complexity". A big hardware system is millions of copies of the same repeating element, and locality matters: an element cannot affect another one far away unless a wire connects them. A big software system is millions of lines of often duplicated, nested, and invisibly coupled code.