Rationality Power Tools
post by Nic_Smith · 2010-09-19T06:20:05.481Z · LW · GW · Legacy · 67 comments
Summary: Rationalists should win; however, it could take a really long time before a technological singularity or uploading provides powerful technology to aid rationalists in achieving their goals. It's possible today to create assistant computer software to help direct human effort and provide "hints" for clearer thinking. We should catalog such software when it exists and create it when it doesn't.
The Problem
We may be waiting awhile for a Friendly AI or similar “world changing” technology to appear. While technology continues to improve, the process of creating a Friendly AI seems extremely tricky, and there’s no solid ETA on the program. Uploading is still years to decades away. In the meantime, aspiring rationalists still have to get on with our lives.
Rationality is hard. Merely knowing about a bias is often not enough to overcome it. Even in cases where the steps to act rationally are known, the algorithm required may be more than can be done manually, or may require information which itself is not immediately at hand. However, a lot of things that are difficult become easier when you have the right tools. Could there be tools that supplement the effort involved in making a good decision? I suspect that this is the case, and will give several examples of programs that the community could work to create -- computer software to help you win. Because a lot of software is specifically created to address problems as they come up, it would also be worthwhile to maintain an index of already available software with special usefulness and applicability to Less Wrong readers.
The Opportunity
Some people have expressed concern that Less Wrong should be more focused on “doing things” rather than “knowing things” -- instrumental rather than epistemic rationality. Computer software, as a set of instructions for “doing something,” falls closer to the “instrumental” end of this spectrum (although for a very good program, it’s possible to learn a thing or two from the source code, or at least the documentation). The concept of open-source software is well-tested, and platforms for open-source projects are freely available and mature -- see Google Code, SourceForge, GitHub, etc. Additionally, many of us on Less Wrong already have the skills to contribute to such an effort directly. By my count, the 2009 survey shows that 71 of the respondents -- 46% -- are involved in computing professionally; it seems at first glance as if the basic skills to generate rationality power tools are already present in the community.
Currently, the only software listing on the wiki seems to be puzzle games and the only discussion I’m aware of for creating software in the community is the proposal for a Less Wrong video game.
What’s a Rationality Power Tool?
By “rationality power tool” I mean a computer program which is:
- At least somewhat specialized for a “to win” goal. Word processors are handy, but are not rationality power tools. Web servers are essential, but are basically used for anything and everything on the Internet. A rationality power tool lends itself especially toward rationality.
- General enough to be useful in many different situations. A script that takes data from a proprietary database, combines it with information from public records, and then prints out a report related to a particular company’s goals might be really handy, but is more akin to the instructions for an industrial robot on an assembly line than a power drill.
- Not just for training or demonstration -- games are great and I wholeheartedly support them, but most puzzles cannot be picked up and placed “in production.” Power tools are tools.
- Not just a lifehack. “Lifehack” is a more inclusive concept that also includes, say, the right web browser settings to make websites load 5% faster.
- Not trivial. A set of rules to sort email by keywords in the subject or a countdown timer to see how long you’ve been doing something are generally just too small to do a lot of “heavy lifting.” Power tools have power. However, even trivial concepts could be expanded into programs that fit -- see CRM114 and this discussion of website-time-tracking programs.
Program Examples and Proposals
Facebook Idea Futures
Idea futures are neat, but public opportunities for participating in them are few and far between. The Foresight Exchange is mostly moribund, and definitely shows its age -- Consensus Point has mostly focused on bringing prediction markets to large corporations, which is fine as far as it goes, but not necessarily optimal for getting the word out and letting people who are new to the concept play with it. The Popular Science prediction market closed in 2009. Intrade has questionable legality, at best, from the U.S., has lackluster marketing (e.g. the 2008 U.S. elections are still listed at the top of the sidebar!), and, because it uses real money, is limited both in the number of claims it can support and in its usefulness for introducing the concept to those unfamiliar with it. Additionally, none of these markets have a large number of conditional claims (e.g. you can use Intrade for a probability that a Democrat wins the 2012 presidential election and a probability that Palin is the 2012 Republican nominee, but a conditional claim of some sort would be needed for a probability that a Democrat wins given that Palin is the Republican nominee).
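To make the gap concrete, a conditional claim can’t be priced from the two existing claims alone; you also need a claim on the conjunction, since P(Democrat wins | Palin is nominee) = P(Democrat wins and Palin is nominee) / P(Palin is nominee). A minimal sketch, with made-up prices rather than real Intrade quotes:

```python
def conditional_price(p_joint, p_condition):
    """Implied price of 'A given B' from prices on 'A and B' and on 'B'."""
    if p_condition == 0:
        raise ValueError("conditioning claim has zero price")
    return p_joint / p_condition

# Hypothetical prices on a 0-1 scale, not real Intrade quotes.
p_palin_nominee = 0.20   # claim: Palin is the 2012 Republican nominee
p_dem_and_palin = 0.15   # the missing conjunction claim: Democrat wins AND Palin is nominee

print(conditional_price(p_dem_and_palin, p_palin_nominee))  # 0.75
```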
At the same time, browser-based games have become much more popular recently. The basic concept of the Foresight Exchange would make a really good Facebook game if the interface were updated, and claim creation were encouraged rather than discouraged.
Prediction markets can provide their participants good exercise in critically evaluating claims, while simultaneously quantifying the “wisdom of crowds” -- a more expensive and difficult job when a survey is needed. Zocalo and Idea Futures (a descendant of the software that runs FX) are two packages that could possibly be updated.
Related: TakeOnIt, a database of expert opinion by Ben Albahari introduced on LessWrong earlier this year.
Coordinated Efforts -- Fundraising and Otherwise
Money is [basically] a unit of caring. Sometimes it’s useful for people to donate toward something if and only if other people donate toward that cause. Websites like ChipIn and KickStarter can somewhat help in this sort of “coordination game.” The Point is a similar website with a less financial focus.
Scheduling Software
Time is almost invariably an element in a plan. It’s very rare that someone can overcome akrasia and procrastination, go do whatever needs to be done on the spot, and that’s the end of it. More often, something needs to be done every week, every day, every month, always in response to some environmental cue, randomly over a period of 3 years, in a particular sequence, at a particular time, when the resources become available, etc. There’s already been some discussion on Less Wrong of various approaches and issues of time management, but again, there hasn’t been much effort to critique these systems, find software implementations, and catalog them.
FWIW, I’ve been using a PHP script to randomly create schedules for myself for the last several weeks. While it’s still very crude, I’ve found that being able to just “adjust the dials” on how much I work on things on a week-by-week basis and make small changes to the auto-scheduler's output (currently a tab-delimited spreadsheet) is a lot better than having to come up with a complete schedule ex nihilo. The less time spent thinking about what you’re going to do, the more time you have to actually do it.
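The PHP script itself isn’t shown here, but as a rough illustration of the “adjust the dials” idea, a minimal Python sketch could look like the following; the projects, hours, and output format are invented for the example:

```python
import random

# Weekly "dials": hours to spend on each project (invented example values).
dials = {"writing": 6, "programming": 8, "exercise": 3, "reading": 4}
DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def make_schedule(dials, seed=None):
    """Scatter one-hour blocks for each project across random days of the week,
    returning (day, project) rows sorted by day."""
    rng = random.Random(seed)
    rows = [(rng.choice(DAYS), project)
            for project, hours in dials.items()
            for _ in range(hours)]
    return sorted(rows, key=lambda row: DAYS.index(row[0]))

# Emit a tab-delimited schedule that can be tweaked by hand in a spreadsheet.
for day, project in make_schedule(dials, seed=42):
    print(f"{day}\t{project}")
```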
Mentor Match and Practice Program Database
Based on some advice in Memetic Hazards in Videogames, I have been reading Talent is Overrated, which also ties well to the recent Shiny Distraction discussion. It suggests that well-designed practice is key to developing skills, and this is often easier if you have a mentor for feedback and hints on what you should be working on. This leads me to two programs that don’t currently exist, as far as I know, but that would be very useful -- first, a database of practice programs for developing various skills, and second, a mentor finder to pair up rationalists that want to learn things with those that already know them.
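Neither program exists yet as far as I know, so any code here is purely illustrative -- but the core of a mentor finder could start as simple skill matching. A minimal sketch with invented names and skills:

```python
# Hypothetical profiles: skills each person can teach and wants to learn.
people = {
    "Alice": {"teaches": {"ruby"}, "learns": {"calculus"}},
    "Bob":   {"teaches": {"calculus"}, "learns": {"ruby"}},
    "Carol": {"teaches": {"statistics"}, "learns": {"ruby"}},
}

def find_mentors(people):
    """Return (learner, mentor, skill) triples wherever someone who wants to
    learn a skill can be paired with someone who can teach it."""
    matches = []
    for learner, profile in people.items():
        for skill in profile["learns"]:
            for mentor, other in people.items():
                if mentor != learner and skill in other["teaches"]:
                    matches.append((learner, mentor, skill))
    return matches

print(find_mentors(people))
# [('Alice', 'Bob', 'calculus'), ('Bob', 'Alice', 'ruby'), ('Carol', 'Alice', 'ruby')]
```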
Less Wrong
In “Building Communities With Software,” Joel Spolsky discusses how various ways of setting up the software around an online community can affect its behavior. It’d be possible to create a rationalist community using nothing but IRC, but it’d be more difficult. Less Wrong itself (based on Reddit) certainly falls into the category of rationality power tools. For those not already aware, there is currently a debate on the appropriateness of “community” items like job postings and meetings on the front page. Following the principle that the things you want people to do on a website should be easier to do, we shouldn't simply bury these -- if we want people to meet in person and implement rationality in the workforce, we should put community items in full on the front page, if perhaps not with the same prominence as articles (in the sidebar?).
Where To Go From Here
I’ve created a page on the wiki* to catalog rationality power tools that currently exist and proposals to create new ones -- feel free to edit in links to programs you think fit and your ideas for new ones (perhaps with a link to an appropriate article or comment on the main site for discussion). My definition of “rationality power tools” above is a bit awkward, and I’d appreciate any refinement that helps us find or make such tools. For especially worthy projects, it may make sense to solicit bids for them, and then commission them, perhaps with Kickstarter. If you’re willing to take on a project and know some programming language, just do it.
*Actually, I will soon create a page on the wiki, as I’m ironically experiencing technical difficulties in doing so.
67 comments
Comments sorted by top scores.
comment by magfrump · 2010-09-19T09:11:09.811Z · LW(p) · GW(p)
This may not be an appropriate place for this suggestion, but it might be. It's related to a previous comment I made, to the mention of the job ads debate, and to the sidebar idea.
I would like to see various concrete projects that should be referenced often, like the akrasia tactics review or the various power tool projects, linked on the main site. A sidebar could contain something like a list of job ads relevant to the topics of the site, a link to the akrasia tactics review to help get everyone focused, and links to various power tools to help people pursue their goals effectively or find projects to contribute to.
These seem like topics that are very likely to be instrumentally useful to most users, and far less likely to be used or known of when they are not on the front page.
Replies from: atucker, curiousepic
↑ comment by atucker · 2010-09-19T17:07:37.539Z · LW(p) · GW(p)
I agree that anti-akrasia tactics should be a major part of most rationality power tool sets. Since I'm in high school, the vast majority of things that I do don't really hinge on the likelihood of specific things (politics, technologies, etc.) happening in the near future.
Akrasia, on the other hand, often gets in the way of things that I'm trying to do, and seems like a major roadblock in turning any thought into useful action.
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2010-09-22T09:34:06.859Z · LW(p) · GW(p)
I've wondered whether having akrasia of the time-killing variety as the central problem is characteristic of the folks at LW, and whether other groups would have more people who need to use thought to moderate high-energy impulsiveness.
↑ comment by curiousepic · 2010-09-20T21:37:41.737Z · LW(p) · GW(p)
This should be posted as a suggestion here: http://lesswrong.com/lw/1w4/fall_2010_meta_thread/2n73?c=1
comment by AdeleneDawner · 2010-09-19T08:48:14.297Z · LW(p) · GW(p)
Awesome.
My tech skills aren't of the type that seem likely to be useful to this, but I'll earmark $100 to donate to some part of this that looks interesting and needs money thrown at it.
comment by Vladimir_Golovin · 2010-09-19T09:13:04.079Z · LW(p) · GW(p)
I had an idea for a web-based app for evaluating instrumental rationality techniques -- something like a Digg- or UserVoice-style forum where techniques get upvoted, downvoted, merged, separated and discussed. However, I don't currently have a solution for the problem of 'impulse upvoting' ("hey, this technique sounds cool, let's upvote it!") -- I don't know how to make the upvotes reflect the long-term usefulness of the techniques.
We must also be aware of the PredictionBook problem. It isn't integrated into LW, therefore it gets little traffic, and, as a result, is underused -- as of this writing, it's not even mentioned in this thread. Lesson to learn: if we want the power tool to work, it must be properly integrated into LW (same login, homepage / sidebar widget, LW moderation policy to redirect relevant discussions there etc.)
Replies from: gwern, patrissimo, mattnewport
↑ comment by gwern · 2010-09-29T16:47:29.191Z · LW(p) · GW(p)
So, what suggestions do we have for integrating PB? I've been making and linking clear predictions I see on LW, but apparently that's not enough.
I doubt making every LW account a valid PB account would do the job alone, although it's a good start.
Perhaps there could be a LW extension which shows a random prediction in the sidebar? Or even just the X most recent predictions, similar to OB and the LW wiki.
Replies from: wedrifid, Vladimir_Golovin, Morendil
↑ comment by wedrifid · 2010-09-29T17:07:16.240Z · LW(p) · GW(p)
Make it a prediction market and let us bet karma. :)
Replies from: Morendil
↑ comment by Morendil · 2010-09-29T18:24:13.680Z · LW(p) · GW(p)
Please don't.
Replies from: gwern
↑ comment by Vladimir_Golovin · 2010-09-29T18:56:02.309Z · LW(p) · GW(p)
I doubt making every LW account a valid PB account would do the job alone, although it's a good start.
The main directions here are 1) to drive traffic to PB, and 2) to identify and remove the biggest trivial inconveniences. Having separate accounts is definitely near the top of the list.
X most recent predictions, similar to OB and the LW wiki.
Yes, this is exactly what I had in mind -- a sidebar thingy (is 'extension' the official term?) that serves three key functions:
- Gives new LW visitors an idea of what PB is.
- Reminds existing LW users that PB exists.
- Drives traffic to PB.
↑ comment by Morendil · 2010-09-29T18:34:22.659Z · LW(p) · GW(p)
I would be willing to follow a PB blog if it drew attention to things that are "good" to make predictions about, with testimonials from users. Don't just encourage people to make predictions, write about why that's a useful thing to do and the nuts and bolts of it.
I've been making private predictions about important work-related events, because I'd like to become better calibrated in this domain. At the moment, I suck at it.
Replies from: gwern
↑ comment by gwern · 2010-10-09T02:22:13.287Z · LW(p) · GW(p)
Hm. I would like to be able to write about useful things to do with PB, but Matthew tells me I'm 1 of 4 (or less) regular users, so there aren't many people to write such posts.
Further, my main goals with PB are to become more calibrated in general - which is almost impossible to give good examples for, by its very generality - and to keep a record of my predictions and politics for the future, which I can examine to see what I got wrong (entirely aside from how much I was over/underconfident), and which I also can't give any good examples for within the next few years/decades.
So I don't really know what I could do with your suggestions. (I do have a little project involving >100 anime-related predictions, but I don't know how well that will work out.)
↑ comment by patrissimo · 2010-09-29T15:54:28.310Z · LW(p) · GW(p)
What's wrong with the obvious solution of having the software downgrade or separately track instant upvotes, quiz users at occasional later intervals on what techniques actually worked for them over time, and weight those results more highly? Or display separate short/medium/long-term ratings so users can decide whether they want to read shiny things or work on long-term helpful things?
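As an illustration only (the vote types and weights below are arbitrary placeholders, not a worked-out design), the scoring could weight later "this actually worked for me" reports far more heavily than instant upvotes:

```python
# Arbitrary placeholder weights -- the point is only that follow-up reports
# ("this still works for me months later") dominate instant upvotes.
WEIGHTS = {"instant_upvote": 1, "worked_short_term": 3, "worked_long_term": 10}

def technique_score(votes):
    """votes maps a vote type to its count, e.g. gathered from periodic user quizzes."""
    return sum(WEIGHTS[kind] * count for kind, count in votes.items())

shiny = {"instant_upvote": 120, "worked_short_term": 5, "worked_long_term": 1}
boring_but_useful = {"instant_upvote": 15, "worked_short_term": 10, "worked_long_term": 12}

print(technique_score(shiny))              # 145
print(technique_score(boring_but_useful))  # 165
```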
↑ comment by mattnewport · 2010-09-19T09:57:05.480Z · LW(p) · GW(p)
My only PredictionBook prediction is about Less Wrong.
comment by whpearson · 2010-09-19T09:58:56.735Z · LW(p) · GW(p)
In October I will be writing a Ruby on Rails web app for experimenting with control markets to control a real-world organisation.
Simple control markets* are used in Learning Classifier Systems to make the system goal-oriented. I'm heavily modifying the idea, but the basic principle remains the same: control of parts of the system is based on bidding a currency which is gained from feedback. I'm curious to see if they will work in the real world, or whether aspects of human psychology will make them untenable.
The theory behind them needs to be tightened up a lot as well. What you need to do is prove that the evolutionarily stable strategies for your control markets are non-malicious. But that is hard for me at the moment.
*Warning - Neologism. I chose it for the allusion to prediction markets. Related: Agorics and market-based control.
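As a very rough, hypothetical sketch of the basic loop described above (agent names, bid rules, and the feedback function are all invented placeholders, not the actual design): agents bid currency for control of part of the system, the winner acts, and the feedback mechanism pays out.

```python
import random

# Invented agents and starting balances; the real design isn't specified here.
balances = {"agent_a": 100.0, "agent_b": 100.0, "agent_c": 100.0}

def feedback(action):
    """Stand-in for the organisation's real feedback mechanism."""
    return random.uniform(-5, 10)

for step in range(20):
    # Each agent bids some of its current currency for control this round.
    bids = {name: random.uniform(0, bal * 0.2) for name, bal in balances.items()}
    winner = max(bids, key=bids.get)

    # The winner pays its bid, acts, and earns whatever the feedback pays out.
    balances[winner] -= bids[winner]
    balances[winner] += feedback(f"action by {winner} at step {step}")

print(balances)  # agents whose actions please the feedback mechanism accumulate currency
```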
Edit: An article not designed for lesswrong can be found here; please put any notes on what you would like in a lesswrong article there as well.
Replies from: gwern, whpearson
↑ comment by gwern · 2010-12-22T22:33:59.782Z · LW(p) · GW(p)
How has progress been?
Replies from: whpearson
↑ comment by whpearson · 2010-12-23T12:31:04.113Z · LW(p) · GW(p)
Heh. I've switched track; I'm working on another idea: having code modified at load time to make exploiting it harder (due to return-oriented code not knowing where to exploit). That is going slowly, mainly due to having to understand other people's code and machine code, which I am rusty on. I've been meaning to contact the developers of as and the elf-loaders.
The reasoning behind switching track: it'll look better on my CV, it is easier to get people to adopt, and it may help reduce existential risk / world takeover of computers. It'll also take up less of my time when it is done, so it makes sense to do it first.
I'll get back on the web-app once that is done.
↑ comment by whpearson · 2010-09-19T11:06:39.683Z · LW(p) · GW(p)
If anyone is interested in playing a part in the experimental organisation, let me know. I'll write a post on it, once I have the software up and running.
Replies from: khafra, RobinZ
↑ comment by khafra · 2010-09-19T23:01:20.865Z · LW(p) · GW(p)
Since I identify politically as a futarchist, I can hardly turn down an opportunity to participate in a real world experiment. I'm looking forward to the full post.
Replies from: whpearson
↑ comment by whpearson · 2010-09-20T07:03:45.233Z · LW(p) · GW(p)
Thanks.
It is not strictly a futarchy (else I would have said so). No voting is involved anywhere, and the only predictions the market makes are of the expected ability of the organisation to please the feedback mechanism.
So it has a different range of expected benefits and costs than futarchy.
↑ comment by RobinZ · 2010-09-19T15:46:36.238Z · LW(p) · GW(p)
Have you considered a test vessel for a control market? It seems to me a system with a known scale for success - like a Food Chain player - would be useful.
Replies from: whpearson
↑ comment by whpearson · 2010-09-20T06:48:26.934Z · LW(p) · GW(p)
I've been thinking about that sort of test a bit. As the system is meant to control for incompetent/corrupt leadership, I doubt a group would do any better than the best normal players. So you would have to introduce corrupt players into the team for testing purposes. It doesn't lend itself to well-controlled testing.
comment by DSimon · 2010-09-20T19:39:50.191Z · LW(p) · GW(p)
I'd like to second the idea of a system for matching people who are willing to teach a particular skill with those who are interested in learning it. To throw out some ideas:
Reputation tracking a la eBay so that people who are good learners or good teachers will be recognized and easily distinguishable from people who make trouble.
One-on-one training, rather than one-to-many. The former is much easier for teachers who have domain experience but no education experience, and also allows the project to move more towards attempting novel things that haven't yet been done in standard classroom & online training sessions.
Reciprocal training: e.g. Alice teaches Bob how to program Ruby at the same time as Bob teaches Alice how to do integral calculus. I think this might help prevent a status difference from forming, and also create a motivation for people who prefer the teaching side over the learning side (or vice versa) to engage in both.
Training-related conversations (over IRC and the like) to be recorded on the website and accessible indefinitely to the people involved. Optionally the trainer and student could jointly choose to make those conversations publicly accessible, along with any supplemental documentation that was created or used (homework assignments or the like).
comment by djcb · 2010-09-19T15:35:54.454Z · LW(p) · GW(p)
I'm not sure these tools are really about rationality per se, but anyhow the LW-community seems to have many people who are interested in this kind of thing - myself included.
My personal favorite to bring to a Swiss army knife fight would be emacs and, in particular, org-mode, which keeps me sane in the daily maelstrom of information coming to my desk, in the form of emails, meeting notes, action points etc. org-mode can do clocking (to see how long I'm spending on tasks), all kinds of reminders, todo-lists and can even check which of my 'projects' currently don't have a NextAction defined (as per GTD). It can display all these things in many ways, say, only '@phone' items or only items I've been waiting for for more than a week.
It even integrates with my e-mail, and I have hooked it up with my web browser, so with a few clicks I can store interesting snippets (with a source link of course) for review later.
The downside? Well, you'll have to learn emacs first, which does many things differently from other programs. All can be customized by writing a little lisp to make it behave as you want, but I understand that could be a non-trivial barrier. Having crossed that barrier, I would say that it's worth it (bias alert).
Replies from: Relsqui, JenniferRM
↑ comment by JenniferRM · 2010-09-22T05:00:47.865Z · LW(p) · GW(p)
I appreciate this link/testimonial and am somewhat likely (50%?) to actually follow up on it. Thank you :-)
EDIT: Actually, considering inside/outside view and base rates and such, I should probably adjust that to more like 20%. In any case, it looks neat!
comment by Vladimir_Nesov · 2010-09-19T07:11:49.399Z · LW(p) · GW(p)
Uploading is still years to decades away.
Years? Are you insane?
Replies from: Snowyowl, Will_Newsome, Nic_Smith
↑ comment by Snowyowl · 2010-09-19T08:15:57.724Z · LW(p) · GW(p)
We'll just have to create an entry for it on a prediction market. Once someone gets a decent one going.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2010-09-19T08:41:08.379Z · LW(p) · GW(p)
Sounds like some kind of curiosity-stopping fallacy of gray: "All beliefs are equal before a prediction market; we'll just have to wait and see."
Replies from: Snowyowl
↑ comment by Will_Newsome · 2010-09-19T08:54:42.421Z · LW(p) · GW(p)
Most models I've contemplated indicate that uploading is not going to happen unless it's the result of FAI. (Neuromorphic uFAI would happen long before uploads, which are technically way more difficult, unless there's some serious governmental regulation going on that would take more than years to implement.) This then becomes a question of the conjunction of the probabilities that an FAI would 'upload' humans in any meaningful sense and that FAI will be developed in less than a decade. I would bet pretty heavily against that conjunction, but at any rate it's not really that fun of a prediction to speculate about, compared with its constituent parts individually.
Replies from: Emile
↑ comment by Emile · 2010-09-19T09:07:46.646Z · LW(p) · GW(p)
Huh, I got the opposite impression - that the timeline for brain emulation was less uncertain than the timeline for AI. Our brain-scanning capacity is getting better and better, and once the resolution is high enough to get individual neurons and their connections, we can "just" make a computer model of that, run it, et voila, you got brain emulation!
There are some difficulties, but these seem to require less of a conceptual breakthrough than AI or (especially) FAI do. It's possible that some of it is technically impossible (maybe we can't get the resolution needed to track individual dendrites), or that some bits of neuron interaction are trickier than we thought.
Replies from: Richard_Kennaway, Will_Newsome, timtyler
↑ comment by Richard_Kennaway · 2010-09-19T23:06:40.533Z · LW(p) · GW(p)
Our brain-scanning capacity is getting better and better, and once the resolution is high enough to get individual neurons and their connections, we can "just" make a computer model of that, run it, et voila, you got brain emulation!
Some simple organisms have had their entire brains completely mapped, yet as far as I know, no-one has done a whole-brain emulation of them. If anyone knows to the contrary I'd be interested in a reference, but if not, why not? If someone thinks they know how such a system works, then building a working model is the obvious test to perform.
Replies from: wedrifid
↑ comment by wedrifid · 2010-09-19T23:57:53.432Z · LW(p) · GW(p)
Some simple organisms have had their entire brains completely mapped, yet as far as I know, no-one has done a whole-brain emulation of them. If anyone knows to the contrary I'd be interested in a reference, but if not, why not? If someone thinks they know how such a system works, then building a working model is the obvious test to perform.
Having something mapped is not the same thing as knowing how it works.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2010-09-20T06:20:22.586Z · LW(p) · GW(p)
Having something mapped is not the same thing as knowing how it works.
Quite, but the comment I was replying to spoke only of mapping individual neurons and their connections. This has been done, and presumably a lot is known about what they do. But it appears not to be enough. In view of that, what will it really take to do a whole human brain emulation? Until it has been demonstrated in, say, C. elegans, it is so much moonshine.
Replies from: Douglas_Knight, wedrifid
↑ comment by Douglas_Knight · 2010-09-21T05:24:12.669Z · LW(p) · GW(p)
But it appears not to be enough.
I'm not convinced that anyone actually did any work towards the nematode upload project. Who would fund it? I did hear recently the claim that it was tried and failed, but I haven't seen any evidence. ETA: at Nick's link, David says that there hasn't been any work since 2001. The work I saw from 2001 looked just like a proposal. He also mentions (at github) a paper from 2005 that is relevant, but not, I think, simulation.
Until it has been demonstrated in, say, C. elegans, it is so much moonshine.
Just because Markram isn't doing the obvious thing doesn't mean he is a fraud. Funding agencies and journalists aren't suspicious, so there's no incentive to work on non-sexy projects. It should make you nervous that he might fool himself, but he might not; he certainly believes he has other checks.
I heard a rumor that there has been renewed interest in the nematode upload project, but I don't have a reference. ETA: this was probably what Nick links to.
Replies from: Nick_Tarleton
↑ comment by Nick_Tarleton · 2010-09-22T00:18:29.314Z · LW(p) · GW(p)
I heard a rumor that there has been renewed interest in the nematode upload project, but I don't have a reference.
↑ comment by wedrifid · 2010-09-20T07:46:07.529Z · LW(p) · GW(p)
But it appears not to be enough. In view of that, what will it really take to do a whole human brain emulation? Until it has been demonstrated in, say, C. elegans, it is so much moonshine.
A lot, I expect, and yes. And I expect all sorts of difficulties to work through in the first attempts (giving rise to both ethical and existential difficulties).
↑ comment by Will_Newsome · 2010-09-19T09:22:41.913Z · LW(p) · GW(p)
Huh, I got the opposite impression - that the timeline for brain emulation was less uncertain than the timeline for AI.
It is less uncertain, but be careful to distinguish between uploads and emulation. Emulation just takes being able to scan at a sufficient level to get something brain-like; uploading requires getting sufficient resolution to capture actual personalities and the like. It's intuitively probable that you can get dangerous neuromorphic AI via emulation before you can get a full emulation of a specific, previously in-the-flesh human that would count as an 'upload'. But I don't have a strong technical argument for that proposition. Perhaps the Whole Brain Emulation Roadmap (PDF) would have more to say.
Replies from: JohnDavidBustard, Emile
↑ comment by JohnDavidBustard · 2010-09-19T12:05:59.895Z · LW(p) · GW(p)
In terms of emulation, the resolution is currently good enough to identify molecules communicating across synapses. This enables an estimate of synapse strengths as well as a full wiring diagram of physical nerve shape. There are emulators for the electrical interactions of these systems. Also, our brains are robust enough that significant brain damage and major chemical alteration (ecstasy etc.) are recoverable from, so if anything brains are much more robust than electronics.
AI, in contrast, has real difficulty with anything but very specific problem areas, which rarely generalise. For example, we cannot get a robot to walk and run in a robust way (BigDog is a start but it will be a while before it's doing martial arts), we can't create a face recognition algorithm that matches human performance, and we can't even make a robotic arm that can dynamically stabilise an arbitrary weight (i.e. pick up a general object reliably). All our learning algorithms have human-tweaked parameters to achieve good results, and hardly any of them can perform online learning beyond the constrained, manually fed training data used to construct them. As a result there are very few commercial applications of AI that operate unaided (i.e. not as a specific tool equivalent to a word processor). I would love to imagine otherwise, but I don't understand where the confidence in AI performance is coming from. Does anyone even have a set of partial Turing-test-like steps that might lead to an AI (dangerous or otherwise)?
Replies from: jimrandomh, Will_Newsome
↑ comment by jimrandomh · 2010-09-19T14:27:37.049Z · LW(p) · GW(p)
For example, we cannot get a robot to walk and run in a robust way (BigDog is a start but it will be a while before its doing martial arts), we can't create a face recognition algorithm that matches human performance. We can't even make a robotic arm that can dynamically stabilise an arbitrary weight (i.e. pick up a general object reliably).
Two of these (walking/running, and stabilizing weights with a robotic arm) are at least partially hardware limitations, though. Human limbs can move in a much broader variety of ways, and provide a lot more data back through the sense of touch than robot limbs do. With comparable hardware, I think a narrow AI could probably do about as well as humans do.
Replies from: JohnDavidBustard
↑ comment by JohnDavidBustard · 2010-09-19T15:45:04.408Z · LW(p) · GW(p)
The real difficulty with both these control problems is that we lack a theory for how to ensure the stability of learning-based control systems. Systems that appear stable can self-destruct after a number of iterations. A number of engineering projects have attempted to incorporate learning; however, because of a few high-profile disasters, such systems are generally avoided.
Replies from: jimrandomh
↑ comment by jimrandomh · 2010-09-19T16:14:35.280Z · LW(p) · GW(p)
Clumsy humans have caused plenty of disasters, too. Matching human dexterity with human-quality hardware is not such a high bar.
Replies from: JohnDavidBustard
↑ comment by JohnDavidBustard · 2010-09-19T16:27:33.303Z · LW(p) · GW(p)
True; in fact, despite my comments I am optimistic about the potential for progress in some of these areas. I think one significant problem is the inability to collaborate on improving them. For example, research projects in robotics are hard to build on because replicating them requires building an equivalent robot, which is often impractical. RoboCup is a start, as at least it has common criteria to measure progress with. I think a standardised simulator would help (with challenges that can be solved and shared within it), but even more useful would be to create robot designs that could be printed with a 3D printer (plus some assembly, like Lego) so that progress could be rapidly shared. I realise this is much less capable than human machinery, but I feel there is a lot further to go with the software and AI side.
Replies from: None
↑ comment by [deleted] · 2010-09-20T02:24:10.933Z · LW(p) · GW(p)
I would use MakerBot instead, since the development trajectory is enhanced by thousands of interested MakerBot operators who can improve and build upgrades for the printer. The UP! 3D printer, on the other hand, is not open source and is a lot more expensive.
↑ comment by Will_Newsome · 2010-09-20T00:09:22.033Z · LW(p) · GW(p)
I'm confused. You're saying de novo AGI is harder than brain emulation. That's debatable (I'd rather not debate it on Less Wrong), but I don't see how it's a response to anything I said.
↑ comment by Emile · 2010-09-19T10:01:14.628Z · LW(p) · GW(p)
In my mind the distance between the resolution necessary to make something brain-like and functional, and the resolution necessary to make a perfect copy of the target brain is not very large - at least, not large enough to have a big difference in expected time of arrival.
By analogy to a computer: once you can scan and copy a computer well enough for the copy to function, you're not very far from being able to make a copy that's functionally equivalent.
Replies from: wedrifid, wedrifid
↑ comment by wedrifid · 2010-09-19T10:07:57.598Z · LW(p) · GW(p)
By analogy to a computer: once you can scan and copy a computer well enough for the copy to function, you're not very far from being able to make a copy that's functionally equivalent.
Bearing in mind that we created computers in such a way that copying is easy. And we created them digital and use checksums.
↑ comment by wedrifid · 2010-09-19T10:04:33.488Z · LW(p) · GW(p)
In my mind the distance between the resolution necessary to make something brain-like and functional, and the resolution necessary to make a perfect copy of the target brain is not very large - at least, not large enough to have a big difference in expected time of arrival.
Even given that the technology is being created by the same species that takes decades to weed out bugs in something as (relatively) trivial as a computer operating system?
↑ comment by timtyler · 2010-09-20T20:29:27.030Z · LW(p) · GW(p)
I got the opposite impression - that the timeline for brain emulation was less uncertain than the timeline for AI.
My understanding is that, if we can't find even a single shortcut, whole human brain emulation will produce machine intelligence... eventually. However, engineering-based approaches seem highly likely to beat that path by a considerable margin.
Aeroplanes are not scanned birds, submarines are not scanned fish, cars are not scanned horses - and so on, and so forth.
As far as I can tell, whole human brain emulation as a route to machine intelligence is an approach that is based almost entirely upon wishful thinking by philosophers.
comment by patrissimo · 2010-09-29T16:52:04.488Z · LW(p) · GW(p)
To build such tools, don't we need to know what techniques help increase rationality?
I suspect that tools built in the course of a directed rationality practice will be much more useful than those we come up with in advance.
That said, a website specifically structured to share rationality practices, as I discussed in my Shiny Distraction post, would be very useful. I could contribute content, but wouldn't have time to contribute code.
comment by Vladimir_Nesov · 2010-09-19T07:40:14.660Z · LW(p) · GW(p)
Currently, the only software listing on the wiki seems to be puzzle games and the only discussion I’m aware of for creating software in the community is the proposal for a Less Wrong video game.
See Debate tools (note to volunteers: this page is in need of improvement; it's terribly structured).
comment by djcb · 2010-09-19T15:14:09.199Z · LW(p) · GW(p)
IRC was mentioned -- it'd be nice to see an IRC-bot with knowledge about LW-related topics; other tools could be added as well, to help discussions.
[ After that it's only a small matter of programming to add recursive self-improvement to the bot and see where that leads.. :) ]
comment by Emile · 2010-09-19T08:24:09.101Z · LW(p) · GW(p)
I can probably help some - ages ago I had set up a few wiki-based experiments on debate, but they never did amount to much. I'm also pretty interested in facebook game design, I follow a few blogs on the subject, have stacks of design ideas in various notebooks, and once made a prototype for the company I work for (though flash and web development aren't my primary domain of expertise).
My track record of getting this kind of project done on my own is, um, imperfect, though. I tried helping with Less Wrong development a year or so ago, but after a week of fighting with configuration files, versions of the database library (was it MySQL? I think so), and GitHub, I put things on hold and haven't picked them up since. I can certainly understand when Eliezer and Michael Vassar complain about getting a lot of offers for help, but having very few of them follow through.
In the meantime, I did improve my skills in Python web development and in using git, so maybe things wouldn't seem so complicated if I tried again now.
comment by Relsqui · 2010-09-19T11:29:08.991Z · LW(p) · GW(p)
It's more of an awl than a power drill, but for gathering data on the fly, I've been using your.flowingdata. It's useful for tracking quantifiable information over time, and might work well in concert with other tools for e.g. fighting akrasia. There are other such sites--daytum is one--but yfd is the only one I've used personally.
comment by Houshalter · 2010-10-01T01:43:22.193Z · LW(p) · GW(p)
I imagine a kind of program that tracks everything you do on your computer, not personal information or anything, but just what kind of activities you are doing and how you are managing your time. Then some kind of function would determine how productive you are, maybe by manually inputting some measure of how much you got done that day, or just by analyzing where you spent your time, how much you were typing, etc. Then over time you could get some useful data about what makes you more or less productive. The program could then suggest when and where you should spend your time.
For example, watching YouTube videos is not productive (usually), but it may help you refresh your mind so you can get better performance at things that are. This kind of constant experimentation is exactly what I think is needed to generally optimize your time management. Doing it on my own just doesn't work: I don't experiment enough, something sounds like it should intuitively help so I do it, and I don't get any objective measure of how much it helped. I tend to make spontaneous decisions on how I should manage my time rather than carefully thought-out plans.
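As a rough sketch of the analysis step only (the activity log, ratings, and threshold below are invented), comparing where time went on productive vs. unproductive days might look something like this:

```python
from collections import defaultdict

# Invented log: (day, activity, minutes), plus a self-reported productivity score per day.
log = [
    (1, "coding", 240), (1, "youtube", 90), (1, "email", 30),
    (2, "coding", 120), (2, "youtube", 240), (2, "email", 60),
    (3, "coding", 300), (3, "youtube", 30), (3, "email", 45),
]
productivity = {1: 7, 2: 3, 3: 9}  # manual "how much did I get done today?" rating

def average_minutes_by_productivity(log, productivity, threshold=5):
    """Compare average minutes per activity on productive vs. unproductive days."""
    totals = defaultdict(lambda: {"good": [], "bad": []})
    for day, activity, minutes in log:
        bucket = "good" if productivity[day] >= threshold else "bad"
        totals[activity][bucket].append(minutes)
    return {activity: {bucket: sum(mins) / len(mins) if mins else 0
                       for bucket, mins in buckets.items()}
            for activity, buckets in totals.items()}

print(average_minutes_by_productivity(log, productivity))
```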
Anyway, this is beyond my skills to program at the moment, but does this sound plausible to anyone, and if so, do you have any suggestions?
comment by [deleted] · 2010-09-29T17:30:19.626Z · LW(p) · GW(p)
I would like a tool that helps me manage my goals and my tasks, as well as information relevant to them.
It could be something like a hierarchical list: supergoals would be at the top, subgoals would inherit from some supergoal, and tasks (things to do now) would be at the bottom, inheriting from some subgoal. Additionally, each (super/sub)goal could have attached information concerning achievement criteria, possible strategies, current strategy, etc. My hope is that this system helps me keep my tasks connected to my goals and aids me in becoming more strategic.
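A minimal sketch of that hierarchy (the goals, fields, and tasks are invented placeholders): supergoals contain subgoals, subgoals contain tasks, and each goal carries its own criteria and strategy notes.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    achievement_criteria: str = ""
    current_strategy: str = ""
    subgoals: list = field(default_factory=list)   # nested Goal objects
    tasks: list = field(default_factory=list)      # concrete things to do now

# Invented example hierarchy.
career = Goal("Improve career prospects",
              achievement_criteria="new role within 18 months",
              current_strategy="build a public portfolio")
learn_stats = Goal("Learn statistics",
                   current_strategy="work through a textbook",
                   tasks=["read chapter 3", "do exercise set 2"])
career.subgoals.append(learn_stats)

def print_tree(goal, indent=0):
    """Walk the hierarchy so tasks are always shown under the goal they serve."""
    print("  " * indent + goal.name)
    for task in goal.tasks:
        print("  " * (indent + 1) + "- " + task)
    for sub in goal.subgoals:
        print_tree(sub, indent + 1)

print_tree(career)
```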
I'm currently using Google Tasks for the hierarchical list part, but there isn't a nice way to integrate relevant information. I haven't looked into it much yet but Goalbot might be a step in this direction.
I would be happy to donate money for development.
Replies from: mattnewport
↑ comment by mattnewport · 2010-09-29T17:43:26.804Z · LW(p) · GW(p)
I would be happy to donate money for development.
There are a lot of tools out there for task/goal tracking. I'd suggest spending some time researching them before thinking about developing a new one. Beware of falling into the trap of chasing after the elusive perfect system rather than just getting into the habit of using something good enough.
I quite like Remember The Milk, but my main problem is still getting into the habit of using it consistently. It's pretty flexible in terms of attaching extra information to items. It's not hierarchical, but I suspect that's overrated anyway. It does support a very flexible tagging and 'smart lists' system which is better than a hierarchy in some ways.
comment by lsparrish · 2010-09-22T00:02:50.964Z · LW(p) · GW(p)
Has anyone had much experience with collaborative writing software like EtherPad? I'm currently creating an article using this. While similar to revision-control systems (like github, etc.), it lets you watch people type in real time, and has a chat-bar on the side for authors to communicate.