Sorry, should've been more clear.
I've started work on a rudimentary play-money binary prediction market using LMSR in Django (still very much incomplete, PM me for a link if you'd like), and my present interface is one of buying and selling shares, which isn't very user-friendly.
With a "changing the price" interface that Hanson details in his paper, accurate participants can easily lose all their wealth on predictions that they're moderately confident in, depending on their starting wealth. If I have it so agents can always bet, then the wealth accumulation in accurate predictors won't happen and the market won't actually learn which agents are more accurate.
With an automated Kelly interface, it seems that participants should be able to input only their probability estimates, and either change the price to what they believe it to be if the cost is less than Kelly, or it would find a price which matches the Kelly criterion, so that agents with poorer predictive ability can keep playing and learn to do better, and agents with better predictive ability accumulate more wealth and contribute more to the predictions.
However, I'm uncertain as to whether a) the markets would be as accurate as if I used a conventional "changing the price" interface (since it seems we're applying log utility twice), and b) whether I can find the Kelly criterion for this, with a probability estimate being the only user input and the rest calculated from data about the market, the user's balance, etc.
Does it make sense to apply the Kelly Criterion to Hanson's LMSR? It seems to intuitively, but my math skills are too weak.
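To make the question concrete, here's a rough Python sketch (Python since the project is in Django) of the LMSR cost function plus a brute-force search for the purchase size that maximizes expected log wealth, which is the Kelly answer when the price moves as you buy. The liquidity parameter `b`, the outstanding share counts, and the wealth figure are all made-up illustration values, not anything from a real implementation.

```python
import math

def lmsr_cost(q_yes, q_no, b):
    """LMSR cost function C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def price_yes(q_yes, q_no, b):
    """Instantaneous price of a YES share under LMSR."""
    e_yes = math.exp(q_yes / b)
    return e_yes / (e_yes + math.exp(q_no / b))

def kelly_yes_shares(p, q_yes, q_no, b, wealth, step=0.01):
    """Grid-search the number of YES shares that maximizes expected log
    wealth for an agent who believes the true probability is p."""
    best_n, best_u = 0.0, math.log(wealth)  # n = 0 baseline
    n = step
    while True:
        cost = lmsr_cost(q_yes + n, q_no, b) - lmsr_cost(q_yes, q_no, b)
        if cost >= wealth:  # can't afford more; stop searching
            break
        # Win: wealth - cost + n shares paying 1 each. Lose: wealth - cost.
        u = p * math.log(wealth - cost + n) + (1 - p) * math.log(wealth - cost)
        if u > best_u:
            best_n, best_u = n, u
        n += step
    return best_n
```

One thing this sketch makes visible: with finite wealth, the log-optimal purchase generally stops short of moving the market price all the way to p, which seems related to the "log utility twice" worry.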
So I've kind of formulated a possible way to use markets to predict quantiles. It seems quite flawed looking back on it two and a half weeks later, but I still think it might be an interesting line of inquiry.
This doesn't always apply. It can, for example, leave you with an hour to kill at a train station, because you decided it would be really embarrassing to show up late for your ride to a CFAR workshop because of the planning fallacy.
Shorter posts when you're starting is a step in the right direction.
What could you do to make reading alone more pleasant, without a trade-off in productivity?
System 1 is the intuitive one, system 2 is the formal reasoning.
"If it's yellow let it mellow, if it's brown flush it down."
This is one of the first things I remember learning, growing up with tank water.
I'm not sure I can visualize that very well?
I've started a blog, and I'm kind of unreasonably shy about it. Especially given that it's, you know, a blog.
I'm looking for a simple and aesthetic symbol for humanism and humanity: from our ancestors looking at the stars and wondering why, telling each other stories, and caring for each other in the distant past, to the invention of agriculture, democracy, and civilization, the Enlightenment and the Renaissance, the improvement of the human condition, and technology and knowledge and truth.
I think some of you know what I mean. Humanism Pt. 3 style chills.
Ideas I've thought of: hands, sails, brains, seeds, eyes, sprouts, flames. I was looking at getting symbols of both Apollo and Dionysus, but Dionysus in particular doesn't have anything particularly minimalist. An outline of a human isn't connected strongly enough to the ideal I want to symbolize. The typical Happy Human is ugly. The "h+" thing is too narrow, and not visual enough.
EDIT:
The most appealing idea for me currently is a small sprout with a candle flame in between two or four leaves, inspired by the image and story here, maybe with the roots as obvious analogues of neurons. What do you all think of that?
Perhaps digests of the most-upvoted posts in a particular time period? Top from week x, top from month y, top from whichever time period? People can archive-binge to the degree that they find most comfortable.
Stuff I learned at the Melbourne CFAR workshop. Class name was offline habit training, i.e. actually performing your habit multiple times in a row, in response to the trigger. Salient examples: Practicing getting out of bed in response to your alarm, practice walking in the door and putting your keys where they belong, practice putting your hands on your lap when about to bite nails, practice straightening your neck when you notice you're hunched. These are all examples I've implemented, and I have had good results.
Adding associations is a key part, too. For these examples, I imagine the alarm as an air raid siren and my house getting bombed if I don't get out of bed on time. I imagine Butch being shot by Vincent in an alternate version of Pulp Fiction where his father's watch wasn't on the little kangaroo and he had to hunt around for it. For biting my nails, I imagine Mia Wallace being stabbed in the heart. The connection here is that biting nails can make you sick; the vividness and intensity make up for how tenuous that is. For posture, I imagine Gandalf the Grey compared to Gandalf the White (plus triumphant LoTR music).
Since I made that comment, I got about a third of the way through Moonwalking With Einstein, and practiced the Memory Palace/method of loci a couple of times. I've lived in a bunch of different houses, so that works pretty well for me. Some of the stuff that was mentioned sounds a lot like spacing techniques: "[...] if you revisit the journey through your memory palace later this evening, and again tomorrow afternoon, and perhaps again a week from now, this list will leave a truly lasting impression."
This is another bit of evidence suggesting that spaced repetition would be powerful in combination with mnemonics. What Anki provides, which is far more important than the flashcard thing, is testing. I've been thinking about applying some of the ideas from test-driven development to self-programming, and Anki cards would be a core part of that.
Sorry, I realize most of that isn't relevant, but I hope the parts that were are useful.
Anki is very extensible. I think writing easy-to-use Anki plugins would be a great way to practice coding and get some useful stuff out there. In fact, I'm adding that to my list of things to look into.
Anki is good for trigger -> response sorts of memorization, but requires a bit of hacking for other things. Combining mnemonics with spaced repetition, I've heard, is ridiculously powerful. I've got a card with three sides, Trigger, Association, and Response, to try and strengthen the trigger -> response bond. I've set it up so I've got Trigger -> Response, Association -> Response, Trigger -> Association and Trigger -> Association and Response cards. If anyone wants me to share this format, I'm happy to do so.
ETA: Combining this with habit-training techniques is, I predict, potentially powerful.
How well does operant conditioning work where there's a perceived causal link compared to when there is not?
I have a Big List of Things To Try, or BLoTTT, because everything I do has to have a tacky self-helpy name even if I make it up myself. Lately I've just been, you know, trying them. It seems obvious, but it's easy to make this list and not do anything with it because you're always too busy or focused on something else or whatever. But really, it took two minutes to install f.lux and f.lux is awesome.
So is:
Boomerang
Anki
Evernote
Pomodoro
Sunlight
IFTTT
Not so awesome (for me):
Rails
Napping
Large amounts of caffeine
But I learned!
The Rails tutorial I started introduced me to TDD. TDD is great, so I'm learning to apply it to Django.
Easier to appreciate proper sleep now.
Low doses of caffeine are also great, and as yet it's nowhere near as addictive for me as it seems to be for other people. Still on a 1 day on, 2 days off cycle, to be safe.
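On the TDD point above: the red-green loop is framework-agnostic, so it's easy to practice outside Rails or Django. A minimal sketch with Python's stdlib unittest; the `slugify` function is a made-up example, not anything from the project:

```python
import unittest

# TDD order: the test below was written first (red), then slugify was
# written as the minimal code needed to make it pass (green).
def slugify(title):
    """Lowercase a title and replace spaces with hyphens."""
    return title.lower().replace(" ", "-")

class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

if __name__ == "__main__":
    unittest.main()
```

Django's own test runner wraps this same unittest machinery, so the habit transfers directly.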
Erm, the monetary system is generally a pretty efficient way to get anything done. Things like division of labour and comparative advantage are pretty handy when it comes to charity too.
What are the options for free MOOC platforms these days? Moodle's the only one that comes to mind, and it's not optimized for MOOCs.
How do you plan to measure focus? Just subjective effects, or are you using QuantifiedMind, or pomodoro success rate, or something?
More meetup posts clutter Discussion (which is kinda bad) but mean that people are actually going to meetup groups (which is kinda awesome). Maybe frame a meetup post not as a trivial inconvenience, but evidence that rationalists are meeting in person and having cool discussions and working on their lives instead of hanging around in Less Wrong.
When there's a lot of interesting content here, sometimes people ask why we're all sticking around talking about talking about rationality instead of doing stuff out in the world.
Point, but I did suggest several ways in which this could be encouraged (pinned threads, different stated lifespans, shared use of the Latest Open Thread feed).
Reducing the visibility of the new threads could help too.
How about overlapping thread lifespans? This way when a new thread is created, recent comments on the previous thread won't go unread, and discussion can still happen there. A thread on Monday that lasts a week and a thread on Thursday does too, for example, with both threads pinned to the top and included under the Latest Open Thread feed on the side. I suspect this would be easier to implement than your second option. It's more difficult to implement than your first and third options, though.
If I live forever, through cryonics or a positive intelligence explosion before my death, I'd like to have a lot of people to hang around with. Additionally, the people you'd be helping through EA aren't the people who are fucking up the world at the moment. Plus there isn't really anything directly important to me outside of humanity.
Parasite removal refers to removing literal parasites from people in the third world, as an example of one of the effective charitable causes you could donate to.
I can't speak for you, but I would hugely prefer for humanity to not wipe itself out, and even if it seems relatively likely at times, I still think it's worth the effort to prevent it.
If you think existential risks are a higher priority than parasite removal, maybe you should focus your efforts on those instead.
Implicit-association tests are handy for identifying things you might not be willing to admit to yourself.
Once EA is a popular enough movement that this begins to become an issue, I expect communication and coordination will be a better answer than treating this like a one-shot problem. Maybe we'll end up with meta-charities as the equivalent of index funds, that diversify altruism to worthy causes without saturating any given one. Maybe the equivalent of GiveWell.org at the time will include estimated funding gaps for their recommended charities, and track the progress, automatically sorting based on which has the largest funding gap and the greatest benefit.
I doubt it will ever make sense for individuals to be personally choosing, ranking, and donating their own money to charities as if they're choosing the ratios for everyone TDT-style, not least because of the unnecessary redundancy.
EDIT: Upvoted because it is a valid concern. The AMF reached saturation relatively quickly, and may have exceeded the funding it needed. I just disagree with the efficiency of this particular solution to the problem.
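The index-fund-style sorting described above could be as simple as this sketch; all the numbers, names, and field choices are invented for illustration, not real GiveWell data:

```python
# Hypothetical charity records: estimated benefit per dollar and how much
# of the funding goal has been raised so far.
charities = [
    {"name": "A", "benefit_per_dollar": 3.0, "goal": 100_000, "raised": 90_000},
    {"name": "B", "benefit_per_dollar": 2.5, "goal": 500_000, "raised": 100_000},
    {"name": "C", "benefit_per_dollar": 4.0, "goal": 50_000, "raised": 50_000},
]

def priority(charity):
    """Crude score: remaining funding gap weighted by impact per dollar."""
    gap = max(charity["goal"] - charity["raised"], 0)
    return gap * charity["benefit_per_dollar"]

# Saturated charities (gap = 0) automatically fall to the bottom.
ranked = sorted(charities, key=priority, reverse=True)
```

A real meta-charity would need much better gap estimates, but the mechanism itself is just a sort.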
I would assume that it's considered worse than death by some because with death it's easier to ignore the opportunity cost. Wireheading makes that cost clearer, which also explains why it's considered negative compared to potential alternatives.
I used to read a lot in class, and the teachers didn't care because they were focused on teaching students that needed more help. I had a calculator I played with, and found things like 1111^2 = 1234321, and tried to understand these patterns. I discovered the Collatz Conjecture this way, began to learn about exponential functions, etc.
I also learned to draw probability trees from an explanation of the Monty Hall problem I read once, and I think learning that at a young age helped Bayesianism feel intuitive later on, and it was a fun thing to learn.
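The answer the probability tree gives for Monty Hall can also be checked by simulation; this is just an illustrative sketch, not part of the explanation I originally read:

```python
import random

def monty_hall(switch, trials=100_000):
    """Simulate Monty Hall; return the empirical win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += pick == car
    return wins / trials
```

Switching wins about 2/3 of the time, matching the tree's answer.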
Second the Anki recommendation, but I'm not sure it's the most fun thing.
Writing fiction was something I enjoyed too, and improved my communication skills.
It's highly relevant to your second point.
Newcomb-like problems are the ones where TDT outperforms CDT. If you consider these problems to be impossible, and won't change your mind, then you can't believe that TDT satisfies your requirements.
Currently working on a Django app to create directed acyclic graphs, intended to be used as dependency graphs. It should be accessible enough for regular consumers, and I plan to extend it to support to-do lists and curriculum mapping.
I need to work on my JavaScript skills. The back-end structure is easy enough, but organising how the graphs are displayed and such is proving more challenging, as well as trying to make a responsive interface for editing graphs.
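The core back-end invariant here is acyclicity. One minimal way to enforce it, sketched independently of Django (the function name and edge format are made up for illustration), is to reject a new edge whenever its target can already reach its source:

```python
def would_create_cycle(edges, new_edge):
    """Return True if adding new_edge = (src, dst) to the DAG given as a
    list of (src, dst) pairs would create a cycle."""
    src, dst = new_edge
    adjacency = {}
    for a, b in edges:
        adjacency.setdefault(a, []).append(b)
    # A cycle appears iff src is already reachable from dst. DFS from dst.
    stack, seen = [dst], set()
    while stack:
        node = stack.pop()
        if node == src:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(adjacency.get(node, []))
    return False
```

Running this check in the view (or model save) before persisting an edge keeps the graph a DAG regardless of what the JavaScript front end sends.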
TDT performs exactly as well as CDT on the class of problems CDT can deal with, because for those problems it essentially is CDT. So in practice you just use normal CDT algorithms except for when counterfactual copies of yourself are involved. Which is what TDT does.
Yes, it's a Newcomb-like problem. Anything where one agent predicts another is. People predict other people, with varying degrees of success, in the real world. Ignoring that when looking at decision theories seems silly to me.
Didn't the paper show TDT performing better than CDT in Parfit's Hitchhiker?
This is essentially what the TDT paper argues. It's been a while since I've read it, but at the time I remember being sufficiently convinced that it was strictly superior to both CDT and EDT in the class of problems that those theories work with, including problems that reflect real life.
Can blackmail kinds of information be compared to things like NashX or Mutually Assured Destruction usefully?
Most of my friends have information on me which I wouldn't want to get out, and vice versa. This means we can do favours for each other that pay off asynchronously, or trust each other with other things that seem less valuable than that information. Building a friendship seems to be based on gradually getting this information on each other, without either of us having significantly more on one than the other.
I don't think this is particularly original, but it seems a pretty elegant idea and might have some clues for blackmail resolution.
This is a very double-edged sword, for me at least. I'm inclined to change options so many times I never actually complete a solution.
Foc.us is a commercially available tDCS system marketed to gamers, and at a price that is almost affordable, depending on the actual benefits of the device. Does anyone here have experience, expertise, or any other insight with regards to this?
EY, you are one thousand times worse than Joss Whedon.
The theoretical microeconomics view is the one that claims:
After all, if there is unemployment, wages should fall, making it more attractive to hire workers. Therefore the equilibrium should be that everyone who wanted to work at the wages available should work. And this is not only an equilibrium, but an attractor: free-floating wages should move the economy towards the equilibrium.
Point. I imagine that increased speed will not be the most cost-effective way to turn money into political influence, however. There are plenty of ways to do that already, and unless it's cheaper than other alternatives it won't make much of a difference.
If an em is running at 10x speed, do they get 10x the voting power, since someone being in power for the next 4 years will be 40 subjective years for them?
One vote for one person already seems suboptimal, given that not everybody has equal decision-making capabilities, or will experience the costs and benefits of a policy to the same degree. Of course, if we started discriminating with voting power incautiously it could easily lead to greater levels of corruption.
Solving the decision-making balance could be done with prediction markets on the effects of different policies, a la futarchy, but that doesn't solve the other part of the problem. If we're assuming prediction markets will be used for policy selection, the "voting on values" part still needs fixing. I don't have any ideas on that, so we're kind of back where we started.
Is there any particular reason an AI wouldn't be able to self-modify with regards to its prior/algorithm for deciding prior probabilities? A basic Solomonoff prior should include a non-negligible chance that it itself isn't perfect for finding priors, if I'm not mistaken. That doesn't answer the question as such, but it isn't obvious to me that it's necessary to answer this one to develop a Friendly AI.
Eliezer's first post on Overcoming Bias was, as far as I know, The Martial Art of Rationality. I think that title works well to set the tone.
Not the right term for what's happening. Deflationary spiral refers to low demand reducing prices, which reduces production, which reduces the employment rate/average wage, which reduces demand. The bitcoin economy is not large enough for this to be the case. Rather, it appears to be a speculative bubble, where people predict the price will go up, so more people buy it, and so the price goes up, etc. Then enough people at once go "this is as far as the train's going" and everybody panics and tries to sell and the price crashes.
Since bitcoin is a currency experiencing deflation due to a cyclic process, "deflationary spiral" would sort of make sense if it didn't already refer to another specific phenomenon.
This sounds reasonable. I'm guessing bodybuilding programs are more controversial than Starting Strength. Or is there a clear winner there too?
Thanks for the informative comment.
Is SS for looking good, or for practical strength? I know they correlate, but optimizing for one doesn't necessarily mean optimizing for the other.