One thing that surprised me when looking at the data is that omnivores appear to have done slightly better at getting the answers 'right' (as determined by a simple greater-than-or-less-than-50% comparison). I would have thought the vegetarians would do better, since they would be more familiar with the in-group terminology. That said, I have no clue whether the numbers are even significant given the size of the group, so I wouldn't read too much into it. (Apologies in advance for the awful formatting.)
Number 'correct':  1  2  3  4  5  6  7  8  9  10 | Total
Omnivore           1  0  1  5  3  8  7  3  0   1 |    29
Vegetarian         0  0  2  1  5  4  2  0  0   0 |    14
Agreed. I really wish that there was a site like webMD that actually included rates of the diseases and the symptoms. I don't think it would be a big step to go from there to something that would actually propose cost-effective tests for you based on your symptoms.
e.g. You select sore-throat and fever as symptoms and it says that out of people with those symptoms, 70% have a cold, 25% have a strep infection and 5% have something else (these numbers are completely made up). An even better system would then look at which tests you could do to better nail down the probabilities, which could be as simple as asking some questions like "Do you have any visible rashes?" or asking for test results like a quick strep test.
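A toy sketch of what such a lookup might compute, using a tiny invented case database (all symptoms, diagnoses, and counts here are made up for illustration, just like the percentages above):

```python
from collections import Counter

# Hypothetical case records: (symptoms reported, final diagnosis).
cases = [
    ({"sore throat", "fever"}, "cold"),
    ({"sore throat", "fever"}, "cold"),
    ({"sore throat", "fever"}, "strep"),
    ({"sore throat"}, "cold"),
    ({"fever", "rash"}, "other"),
]

def disease_rates(symptoms):
    """Among cases matching all the given symptoms, return the
    frequency of each final diagnosis."""
    matches = [dx for (s, dx) in cases if symptoms <= s]
    counts = Counter(matches)
    total = len(matches)
    return {dx: n / total for dx, n in counts.items()}

print(disease_rates({"sore throat", "fever"}))
```

The "better system" step would then rank candidate tests or questions by how much they shift these frequencies, but even this bare lookup would be an improvement over symptom lists with no rates attached.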
I'm probably way late to this thread, but I was thinking about this the other day in the response to a different thread, and considered using the Kelly Criterion to address something like Pascal's Mugging.
Trying to figure out your current 'bankroll' in terms of utility is probably open to interpretation, but for some broad estimates you could probably use your assets, or your expected free time, or some utility function that includes those plus whatever else.
When calculating optimal bet size using the Kelly criterion, you end up with a percentage of your current bankroll you should bet. This percentage will never exceed the probability of the event occurring, regardless of the size of the reward. This basically means that if I'm using my current net worth as an approximation for my 'bankroll', I shouldn't even consider betting a dollar on something I think has a one-in-a-million chance, unless my net worth is at least a million dollars.
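As a sketch, here is the standard Kelly formula for a binary bet at net odds b:1 (nothing specific to Pascal's Mugging, just the textbook version of the rule described above):

```python
def kelly_fraction(p, b):
    """Kelly-optimal fraction of bankroll to stake, given win
    probability p and net odds b:1 (win b dollars per dollar bet)."""
    return p - (1 - p) / b

# The fraction approaches p as the payout b grows, but never exceeds
# it. So a one-in-a-million bet never warrants staking more than
# one-millionth of your bankroll, no matter how huge the promised
# reward: with a $1M 'bankroll', that's at most about a dollar.
assert kelly_fraction(1e-6, 1e12) < 1e-6
```

The mugging-resistant property is exactly that the reward size b only pushes the fraction up toward p, never past it, so astronomical promised payoffs stop mattering once b is large.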
I think this could be a bit more formalized, but might help serve as a rule-of-thumb for evaluating Pascal's Wager type scenarios.
One easy way I can think of to game such a test is to figure out ahead of time which questions will be on it, look up the answers for just those questions, and parrot them on the actual test.
I know at my college, there were databases of just about every professor's exams for the past several years. Most of them re-used enough questions that you could get a pretty good idea of what was going to be on the exams, just by looking at past exams. A lot of people would spend a lot of time studying old exams to game this process instead of actually learning the material.
Sounds somewhat like the 'gay uncle' theory, where having four of your siblings' kids pass on their genes is equivalent to having two of your own do so, but with future pairings included, which is interesting.
Stephen Baxter wrote a couple of novels that explored the first theory a bit in his Destiny's Children series, where gur pbybal riraghnyyl ribyirq vagb n uvir, jvgu rirelbar fhccbegvat n tebhc bs dhrraf gung gurl jrer eryngrq gb.
The addition of future contributors to the bloodline as part of your utility function could make this really interesting if set in a society that has arranged marriages and/or engagement contracts, as one arranged marriage could completely change the outcome of some deal. Though I guess this is how a ton of history played out anyway, just not quite as explicitly.
For story purposes, using a multi-tiered variant of utilitarianism based on social distance could lead to some interesting results. If the character were to calculate his utility function for a given being by something like Calculated Utility = Utility / (Degrees of Separation from me)^2, it would be really easy to calculate, yet come close to what people really use. The interesting part from a fictional standpoint could be if your character rigidly adheres to this function, such that you can manipulate your utility in their eyes by becoming friends with their friends. (e.g. The utility for me to give a random stranger $10 is 0 (assuming infinite degrees of separation), but if they told me they were my sister's friend, it may have a utility of $10/(2)^2, or $2.50.) It could be fun to play around with the hero's mind by manipulating the social web.
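The discounting rule is simple enough to write down directly. A minimal sketch (the infinite-distance convention for strangers is my assumption, and distance 0, i.e. yourself, would need its own convention to avoid dividing by zero):

```python
def discounted_utility(raw_utility, degrees_of_separation):
    """Inverse-square social-distance discount: a benefit to someone
    k social hops away counts as raw_utility / k**2 to the agent.
    Use float('inf') for a total stranger (discounts to zero)."""
    if degrees_of_separation == float("inf"):
        return 0.0
    return raw_utility / degrees_of_separation ** 2

print(discounted_utility(10, float("inf")))  # random stranger: 0.0
print(discounted_utility(10, 2))             # sister's friend: 2.5
```

The manipulation exploit in the story falls straight out of the formula: introducing one mutual friend changes the denominator from infinity to 4.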
I didn't look for an extension, but there are definitely a few webpages that will do it for you. For example, your post:
ðə ɪntərnæʃənəl fənɛtɪk ælfəbɛt wəz ərɪdʒənəli mɛnt tu bi juzd æz ə nætʃərəl læŋgwədʒ rajtɪŋ sɪstəm ( fɔr ɪgzæmpəl, ðə dʒərnəl əv ðə ɪntərnæʃənəl fənɛtɪk əsosieʃən wəz ərɪdʒənəli rɪtən ɪn ajpie: èʧtitipí:// fənɛtɪk- blɒg. blogspot. kɑm/ 2012/ 06/ 100- jɪrz- əgo. eʧtiɛmɛl). bətwin ipa|s θiərɛtɪkəl ( fɪziəlɑdʒɪkəl) grawndɪŋ, ɪts wajd jus baj lɪŋgwəsts, ænd ɪts nɪr- lɛdʒəbɪləti baj əntrend ɪŋglɪʃ lɪtərɑti, ajpie ɪz ovər- dətərmənd æz ðə ɑbviəs tʃɔjs fɔr ə rəfɔrmd ɒrθɑgrəfi, ɪf ɪŋglɪʃ wər ɛvəri med tu kənfɔrm fənɛtɪkli tu ə stændərd pronənsieʃən. ðæt sɛd, ɪts nɑt goɪŋ tu hæpən, bɪkɒz spɛlɪŋ rəfɔrm ɪz nɑt ərdʒənt tu ɛniwən wɪθ kæpətəl tu traj ɪt. lajk, səmwən kʊd mek ə brawzər ɪkstɛnʃən ðæt wʊd riples wərdz ðɛr ajpie spɛlɪŋz, so ðæt æn ɒnlɑjn kəmjunəti kʊd fəmɪljərɑjz ðɛmsɛlvz wɪθ ðə nu spɛlɪŋ, bət no wən hæz med ðæt, ɔr ped fɔr ɪt tu bi med, ænd ðɪs plesəz ə strɒŋ əpər bawnd ɑn haw mətʃ ɛniwən kɛrz əbawt spɛlɪŋ rəfɔrm.
(Though the url got really garbled.)
Since this is a crazy ideas thread, I'll tag on the following thought. If you believe that in the future we may be able to make ems, and that we should include them in our moral calculus, should we also be careful not to imagine people in bad situations? By doing so, we may be making a very low-level simulation of that person in our own mind, one that may or may not have some consciousness. If you don't believe that's the case now, how does it scale as we start augmenting our minds with ever-more-powerful computer interfaces? Is there ever a point where it becomes immoral just to think of something?
Depends a ton on where you go and what you major in. PayScale has a ranking of a ton of colleges based on graduates' earnings over 20 years compared to 24 years of earnings for people with only a high-school degree (the extra four years accounting for the time spent in college). There are probably some special cases at the tails that would benefit more from not going to college, but for the average college-goer, it is still probably a halfway decent investment.
Sure, the first point is why I think it will work. As for the second, sure, it may not be 100% accurate, but it would be better than nothing, and even negative information could be useful. (e.g. Person X did not have their phone on during the robbery, but otherwise normally has it on them 100% of the time.) I agree it's not an ideal solution, just something that might help a little.
I'm surprised no one has pushed through a cell-phone tracking app as a replacement for ankle monitors. Sure, it's not as secure, and the phone may be left somewhere/forgotten/etc., but if you included it as a condition for parole/probation, you could probably get pretty high usage rates, with little added cost and annoyance.
For more clarification, I was thinking this over when considering rental properties in my area. A lot of people have complained that it is near impossible to make a profit on a rental property where I live. I think a lot of that is because there is a huge chunk of people who have bought property as an investment based on potential appreciation instead of based on cash flow. If your model uses only cash flow, but another model has a 5% appreciation of principal built into it, it is going to be near impossible to compete with them on rates. However, what you also see is a ton of people looking to sell their rental properties after the appreciation they were expecting never occurs (potentially due to the glut of properties on the market from other people doing the same thing). The people exiting the market take a loss, and the people who could have actually made a profit on a cash-flow model can never get a renter to begin with, due to overly competitive rates. However, since new people are willing to invest under the same assumptions as the old investors (imperfect information), the market keeps going strong, and renters can take advantage of cheap rates, though no one is really profiting from it. That's the type of scenario you can get with an overly competitive marketplace filled with imperfect actors.
Edit: I'm sure people are making money from rentals in my area, or there wouldn't be so much of it. I'm just also sure that a lot of people are losing a ton of money from it, and driving down prices for everyone else.
No, I'm saying that capitalism is never purely implemented (with no barriers to entry/perfect information/etc.), and there are cases where due to these inefficiencies, increased competition can cause a poor business model to outcompete a sound business model, leaving nobody standing at the end. This doesn't always happen (hence capitalism mostly works).
This ties in with a thought chain I had this morning. While we may have an incredibly competitive environment, it is also populated by imperfect actors. You can't effectively compete against people who are not accurately evaluating the consequences of their decisions. This can be seen in sports, where illegal performance enhancers are the norm in many disciplines, and non-dopers can't really keep up (despite the achievements of the dopers being annulled later). It can be seen in business, where someone willing to sell at a loss and "make up for it in volume" will steal all of the customers from a legitimately profitable operation (though they will soon go bankrupt as well). I'm sure there are examples in all sorts of industries where imperfect actors make decisions based on poor analysis that can potentially ruin better plans in an overly competitive marketplace.
Exactly. Any observations you make on the AI, essentially give it a communications channel to the outside world. The original AI Box experiment chooses a simple text interface as the lowest bandwidth method of making those observations, as it is the least likely to be exploitable by the AI.
I don't know about varying the amount of water. But if you want to eat fewer calories of rice, there was an article that came out recently saying that the method you use to prepare it could affect the amount of calories your body actually absorbed from it.
I think it wouldn't be very hard to write something that encodes a hidden phrase using odd and even counts (though length is key).
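One way the odd/even idea could work is to treat each word's letter count as one bit. A sketch of the decoding side (the sample sentence is contrived for illustration; the hard part, as noted, is the length constraint, since each word only carries one bit):

```python
def decode_parity(text):
    """Read each word's letter count as a bit (even = 0, odd = 1)
    and decode consecutive 8-bit groups as ASCII characters."""
    bits = [len(word.strip(".,!?")) % 2 for word in text.split()]
    chars = []
    for i in range(0, len(bits) - 7, 8):
        byte = bits[i:i + 8]
        chars.append(chr(int("".join(map(str, byte)), 2)))
    return "".join(chars)

# Word lengths 2,3,2,2,3,2,2,4 -> bits 01001000 -> ASCII 72 -> 'H'
print(decode_parity("We are so at one in my home"))  # prints "H"
```

Encoding is the harder direction: you need eight naturally-flowing words per hidden character, each with the right length parity, which is why a long cover text matters.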
Using the time it takes Earth to rotate one degree gives you 86400 seconds in a day / 360 degrees = 240 seconds. But the length of the day has been getting longer as the Earth's rotation slows, at a rate of about 1.7 ms/century (wiki).
To find when one degree took 234 seconds, we can find when a day was approximately 234 * 360 = 84240 seconds, which works out to roughly 127 million years ago, putting the creation of the stone right in the middle of the Cretaceous period.
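The arithmetic checks out in a few lines (using the 1.7 ms/century figure, and ignoring that the slowdown rate itself varies over geologic time):

```python
seconds_per_degree_now = 86400 / 360                # 240 s per degree today
target_day_length = 234 * 360                       # 84240 s per day back then
deficit_ms = (86400 - target_day_length) * 1000     # 2,160,000 ms shorter day
slowdown_ms_per_century = 1.7

centuries = deficit_ms / slowdown_ms_per_century
print(centuries * 100 / 1e6)  # age in millions of years, ~127
```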
Coincidentally, this also solves the issue of how the T Rex got away with such tiny arms. They had wands!
Yes, but who cast the Dark Mark, and who pointed out the transfigured mask? It could all be a ruse by LV. Constant Vigilance!
Of course, that would count as losing as well. I just think he needs to explicitly acknowledge that he is losing, so that Voldemort doesn't think he is secretly plotting something else.
I'm just worried that this is all a big setup, and the 37 "Death Eaters" are really Harry's allies in disguise and Imperiused, so any attempt to get out will cause Harry to end up killing all of his friends and put him on the true path towards destroying the stars. There was enough potential foreshadowing for this to be true.
-They aren't wearing the correct battle armor, only a hastily transfigured replica.
-LV explicitly said he expected Harry's friends to show up later than they did (which could mean they were supposed to be there for this ritual).
-The two main ones I've heard people talk about seem to be Lucius Malfoy and Sirius Black, both of whom are arguably now Harry's allies.
Harry needs to lose. He needs to drop his wand, kneel down, and say in Parseltongue, "I loosssse." Quirrell has already set up several tests that Harry has failed by refusing to lose. By proving that he can indeed lose, instead of continuing to escalate the conflict until the stars themselves are at risk, he may be able to pass LV's final test.
I thought it was more of a hint as to how he's going to bring Hermione back. Seems to me like surgery gets a lot easier when you can just partially un-transfigure the injured part and fix it, while leaving all the vitals transfigured into something unchanging, like a rock.
I'd be interested to see how the 'goal' category in the survey aligned with the tradeoff coefficient. I can see people looking for a lot of different things depending on whether they are after a quick fun date or a long-term relationship.
I also am the father of 3yo and 1yo daughters. One of the things I try to do is let their critical thinking or rationality actually have a payoff in the real world. I think a lot of times critical thinking skills can be squashed by overly strict authority figures who do not take the child's reasoning into account when they make decisions. I try to give my daughters a chance to reason with me when we disagree on something, and will change my mind if they make a good point.
Another thing I try to do is intentionally inject errors into what I say sometimes, to make sure they are listening and paying attention. (e.g. "This apple is purple, right?") I think this helps keep them from automatically agreeing with parents/teachers, and gets them critically thinking through on their own what makes sense. Now my oldest is quick to call me out on any errors I may make when reading her stories, or talking in general, even when I didn't intentionally inject them.
Lastly, to help them learn in general, make their learning applicable to the real world. As an example, both of my daughters, when learning to count, got stuck at around 4. To help get them over that hurdle, I started asking them questions like, "How many fruit snacks do you want?" and then giving them that number. That quickly inspired them to learn bigger numbers.
I think you may be missing a time factor. I'd agree with your statement if it were "A system can only simulate a less complex system in real-time." As an example, designing the next generation of microprocessors can be done on current microprocessors, but simulating a few microseconds of chip time often takes minutes or even hours to run.
I was thinking last night of how vote trading would work in a completely rational parliamentary system. To simplify things a bit, let's assume that each issue is binary, each delegate holds a position on every issue, and that position can be normalized to a 0.0 - 1.0 ranking. (e.g. If I have a 60% belief that I will gain 10 utility from this issue being approved, it may have a normalized score of .6; a 100% belief that I will gain 10 utility may be a .7; while a 40% chance of -1000 utility may be a .1.) The mapping function doesn't really matter too much, as long as it maps to the 0-1 scale for simplification.
The first point that seems relatively obvious to me is that all rational agents will intentionally mis-state their utility functions as extremes for bargaining purposes. In a trade, you should be able to get a much better exchange by offering to update from 0 to 1 than you would by updating from 0.45 to 1, and as such, I would expect all utility function outputs to be reported to others as either 1 or 0, which simplifies things even further, though internally each delegate would keep their true utility function values. (As a sanity check, compare this to current parliamentary systems in the real world, where most politicians represent their positions publicly as either strongly for or strongly against.)
The second interesting point I noticed is that with the voting system as proposed, where every additional vote grants additional probability of the measure being enacted, every vote counts. This means it is always a good trade for me to exchange votes when my expected value of the issue you are changing position on is higher than my expected value of the position I am changing position on. This leads to a situation, where I am better off changing positions on every issue except the one that brings me the most utility in exchange for votes on the issue that brings me the most utility. Essentially, this means that the only issue that matters to an individual delegate is the issue that potentially brings them the most utility, and the rest of the issues are just fodder for trading.
Given the first point I mentioned, that all values should be externally represented as either 1 or 0, it seems that any vote trade will be a straight 1 for 1 trade. I haven't exactly worked out the math here, but I'm pretty sure that for an arbitrarily large parliament with an arbitrarily large number of issues (to be used for trading), the result of any given vote will be determined by the proportion of delegates holding that issue as either their highest or lowest utility issue, with the rest of the delegates trading their votes on that issue for votes on another issue they find to be higher utility. (As a second sanity check, this also seems to conform closely to reality with the way lobbyist groups push single issues and politicians trade votes to push their pet issues through the vote.)
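I haven't worked out the math either, but the conjecture is easy to simulate. A minimal sketch under two simplifying assumptions of mine: utilities are drawn uniformly at random, and each delegate's 'pet issue' is the one whose normalized score is furthest from the indifferent value of 0.5 (they trade away their votes on everything else):

```python
import random

random.seed(0)
N_DELEGATES, N_ISSUES = 100, 20

# Each delegate's normalized utility (0-1) for each issue.
utilities = [[random.random() for _ in range(N_ISSUES)]
             for _ in range(N_DELEGATES)]

def pet_issue(delegate):
    """The issue this delegate feels most strongly about."""
    return max(range(N_ISSUES), key=lambda i: abs(delegate[i] - 0.5))

# Tally only the single-issue constituencies: [against, for] per issue.
support = [[0, 0] for _ in range(N_ISSUES)]
for delegate in utilities:
    i = pet_issue(delegate)
    support[i][delegate[i] > 0.5] += 1

for i, (against, for_) in enumerate(support):
    if against + for_:
        print(f"issue {i}: {for_}/{against + for_} dedicated delegates in favor")
```

Under the conjecture, each issue's outcome is decided entirely by these tallies, with everyone else's votes traded away as fodder; varying the number of issues relative to delegates would be the obvious next experiment.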
This is probably an oversimplified case, but I thought I'd throw it out for discussion to see if it sparks any new ideas.
Good point about the medical costs being a relatively recent development. However, I still think they are a huge hurdle to overcome if wealth staying in a family is to become widespread. Using the number you supplied of $50k/year, the median American at retirement age could afford about 3 years of care. (Not an expert on this, just used numbers from a google search link.) This only applies to the middle class, but essentially it means that you can't earn a little more than average and pass it on to your kids to build up dynastic wealth, since for the middle class at least, end-of-life costs pretty much hit a reset button.
In American society in particular, I would assume a large reason that wealth is not passed from generation to generation currently is the enormous costs associated with end-of-life medical care. You've got to be in the top few percent of Americans to be able to have anything left after medical costs (or die early/unexpectedly which also tends to work against estate planning efforts.)
But the net average quality of life is increased overall.
I'm not sure this necessarily holds true. In very broad strokes, if quality of life is increased by X for a single immigrant, but that immigrant's presence decreases the quality of life of each member of the existing population by more than X/population, then even though that specific immigrant's quality of life is improved, the net average quality of life is not increased overall.
The Culture series by Iain M. Banks has a lot of different examples of mega-structures, and they tend to feature somewhat prominently in his stories. The books themselves are on the hard-SF side of things, but a few of them delve closer to fantasy when they pull a Star Trek, and have an encounter with a less-developed race.
Your vote redirection idea is interesting, but a simple 1 to 1 mapping may make it just as difficult to find a surrogate voter as it is to research a valid candidate. I've tossed around the idea of a learning system where you could log your preference for multiple issues, then based on those preferences, the system could predict your preference on future issues based on the logged preferences of others. I think a system like that would be a great aid for representatives to use to visualize the current thoughts of their constituents, and could be an intermediate step towards the system you propose here.
As a baseline estimate for just the muscular system, the world's fastest drummer can play at about 20 beats per second. That's probably an upper limit on the twitch speed of human muscles, even with an arbitrarily fast mind running the body. Assuming you had a system on the receiving end that could detect arbitrary muscle contractions, and that the mind could control each muscle in the body independently (again, this is an arbitrarily fast mind, so I'd think it should be able to), there are about 650 muscle groups in the body according to wikipedia, so a good estimate for just the muscular system would be 650 × 20 bits/s, or about 13 kb/s.
Once you get into things like EKGs, I think it all depends on how much control the mind actually has over processes that are largely subconscious, as well as how sensitive your receiving devices are. That could make the bandwidth much higher, but I don't know a good way to estimate that.
Economies of scale come into play here too. If you can get to the point where 2n is a typical job, then having two part-time jobs is likely to not offer as many benefits or long term opportunities as a single full time job. Even if n is a full time job, depending on the job, having one person work massive amounts of hours is probably better for long term promotion potential than two people putting in the bare minimum and constantly having to take time off to take care of children.
Also, as others have noted, a stay-at-home parent is not someone who "doesn't work at all." Most stay-at-home parents tend to be responsible for raising children, cleaning, money management, shopping, general home repair, and a host of other things that, if you outsourced them so the partner could work a traditional job, could potentially cost more than that partner's earnings.
I would second Eleusis as a great game for training logical thinking. If you haven't played, at its core it's basically the 2-4-6 game, with one of the players allowed to make up more complex rules. I've played several times with my friends, and you would be amazed at how difficult it is to tease out even some of the simpler rules. For instance, I once played a game where a player went through almost 2 decks of cards before realizing the rule was "Alternate Red/Black".
The estimation game you describe sounds a lot like the party game Wits and Wagers, though with the added challenge of predicting what the other players may predict as well.
I like the idea behind your game though. One way you may be able to help it teach to calibrate your own confidence intervals is to have everyone also guess an X% confidence range for their guess (whatever you decide is a good range). Then, each time the answer falls within that range, award the player P points, and whenever it is outside the range, penalize them P/(1-X) points (e.g. confidence of 90%, give 1 point for correct range and -10 for incorrect). To keep the ranges tight, offer another bonus to whoever had the tightest correct bounds.
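A sketch of that scoring rule (the parameter names are mine; note that with floating point the penalty for X = 0.9 comes out as approximately, not exactly, -10):

```python
def score(guess_low, guess_high, truth, P=1, X=0.9):
    """Score one player's X-confidence interval: +P if the true answer
    falls inside [guess_low, guess_high], -P/(1-X) if it falls outside."""
    if guess_low <= truth <= guess_high:
        return P
    return -P / (1 - X)

print(score(100, 200, 150))   # inside the range: +1
print(score(100, 200, 250))   # outside: about -10
```

The penalty is chosen so that a genuinely 90%-calibrated player roughly breaks even (0.9 × 1 - 0.1 × 10 ≈ 0), while overconfident players bleed points; the tightest-correct-bounds bonus then rewards narrow intervals on top of that.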
The first one I can think of definitely knows her reasoning: she doesn't like the taste of meat (though she is more of a vegetarian). For her to eat meat, it basically has to be overly processed to the point where it really doesn't taste like meat anymore (e.g. she will eat hot dogs and pepperoni occasionally).
For the 'true' vegans I know, I'm pretty sure they don't know what their reasoning is, and the only thing that would need to change for them to consider eating meat OK would be for it to stop being trendy for them to be vegan. At least, they've never been able to clearly articulate a position to me.
As one of the people who submitted a CoopBot, I had three primary motivations.
The first came from a simulation of the competition I created in Python, where I implemented copies of a lot of the strategies mentioned in the thread comments (excluding, of course, the ones relying on a nebulous perfectlyPredictOpponent() function). Given that a lot of the strategies discussed involved checking for cooperation against a CoopBot, an actual CoopBot ranked relatively highly in my test competitions, failing primarily against random bots or DefectBots, and it seemed less exploitable by higher-level bots that still wanted to be able to trick the lower-level bots.
The second motivation was to use the coopBot as a baseline to see how well the other bots really did perform.
The third motivation was simple: I don't really know Scheme, so most of my more complex bots from my Python simulation proved too difficult to implement in the time I allocated to work on this.
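For anyone curious what such a Python test tournament looks like, here is a minimal round-robin sketch (standard prisoner's dilemma payoffs and a few of the simple bots from the thread; the specific payoff values and bot set are my assumptions, not the actual competition's rules):

```python
import itertools
import random

# Each bot is a function from the opponent's move history to 'C' or 'D'.
def coop_bot(opp_history):    return 'C'
def defect_bot(opp_history):  return 'D'
def tit_for_tat(opp_history): return opp_history[-1] if opp_history else 'C'
def random_bot(opp_history):  return random.choice('CD')

# (my move, their move) -> (my payoff, their payoff)
PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play_match(bot_a, bot_b, rounds=100):
    """Run an iterated match; return the two bots' total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = bot_a(hist_b), bot_b(hist_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

bots = {'CoopBot': coop_bot, 'DefectBot': defect_bot,
        'TitForTat': tit_for_tat, 'RandomBot': random_bot}
totals = {name: 0 for name in bots}
for (na, a), (nb, b) in itertools.combinations(bots.items(), 2):
    sa, sb = play_match(a, b)
    totals[na] += sa
    totals[nb] += sb

print(totals)
```

The history-based signature is also roughly why the more complex bots were hard to port: simulating your opponent simulating you doesn't fit into a stateless function this simple.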
Assuming that you want to keep the exercise somewhat entertaining, modifying a game like balderdash to start with dilbert-esque executive speak and move towards ever more specific levels could provide a fun, easy to understand method to practice moving up and down in levels of specificity.
As to how this would work in practice: have everyone present get into groups of 5-10 (or however many are sitting at a table) and give everyone a key phrase of execu-speak, such as "Energize the end-user experience." Then, everyone writes down a short but more specific description of what the phrase really means. All of the player-generated statements are read aloud to the table (which will also display how vague the initial phrase was, since everyone should have vastly different answers), and the table votes on which is the best written response, with authors getting a point for each vote they receive. Then the exercise is repeated with the phrase the group selected as the best, for several rounds, or until everyone's responses are approximately the same, showing that an indisputable level of specificity has been reached.
For fun, you could run the exercise in reverse as well, giving a very concrete example and challenging people to go the opposite way, and give a less specific example phrase.