[...] how do we get civilizations with a sufficiently long attention span?
I heard Ritalin has a solution. Couldn't pay attention long enough to verify. ba-dum tish
On a serious note, isn't the whole killing-the-Earth-for-our-children thing a rather interesting scenario? I've never seen it mentioned in my game theory-related reading, and I find that to be somewhat sad. I'm pretty sure a proper modeling of the game scenario would cover both climate change and eaten-by-red-giant.
I'm curious about the thought process that led to this being asked in the "stupid questions" thread rather than the "very advanced theoretical speculation of future technology" thread. =P
As a more serious answer: Anything that would effectively give us a means to alter mass and/or the effects of gravity in some way (if there turns out to be a difference) would help a lot.
As ZankerH said, it leaves out the "required to make" part. Also, gjm's particular formulation of 2' makes a statement about comparisons between two given decisions, not a statement about the entire search space of possible decisions.
Thanks for making it way clearer than I did. And yes, I forgot the 1:1 edge case.
As for modifying, a minor edit or bug fix like this one is always 60% formulation and specification, 10% code modification, and 30% testing and making sure you're not breaking half the project. It sounds like you've already done around 75% of the work.
(deployment not included in the above pseudo-figures, since the proportional deployment hurdles vary enormously by setup, environment, etc.)
This sounds more like a conflation between the "availability" of S&T versus the "presence" of S&T.
Technology being in the public domain does not mean the remote-savannah nomad knows how to use Wikipedia, has been trained in the habit of looking for more efficient production methods, is being incentivized by markets or other factors to raise his productivity, or has at his disposal an internet-connected modern computer, another business nearby that also optimizes production of one of his raw materials or business inputs, and all the tools, practical manuals, human resources, and expertise to use them.
Long story short, there's a huge difference between "Someone invented these automated farming tools and techniques, and I know they exist" and "I have the practical ability to obtain an automated farming vehicle, construct or obtain a facility complete with tools and materials for adjustment so I can raise livestock, contacts who also have resources like trucks (who in turn have contacts with means to sell them fuel), and contacts who can transform and distribute my products."
The former is what you have when something is "public domain" and you take the time to propagate all the information about it. The latter, and all the infrastructure and step-by-step work required to get there, is what you need before the economic growth kicks in.
I believe the latter was being referred to by "advances in science and technology".
Here's a data point, do your own bayes accordingly:
I've frequently been able to solve mind or brain-related problems by doing actions conceptually similar to, or sometimes literally by, praying to God. I'm not a believer in any way, but the simple attempt to convince myself that I was communicating with some higher outside entity that had the power to solve my problem did solve my problem.
Here's the other evidence I have at my disposal, all of which I am confident above 90%:
- My subconscious knows and understands everything - everything - that I think consciously, or even feel in passing.
- My subconscious is much more powerful than my conscious with regards to such issues, with "power" corresponding here to having more input channels and more output channels for the same problem-solving ability.
- My subconscious can probably figure out technical neurological or psychological solutions for things that aren't even in my (conscious) power to solve (either because I don't have the input to identify the properties or exact nature of the problem, or because I don't have the output to affect the specific things in my brain / thoughts that need to change to undo the pattern causing the problem).
So by those assumptions, and a few other assumptions about base rates, it seems normal for me to conclude that my subconscious fixes problems for me when I "pray", as opposed to some deific entity. But since you may not share my confidence in the above crucial beliefs, or my assumptions about base rates, the data point of my problems being solved by "prayer" might lead you to a different conclusion.
It's pretty much already provided; there's just that minor inconvenience of algebra between you and the article's vote counts, which IMO is a good thing.
As of 10/15, the article sits at -13, 24% positive (hover mouse over the karma score to see %).
That's 24x - 76x = -13, so x = 0.25 (i.e. 4x = 1), which gives:
6 upvotes, 19 downvotes, net -13.
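For anyone who'd rather not redo the algebra by hand, here's a minimal sketch of the same calculation (the function is mine, not anything LW provides):

```python
# Recover approximate up/down vote counts from the net karma score
# and the "% positive" figure shown on hover.
def vote_counts(net_score, percent_positive):
    p = percent_positive / 100.0
    # net = p * total - (1 - p) * total  =>  total = net / (2p - 1)
    total = net_score / (2 * p - 1)
    return round(p * total), round((1 - p) * total)

print(vote_counts(-13, 24))  # (6, 19): 6 upvotes, 19 downvotes
```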
And no, the consequences of talking about politics are not that grave. I mean you seem to blog about politics all the time and you have not yet imploded.
The consequences of talking about politics have historically made empire-sweeping changes about religion, slavery, gender, warfare, welfare, culture, honor, social stigma, social divide, economics, prosperity, technology, and even politics itself!
Talking about politics has also started wars and made people start involving themselves in the slave trade and other such unhappy things.
And because the Internet Law calls for it: Talking about politics is what caused other people to prop Hitler up to the authority he had, and what caused them to listen to him and do those things I don't need to mention.
Every political fanatic you've ever heard of, who showed up in a newspaper because he burned down a preschool in the name of [insert ideology], got to the point of doing that because of people talking about politics (or sufficiently politics-like topics).
I think the consequences are grave enough to warrant Yvain's level of concern.
It's ironic in the same way that adding the text "DEFACING STOP SIGNS" under the main text of a stop sign is ironic.
The method used is the very one being condemned / warned against, and the fact that it works better than other methods (in both examples) only adds to the irony: one would expect something that preaches against doing exactly what it's doing to invalidate itself, rather than, thanks to a quirk of human psychology, producing greater results.
Yes, of course. These particular traits you have deigned to consider for your worthy evaluation do seem, to me as well, perfectly sane.
I think you forgot to activate your Real World Logic coprocessor before replying, and I'm being sarcastic and offensive in this response.
In more serious words, these particular selected characteristics do not comprise the entirety of the aforementioned "system". I've said that the system is /unlikely/ to be sane, as I do not have complete information on the entire logic and processes in it. I also think we're working off of different definitions of "sane" - here, IIRC, I was using a technical version that could be better expressed as "close to perfectly rational, in the same way perfect logicians can be in theoretical formal logic puzzles".
Yes, as long as we're using the definition E.Y. shared/mentioned in his 2008 paper.
By a charitable reading, that's not what ze's saying.
From the standpoint of a person making discoveries, it is known from many observations that Bob the Particle will always Wag. Thus, "Bob Wags" is stated as a Natural Law, and assumed true in all calculations, and said with force of conviction, and if some math implies that Bob didn't Wag, the first thing to look for is errors in the math.
However, still from the same standpoint, if some day an experiment shows that Bob didn't Wag, and despite looking and looking our discoverers can't find any errors in the math (or the experiment, etc.), then they have to conclude that maybe "Bob Wags" is not fully true. Maybe then they'll discover that in this experiment, it just so happens that Julie the Particle was Hopping. Thus, our hypothetical discoverer rewrites the "law" as: "Bob Wags unless Julie Hops".
Maybe, in some "ultimate" computation of the universe, the "True" rule is that "Bob Wags unless someone else Leers, and no one Leers when Julie Hops". How do we know? How will we know when we've discovered the "true" rules? Right now, we don't know. As far as we know, we'll never know any "true" rules.
But it all boils down to: The universe has been following one set of (unknown) rules since the beginning of time and forever 'till the end of time (ha! there's probably no such thing as "time" in those rules, mind you!), and maybe those rules are such that Bob will Wink when we make John Laugh, and then we'll invent turbines and build computers and discuss the nature of natural laws on internet forums. And maybe in our "natural laws" it's impossible for Bob to Wink, and we think turbines work because we make Julie Hop and have Cody Scribble when Bob doesn't Wag to stop him. And some day, we'll stumble on some case where Cody Scribbles, Bob doesn't Wag, but Bob doesn't Wink either, and we'll figure out that, oh no!, the natural laws changed and now turbines function on Bob Winks instead of Cody Scribbles, and we have to rethink everything!
The universe doesn't care. Bob was Winking all along, and we just assumed it was the Cody Scribbles because we didn't know about Annie. And there was never a case where Bob Wagged and Winked at the same time, or where Bob failed to Wag when Julie Hopped. We just thought the wrong things.
And if in the future we discover other such cases, it will only be because the universe has been doing those things all along and we just hadn't seen them yet.
And it's even possible that in the future Bob will marry Julie and then never again Wink... but all that means is that the rules were in fact "Bob Winks when Annie Nods unless Bob is Married to Julie", rather than "Bob Winks when Annie Nods", and yet our scientists will cry "The laws of physics have changed!" while everyone else panics about our precious turbines no longer working all of a sudden.
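A toy sketch of the point, using the made-up particle behaviors above (purely an illustration, nothing physical): the universe has been running on the full rule the whole time, and revising our written-down "law" only changes our model of it.

```python
# The rule the universe has been running on all along (in this toy example).
def true_rule(annie_nods, bob_married_to_julie):
    return annie_nods and not bob_married_to_julie  # whether Bob Winks

# The "natural law" our scientists wrote down before Bob married Julie.
def our_law(annie_nods, bob_married_to_julie):
    return annie_nods                               # "Bob Winks when Annie Nods"

# Before the marriage, model and universe agree everywhere we've looked;
# afterwards they diverge, and it's the model that was wrong all along.
print(true_rule(True, False), our_law(True, False))  # True True
print(true_rule(True, True), our_law(True, True))    # False True
```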
I don't see this contradiction. In a timeless decision theory, the diagram and parameters are not the same when X is in control of resource A (at "time" T) and when X is not in control of resource A (at time T+1).
The "timeless" of the decision theory doesn't mean that the decision theory ignores the effects of time and past decisions. Rather, it refers to a more technical (and definitely more confusing) abstraction about predictions and kind of subtly hints at a reference to the (also technical) concept of symmetry in physics.
Mainly, the point is to deflect naive reasoning in problems involving predictions or similar "time-defying" situations. The classic example is newcomblike problems, specifically Newcomb's Problem. In these situations, acting as if your current decision were a partial cause of the past prediction, and thus of whether or not Omega/The Predictor put a reward in a box, leads to better subjective chances of finding a reward in said box. The "timeless" aspect here is that one phenomenon (the decision you make) almost looks like it's a cause of another (the prediction of your decision) that happened "in the past".
In fact, however, they have a common prior cause: the state of the universe and, particularly, of the brain / processor / information of the entity making the decision, prior to the prediction. Treating it as, and calling it, "timeless" helps avoid issues where this will turn into a debate about free will and determinism.
In newcomblike problems, an event B happens where Omega predicts whether A1 or A2 will happen, based on whether C1 or C2 is true (two possible states of the brain of the player, or outcomes of a simulation). Then, either A1 or A2 happens, based on whether C1 or C2 is true, as predicted by Omega. Since the player doesn't have the same means as Omega to know C or B, he must decide as if A caused C, which caused B; this could be roughly described as a decision causing the result of a prediction of this decision in the past.
So, back to the timeless vs sunk costs "contradiction": In a sunk costs situation, there is no Omega, there is no C, there is no prediction (B). At the moment of decision, the state of the game in abstract is something more like: "Decision A caused Resource B to go from 5 to 3; 1 B can be paid to obtain 2 utilons by making decision C1; 2 B can be paid to obtain 5 utilons by making decision C2". There are no predictions or fancy delusions of affecting events that caused the current state. A caused B(5->3), which caused (NOW), which causes C. C has no causal effect on (NOW), which has no causal effect on B, which has no causal effect on A. No amount of removing the timestamps and pretending that your future decision will change how it was predicted is going to change the (NOW) state.
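To make that abstract state concrete, here's a minimal sketch (numbers taken from the example above; not a formal decision theory) of why the earlier 5 -> 3 history never enters the comparison:

```python
# State at (NOW): decision A already spent 2 B; only what remains is usable.
resource_b = 3

# Forward-looking options from the example above.
options = {
    "C1": {"cost": 1, "utilons": 2},
    "C2": {"cost": 2, "utilons": 5},
}

# The sunk 2 B appears nowhere in this comparison; only the (NOW) state and
# the forward consequences of C1 vs C2 matter.
affordable = {name: o for name, o in options.items() if o["cost"] <= resource_b}
best = max(affordable, key=lambda name: affordable[name]["utilons"])
print(best)  # "C2": 5 utilons for 2 B beats 2 utilons for 1 B
```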
I could go on at length and depth, but let's see how much of this makes sense first (i.e. that you understand and/or that I mis-explained).
First, the most reliable solution is to save the page manually, yourself, to a local hard drive of your preference, provided you keep good file hygiene and backups and so on. If it's a multiple-page article, you have to save each page of the article, though. You can also run into issues with the more "interactive" websites and articles, particularly if they use flash or java apps (which means the html you save will only contain a link to some flash or java file elsewhere on their server, which means you're back to square one). You can also end up with all kinds of gibberish from broken links if the pictures suffer link rot, the page refers to an external style sheet, or any of a large number of other possible problems.
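For what it's worth, here's a minimal sketch (Python standard library only; URL and filename are placeholders) of a bare scripted save, which runs into exactly the limitations just described: it grabs only the top-level HTML, so images, external style sheets and any flash / java apps stay as links back to the original server.

```python
import urllib.request

def save_page(url, path):
    # Fetch and save the raw HTML only; embedded resources are NOT downloaded.
    with urllib.request.urlopen(url) as response:
        html = response.read()
    with open(path, "wb") as f:
        f.write(html)

save_page("https://example.com/some-article", "some-article.html")  # placeholder URL
```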
I think the only way to avoid this in an efficient, one-click manner is to use a pre-processor that detects the relevant content and saves only that for you. I use Readability to do this for slightly different goals and purposes.
Pocket is a more popular choice usually, but I've had many negative experiences with it not syncing to the android app, not working well with certain browsers, not processing articles properly, not processing the whole article, or sometimes not processing articles at all and keeping only the link (which helps you diddly-squat since that's the whole problem you're having).
Fair warning: Readability does a complete pre-processing of the article for, well, readability. It will remove ads, sidebars, top-bars, often comments to articles, and once in a while it'll remove too much. It usually successfully detects multiple-page articles, but not always.
(available on Hulu if you're into two minutes of ads every ten minutes)
Should read:
(available on Hulu for US residents with a local ISP contract if you're into two minutes of ads every ten minutes, and for devious tricksters with access to a reliable US proxy who are too impatient to just torrent things - since both would be considered just as illegal by a completely impartial US court of law)
It'll be Valve Soon™ before everyone understands that one...
I've retracted my (epistemically unhealthy) previous responses about great physics discoveries. I'd say "oops" as per the LW tradition, but when I look back on what I wrote all I see is a rather shameful display of cognitive dissonance. There's no mere "oops" there, but plain old full-blown contrarian, academic-hipster biases. Sorry.
An extra note:
Crossing an inferential gap is harder in a short post, unless you are an amazing writer.
In the quote, the qualification is unnecessary. Ceteris paribus, it's usually harder in a short post, regardless of general writing skill.
My best take on the thing is that, historically, most great physics discoveries were made by generalist, wide-branching natural philosophers. Granted, "natural philosophy" is arguably the direct ancestor of physics, from which spawned the bastards of "chemistry" and "biology", but regardless, the key point is that they were generalists, and that if we were going to solve the current problem simply by throwing more specialized physicists and gamma ray guns at it, this is not the evidence I'd expect to see.
Given historical base rates of generalists vs specialists in physics, and the ratio of Great Discoveries made by the former rather than the latter, it feels as if generalists have a net advantage in "consolidating" recent research into a Great Discovery.
I do have to agree, though, that all of them came from physicists, if not necessarily formally trained, although in most cases they were. Good knowledge of physics is necessary, that I won't argue. But what I'll point out is that I've personally met many more game developers and programmers with a much better grasp of (basic) physics (i.e. first volume of Feynman's Lectures) than college physics department members, on a purely absolute count. It doesn't seem that far-fetched, to me, to assume there's a comparable difference in base rates of people within and outside physics departments with a solid enough grasp of physics for the Next Great Discovery, whatever that threshold may be (and obviously, the lower the actual threshold, the more likely it is that it will come from outside Physics Departments).
On the other hand, my confidence that the ultimately correct and most useful Next Great Discovery (e.g. any method to control gravity) will not come from a physics department is above 50%.
Philosophy simply happens to be one of the more likely departments where it might come from, though still quite a ways behind "unaffiliated" and "engineering".
Meanwhile, I'd also pounce on the "Ontological Alternatives" chapter there to ask a slightly unrelated question: Regarding the "fourth option" there, has anyone ever tried to analyze a world ontology where, unlike here, particles can belong to multiple different worlds according to some kind of rule or on a per-particle basis? E.g. instead of having a particle belong to World #872 as an elementary property, which lets it interact only with other W-872 particles, it would have a set of "keys", where any other particle that also has at least one of those keys can be interacted with, while that other particle might have a slightly different keyset and thus be able to interact with a third particle "located" right next to the first one (insofar as the position of one of two non-interacting particles is even relevant to the other)?
I realize I'm throwing ideas around while having no idea at all what I'm talking about, but at the same time from where I'm sitting it feels like all the "sides" of the QM interpretation debates always share a humongous bag of uncontested assumptions. Namely, assumptions about pesky details like "position" being a necessary, elemental and fundamental property of particles.
Quantum Mechanics as Classical Physics, by Charles Sebens. It's described as yet another new QM interpretation, firmly many-worlds and no collapse, with no gooey "the wave function is real" and some sort of effort, if I read correctly, to put back the wave-function in its place as a description rather than a mysterious fundamental essence. Not in quite those exact words, but that does seem to be the author's attitude IMO.
Sounds interesting and very much in line with LW-style reductionist thinking, and agrees a bit too much with my own worldviews and preconceptions. Which is why I'm very much craving a harsh batch of criticism and analysis on this from someone who can actually read and understand the thing, unlike me. If anyone knows where I could find such, or would be kind enough to the world at large to produce one, that'd be appreciated.
From my understanding, the LW community doesn't have any cohesive view, appreciation or even level of understanding of rhetoric (or of so many other skillsets and fields), beyond the general idea that it's a useful social skill, but that some areas contain a lot of Dark Arts and must be approached with caution by those with moral reservations towards manipulation and anti-epistemics.
I've heard "bias" and "conflict of interest" used as interchangeable synonyms in the same sentence before. I've also seen it often used to refer to partisanship.
Might want to specifically defuse those two preconceptions before any sort of course on biases can be taught.
I actually don't think that game theory helps with winning friends. It's useful to prevent other people from bullying yourself but it doesn't make people like you.
Game Theory per se won't help with winning friends, but it does wonders at helping one analyze and plan strategies around political landscapes in the general sense, including the tribal and clique networks of high school specifically.
Dealing with negative shenanigans is definitely its primary strongpoint, but that in itself can be counted as removing obstacles or negative influences on winning friends. Which, in my interpretation, is equivalent to pouncing on those opportunity costs and making a profit.
Problem is, most high school denizens don't have the slightest idea what a "serious attempt to learn social skills" even remotely looks like, let alone know how to go about it.
Hindsight says studying politics, monkey tribes, evpsych and game theory, together with occasional experimentation outside of the main / high school community, is probably the better way to go if you're not socially gifted but at least moderately smart.
However, my first thoughts about politics and monkeys in high school were most definitely not "Yay better ways to make people help me!". And I wasn't aware at all that I didn't even know about the existence of the field of game theory, and only peripherally aware that some evolution research might touch on psychological and social issues.
None of which is intended as a counterargument, mind you. It's just that dropping "learn social skills" without something to support it, preferably a whole coursework guide including the above material, seems to me like it would do more harm than good by wasting time the student could spend studying other, easier things, when they'd learn good social skills more easily later once they became more aware of things. Or, at least, that's what seems to me to be happening most often.
I think if it comes naturally, widespread popularity is an incredibly helpful quality, and a very important one to nurture.
Is it? I think "popularity" is being conflated with "influence".
I wasn't popular at all in high school. I was the guy you suddenly wanted to be very friendly with, and then stayed far, far away from for a few weeks, once he started dropping names and pointed hints. And I was also the guy whom people came to tell what they saw in corridor E-2, so they could work in some good will or hopefully even make me owe them a few favors.
And all without the disadvantages of being publicly visible! Like having to maintain appearances to a much higher standard! Or the whole community turning against you once you cross one of its many invisible lines of unacceptability!
(note: The above examples were not the widespread thing I've portrayed them to be, but rather rare and isolated cases I've fished out as salient images. Still, I find the advantages I enjoyed much better than outright "popularity".)
And the rest of being rational is making sure that the future likelihood of making the same kind of mistake is as low as possible!
You can't just start from the assumption that society would be more rational if rationality was taught at school. You'd also need evidence that rationality can be taught to a lot of average people. I don't think such evidence exists. Whatever taken out from the curriculum might be replaced by something completely ineffective.
Can't specific rationality techniques be effectively taught to a large number of average people, though? I vaguely recall that there might be some examples of that in studies where the researchers taught participants a trick or two before giving them a test of some sort, but my ability to recall specific examples is almost geometrically inverse to gwern's, so that certainly takes away from my point.
He is a rationalist (...)
He had rationalised (...)
(...) despite being informed that a previous partner had been infected (...)
So uh, let's run down the checklist...
[ X ] Proclaims rationality and keeps it as part of their identity.
[ X ] Underdog / against-society / revolution mentality.
[ X ] Fails to credit or fairly evaluate accepted wisdom.
[ ] Fails to produce results and is not "successful" in practice.
[ X ] Argues for bottom-lines.
[ X ] Rationalizes past beliefs.
[ X ] Fails to update when run over by a train of overwhelming critical evidence.
Well, at least there's that, huh? From all evidence, they do seem to at least succeed in making money and stuff. And hold together a relationship somehow. Oh wait, after reading the original link, it looks like even that might not actually be working!
A lot of the role of managers seems to be best explained as ape behavior, not agent behavior.
A localized context warning is needed here, and missing.
There are also other warnings that need to be thrown in:
People who only care about the social-ape aspects are more likely to seek the position. People in general do social-ape stuff, at every level, not just manager level, with the aforementioned selection effect only increasing the apparent ratio. On top of that, instances of social-ape behavior are more salient and, usually, more narratively impactful, both because of how "special" they seem and because the human brain is fine-tuned to pick up on them.
Another unstudied aspect, which I suspect is significant but don't have much solid evidence about, is that IMO good exec and managerial types seem to snatch up and keep all the "decent" non-ape managers, which would make all the remaining ape dregs look even more predominant in the places that don't have those snatchers.
But anyway, if you model the "team" as an independent unit acting "against" outside forces or "other tribes" which exert social-ape-type pressures and requirements on the Team's "tribe", then the manager's behavior is much more logical in agent terms: One member of the team is sacrificed to "social-ape concerns", a maintenance or upkeep cost to pay of sorts, for the rest of the team to do useful and productive things without having the entire group's productivity smashed to bits by external social-ape pressures.
I find that in relatively-sane (i.e. no VPs coming to look over the shoulder of individual employees or poring over Internet logs and demanding answers and justifications for every little thing) environments with above-average managers, this is usually the case.
In practice, this is relevant once you've already bought a chair and want to maximize the comfort you can get from it, balanced against the difference in comfort you could buy, the chance of actually getting that comfort (or some lower or higher amount), and the money you'd need to spend.
When purchasing a new chair, I don't think this will be an important factor in the overwhelming majority of situations.
This seems like it falls face-first, hands-tied-behind-back right in the giant pit of the Repugnant Conclusion and all of its corollaries, including sentience and intelligence and ability-to-enjoy and ability-to-value.
For instance, if I'm a life-maximizer and I don't care about whether the life I create even has the ability to care about anything, and just lives, but has no values or desires or anything even remotely like what humans think of (whatever they do think of) when they think about "values" or "utility"... does that still make me more altruistically ideal and worthy of destroying all humanity?
What about intelligence? If the universe is filled to the planck with life, but not a single being is intelligent enough to even do anything more than be, is that simply not an issue? What about consciousness?
And, as so troubling in the repugnant conclusion, what if the number of lives is inversely proportional to the maximum quality of each?
In an attempt to simplify the various details of the cost-benefit calculations here:
If you spend:
1-2 hours on this chair per day: Might be worth spending some time shopping for a decent seat at Staples, but once you find something that fits and feels comfortable (with some warnings to take into consideration), pretty much go with that. You should find something below 100$ for sure, and can probably get away with spending <60$ if you catch good sales.
3-4 hours / day: If you're shopping at Staples, be more careful and check the engineering of the chair if you've got any knowledge there. Stuff below 60$ will probably break down, bend, and become all other sorts of uncomfortable after a few months of use. If your body mass is high, you might need to go for solidity over comfort, or accept the unfair hand you're dealt and spend more than 150$ for something that mixes enough comfort, ergonomics and solid reliability.
More than 4 hours / day on average: This is where the gains become nonlinear, and you will want to seriously test and examine anything you're buying under 150$. At this point, you need to consider ergonomics, long-term comfort (which can't be reliably "tested in store" at all, IME), reliability, a very solid frame for extended use that can handle the body's natural jiggling and squirming without deforming itself (this includes checking the "frame" itself, but also any cushions, since those can "deflate" very rapidly if the manufacturer skimped there, and therefore become hard and just as uncomfortable as a bent chair), and so on. At this point, the same advice applies as shopping for mattresses, work boots, or any other sort of tool that you're using all day every day. It's only at this point where the differences between more relaxed postures, "work" postures and "gaming" postures start really mattering, and I'd say if you actually spend 6-8 hours per day on average on this chair, you definitely want to go for the best you can get. How much that needs to cost, unfortunately, isn't a known quantity; it depends very heavily on your body size, shape, mass, leg/torso ratio, how you normally move and a bunch of other things... so there's a lot of hit-and-miss, unfortunately, unless you have access to the services of a professional in office ergonomics. Even then, I can't myself speak for how much a professional would help.
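If it helps, the tiers above collapse into a rough lookup like this (the dollar figures are just the ones quoted above, not researched prices):

```python
def chair_budget(hours_per_day):
    # Rough mapping of daily use to the budget/diligence tiers described above.
    if hours_per_day <= 2:
        return "decent Staples seat; under 100$, maybe under 60$ on sale"
    if hours_per_day <= 4:
        return "check frame solidity; below 60$ tends to bend, heavy use may need 150$+"
    return "test seriously; treat it like mattresses or work boots, budget depends on your body"

print(chair_budget(6))
```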
Beware that you need to "try" these chairs, and you need to pay attention to clothing when you try them too. A chair that's super comfortable with jeans and a winter coat might turn out to be an absolutely horrible back-twisting wedge of slipperiness once you're back home in sweatpants and a hoodie. Or in various more advanced states of undress.
That was an awesome breakdown of things, thank you!
I've learned way more from this than from all my previous reading, without even including the data about what I didn't know I don't know and other meta.
Usually, in person (either as a tag-team or "I'll be right over here, call me when you're stumped" approach; I've experimentally confirmed that behind-the-shoulder teaching has horrible success rates, at least for this subject), though a few times by chat / IM while passing the code back and forth (or better yet, having one of those rare setups where it's live-synch'ed).
TL;DR: Look at examples of wildly successful teaching recipes, take cues from them and from LW techniques and personal experience at learning, fiddle a little with it all, and bam, you've got a plan for teaching someone to program! Now you just need pedagogical ability.
My general approach is to feel out what dumb-basics they know by looking at it as if we were inventing programming piecemeal, naturally with my genius insight letting us work out most of the kinks on the spot. I also go straight for my list of Things I Wish Someone Would Have Told Me Sooner, the list of Things That Should Be In Every Single So-Called "Beginner's Tutorial To Programming" Ever, and the list of Kindergarten Concepts You Need To Know To Create Computer Programs -- written versions pending.
For instance, every "Beginner's Tutorial to Programming" I've ever seen fails to mention early enough that all this code and fancy stuff they're showing is nice and all, but to actually have meaningful user interactions and outputs from your program to other things (like the user's screen, such as making windows appear and putting text and buttons in them!) you have to learn to find the right APIs, the right handles and calls to make. And I've yet to see a single tutorial, guide, textbook, handbook, "crash course" or anything that isn't trial-and-error or a human looking at what you did that actually teaches how to do that. So this is among the first things I hammer into them -
"You want to display a popup with yes/no buttons? Open up the Reference here, search for "prompt", "popup", "window", "input" or anything else that seems related, and swim around until you find something that looks like it does what you're doing, copy the examples given as much as possible in your own code, making changes only to things you've already mastered, and try it!"
...somewhat like this, though that's only for illustration. In a real setting, I'd be double-checking every step of the way there that they remember and understand what I told them about Developer References earlier on, that their face doesn't scrunch up at any of the terms I suggest for their search, that they can follow the visual display / UI of this particular reference I'm showing them (I'm glaring at you, javadoc! You're horribly cruel to newbies.) and find their way around it after a bit of poking around, and so on.
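To give one concrete, hedged illustration of where such a search might land a beginner, here's the kind of thing they'd end up copying and tweaking; I'm using Python's standard tkinter library as the stand-in "Reference" here, though the original conversation could be about any language or API:

```python
import tkinter as tk
from tkinter import messagebox

root = tk.Tk()
root.withdraw()  # hide the empty main window; we only want the popup

# askyesno shows a popup with Yes/No buttons and returns True or False.
answer = messagebox.askyesno("Question", "Do you want to continue?")
print("They clicked Yes" if answer else "They clicked No")
```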
Obviously, that's nowhere near the first things to tackle, though. Most tutorials devote approximately twelve words to the entire idea of variables, which is rather ridiculous when contrasted with the fact that most people barely remember their math classes from high school, and never had the need or chance to wrap their head around the concept of variables as it stands in programming. Just making sure a newbie can wrap their mind comfortably around the idea that a variable won't have a set value (I pointedly ignore constants at that point, because it's utterly, completely unnecessary and utterly confusing to mention them until they have an actual need for them, which is way way way waaaaaaaay later - they can just straight-up leave raw values right in the source code until then), that a variable will probably change as the program works, and that it won't change on its own but that, since programs get big and you can't be sure nothing else will ever change it, you should always assume it could change somewhere else, etc. etc. etc., already takes real time and care.
There are so many concepts that already-programmers and geeks and math-savvy people just gloss right over that obviously those not part of those elites aren't going to understand a thing when you start playing guerilla on their brain with return values, mutable vs immutable, variable data types, privates and scopes, classes vs instances, statics, and all that good stuff.
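(As a tiny, hypothetical illustration of that "variables change" point, this is the kind of thing I'd walk through with them:)

```python
score = 0            # the name starts out holding one value...
score = score + 5    # ...and changes as the program does its work...

def bonus_round(current):
    return current + 10

score = bonus_round(score)  # ...and other parts of the program can change it too
print(score)                # 15
```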
Buuut I'm rambling here. I suppose I just approach this as a philosophical "blend" between facilitating a child's wonder-induced discovery of the world and its possibilities, and a drill sergeant teaching raw recruits which fingers to bend how in what order and at what speed to best tie their army boot shoelaces and YOU THERE, DON'T FOLD IT LIKE THAT! DO YOU WANT YOUR FINGERS TO SLIP AND DROP THE LACE AND GIVE YOUR ENEMY TIME TO COME UP BEHIND YOU? START OVER!
Of course, it might be my perspective that's different. I was forewarned both by my trudging, crawly, slow learning of programming and by others about the difficulty of teaching programming, and as silly as it might sound, I have a lot more experience than the average expert self-taught wiz programmer at learning how to program, since I took such a sinuous, intermittent, unassisted and uncrunched road through it.
Anecdotally, I think I've re-learned what classes and objects were (after forgetting it from stopping my self-teaching for months) at least eight times. So I have at least eight different, internal, fully-modeled experiences of the whole process of learning those things and figuring out what I'm missing and so on, without anyone ever telling me what I was doing or thinking wrong, to draw from as I try to imagine all the things that might be packed and obfuscated in all the abstracts and concepts in there.
If you speak the words fast enough and with enough conviction, your audience's brain will fill in the gap with whatever pleases them while you retain full plausible deniability. Win!
Bahahah. Your current neurochemical high will wear off in 2 days.
This should be written in bold red letters on the back cover of every motivation and productivity book or guide.
Thanks for the response! This puts several misunderstandings I had to rest.
P.S. Why programing of Azathoth? In my mind it makes it sound as if desire to have children was something intristically bad.
Programming of Azathoth because Azathoth doesn't give a shit about what you wish your own values were. Therefore what you want has no impact whatsoever on what your body and brain are programmed to do, such as making some humans want to have children even when every single aspect of it is negative (e.g. painful sex, painful pregnancy, painful birthing, hell to raise children, hellish economic conditions, an absolutely horrible life for the child, etc. etc., such as we've historically seen in some slave populations).
Good catch. Didn't notice that one sneaking in there. That kind of invalidates most of my reasoning, so I'll retract it willingly unless someone has an insight that saves the idea.
I've occasionally tried teaching programming to novices, which is one incredible lesson in illusion of transparency, maybe even better than playing Zendo.
How typical do you think your experience has been in this regard? IME, teaching programming to complete novices has been cruise-control stuff and one of the relatively few things where I know exactly what's going on and where I'm going within minutes of starting.
For context: I've had success teaching a complete novice, with only a vague memory of high-school-math usage of variables, how to go from that to writing his own VB6 scripts to automate simple tasks, such as retrieving and sending data to fields on a screen using predetermined native functions in the scripting engine (which I taught him to search for and learn to use from the available, comprehensive reference files). This was on maybe my third or fourth attempt at doing so.
What I actually want to know is how typical my experience is, and whether or not there's value in analyzing what I did in order to share it. I suspect I may have a relatively rare mental footing, perspective and interaction of skillsets in regards to this, but I may be wrong and/or this may be more common than I think, invalidating it as evidence for the former.
How stable is gene-to-protein translation in a relatively identical medium? I.e. if we abstract away all the issues with RNA and somehow neutralize any interfering products from elsewhere, will a gene sequence always produce the same protein, and always produce it, whenever encountered at a specific place? Or is there something deeper, where changes to the logic in some other, unrelated part of the DNA could directly affect the way this gene is expressed (i.e. not through their protein interfering with this one)?
Or maybe I don't understand enough to even formulate the right question here. Or perhaps this subject simply hasn't been researched and analyzed enough to give an answer to the above yet?
If the answer is simple, are there any known ratios and reliability rates?
There's no particular hidden question; I'm not asking about designer babies or gengineered foodstuffs or anything like that. I'm academically curious about the fundamentals of DNA and genetic expression (and any comparison between this and programming, which I understand better, would be very nice), but hopelessly out of my depth and under-informed, to the point where I can't even understand research papers or the ones they cite or the ones that those cite, and the only things I understand properly are by-order-of-historical-discovery-style textbooks (like traditional physics textbooks) that teach things that were obsolete long before my parents were born.
I've always been curious to see the response of someone with this view to the question:
What if you knew, as reliably as anything about the events of the world can be known, that there will be circumstances in X years that make it impossible for any child you conceive to take care of you when you are older?
In such a hypothetical, is the executive drive to have children still present, still being enforced by the programming of Azathoth, merely disconnected from the original trigger that made you specifically have this drive? Or does the desire go away? Or something else, maybe something I haven't thought of (I hope it is!)?
(This might seem obviously stupid to someone who's thought about the issue more in-depth, but if so there's no better place for it than the Stupid Questions Thread, is there?):
and I don't know what evidence I could reasonably expect for or against #3.
I think some tangential evidence could be gleaned, as long as it's understood as a very noisy signal, from what other humans in your society consider as signals of social involvement and productivity. Namely, how well your daughter is doing at school, how engaged she gets with her peers, her results in tests, etc. These things are known, or at least thought, to be correlated with social 'success' and 'benefit'.
Basically, if your daughter is raising the averages or other scores that comprise the yardsticks of teachers and other institutions, then this is information correlated with what others consider being beneficial to society later in life. (the exact details of the correlation, including its direction, depend on the specific environment she lives in)
Agree with the rest, so not much further to add, except for:
What seems to particularly rub people the wrong way is my suggestion that this is morally obligatory. While my views have not shifted greatly I've learned enough from this trainwreck of a post to argue this position less stridently next time around.
Yes. The mostly-utilitarian environment around LW already doesn't support moral obligations, and on top of that, given the various issues surrounding moral systems, directly asserting a claim like this that results from an assumed moral system is frowned upon, partially due to the large risk of inducing conflict and confusion.
Even though it seems like the majority of LW would "support" it, a post made entirely about encouraging people and justifying a case for the point that it should be morally obligatory for everyone to make expected utility calculations in a trolley problem and push down the fat man would not be that well received, I think.
An approach that, I think, would go over much more easily on this same subject with intellectual communities, particularly LessWrong, would be to claim that your point of argument (People X "should" have children!) contributes more towards some goal (a higher ratio of quality humans?) than alternatives, and is thus closer to optimal in that regard (if you claim something as truly optimal, without any caveats and with extremely high probability, you damn well better have the durasteel-solid math to prove it, or you deserve every criticism and tomato thrown your way! Not that I'm guiltless of this myself).
EDIT: And to complete the last thought above, which I thought I had written: in most intellectual communities, the gap between "closer to optimal" and "moral obligation" is then easier to cross if one really wants to insist on this point. Arguments could be made that anything sub-optimal is harm by way of opportunity costs, or about the relations of individuals' utility functions to social factors and thus to their behavior towards these "moral obligations", or various other ethics thingies. Basically, it's just a more stable platform and a better meeting point for launching into a pitch on this subject.
This seems worth adding to a list somewhere or making a more elaborate article about. Anyone?
At least, the label "equal treatment fallacy" seems like it represents well enough most cases and, with those examples, evokes a clear picture. It doesn't seem to refer to all "variable vs constant" issues following this pattern, but close enough.
Adjusted for the rate of decline of the human population if only a subset of it ceases creating new humans, and for the time this gives us until we dip past the civilization-sustaining threshold, then yes, there exists a relatively large subset of humans for which the equations balance out such that the researchers have enough time to develop anti-aging technology before we reach the deadline.
How large "large" is, exactly how much time that represents, and which exact conditions define the subset of humans are all yet to be determined (if I knew, I'd take over the world and make it happen!), but I'm rather confident that the number of such humans we could afford to put on Deathward duty is significantly higher than you've previously assumed.
(Partially leaning on the knowledge that human brains tend to fart out and underestimate severely when estimating the impact of large numbers like "6 billion", which you seem to have currently placed on the side where it would increase the number of quality-adjusted humans we produce and, thereby, the time we have until humanfall. )
That was an awesome answer, which leaves me with very little to add. I'll merely say that—as you've already implicitly predicted—what seems to be going on is that my nature/nurture priors are significantly different from yours and this leads us to such different conclusions.
And there's the satisfying conclusion. Our priors are uneven, but we agree on the evidence and our predictions. We can now safely adjourn or move on to a more elaborate discussion about our respective priors.
As an important data point, my wordgaming experiments rarely work out this well, but so far have retained net positive expected utility (as do, unsurprisingly, most efforts at improving social skills). I'll bump up this tactic a few notches on my mental list.
I cannot.
My time is limited by the need to spend >50 hours / wk on a "self-sustainment" job, a restriction which would only be tightened by the additional monetary requirements of human-making. The rest of my time can only be allotted to cool projects or human-making; I cannot achieve both in sufficient quality to get past the threshold of a failed effort if my available time and resources are divided between the two. One or the other will fail, and probably both if I attempt a standard distribution of resources.
I suspect that many are in similar situations.
(Your point might still stand in a more general case; I've simply attempted to turn it from a discussion of arguments and options to a discussion about ratios of numbers of people matching categories of life situations.)