There's an emphasis (very reasonable at this time, I think) in this high-performance day and evening lighting on trying to match natural brightness and spectrum.
Have you come across any good data or models for late-day/early dusk?
I ask because while there's a lot of good data and models for mid-day solar illumination, I don't know enough to know how much to trust them towards the end of the day (which I suspect is not really the designed use case), never mind anything after sunset—and there's over an hour between sunset and astronomical dusk.
(But yes, for midday there are lots of models to choose from. The SMARTS model seems like a good choice: good-enough accuracy, coverage of the spectrum from 280–4000 nm, easy setup, and relatively few inputs and outputs to worry about. The big compromise is that the diffuse illumination is treated as uniform, and it doesn't do clouds.)
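If it helps, here's roughly how I'd poke at the end-of-day question: run a standard clear-sky model across the sunset transition and see where its outputs stop being believable. This is a minimal sketch, not SMARTS: it uses pvlib's default Ineichen clear-sky model (no spectrum, no clouds), made-up site coordinates, and an assumed ~110 lm/W luminous efficacy to turn irradiance into illuminance. Once the sun is within a few degrees of the horizon (and certainly after sunset, where it just reports zero), I wouldn't trust it at all—which is exactly the regime I'm asking about.

```python
import pandas as pd
import pvlib

# Hypothetical site and date; swap in your own coordinates/timezone.
site = pvlib.location.Location(latitude=42.36, longitude=-71.06, tz="US/Eastern")
times = pd.date_range("2024-06-21 18:00", "2024-06-21 22:00",
                      freq="10min", tz="US/Eastern")

solpos = site.get_solarposition(times)   # solar elevation/azimuth over time
clearsky = site.get_clearsky(times)      # Ineichen clear-sky GHI/DNI/DHI in W/m^2

# Crude illuminance estimate: assume ~110 lm/W luminous efficacy for daylight.
approx_lux = clearsky["ghi"] * 110

summary = pd.DataFrame({
    "sun_elevation_deg": solpos["apparent_elevation"].round(1),
    "ghi_W_m2": clearsky["ghi"].round(1),
    "approx_lux": approx_lux.round(0),
})
print(summary)
```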
I don’t think I’ve ever seen a community which did more poorly (than it might do otherwise) because its members understood each other’s points too well.
So I don’t recommend worrying about writing too clearly; rather, either balance your time against the group’s (per considerations like my points above) or run yourself against some sort of constraint(s), like time pressure or the upper bound of how much you can care.
Even if there were no reluctance to ask questions on the part of the readership, the cost of the question-and-response loop would still be very high. For those who write out of a desire to move a group forwards, the following observations of mine may be motivating:
1) Each question consumes some of a limited supply of (something like) discussion thread-count and bandwidth, decreasing the range and depth of the consideration given to the other aspects of a topic.
2) The resolution of each loop takes time; the full version of the original topic is not completely elaborated until later, in turn delaying the development of discussion based on the fully elaborated original topic.
3) The resolution of each loop takes time. In some cases, this means that people following a discussion ought to check in multiple times to stay up to date with an evolving explanation.
4) A question-and-explanation is (nearly) invariably longer and (usually) more time-consuming to write than a (moderately) artful initial explanation... so a quickly written initial missive is usually a false economy, even selfishly.
In a group with multiple productive members, #3 and #4, by increasing the time cost of staying abreast of the topic, may tend to decrease productivity.
(Ironically, my above explanations are too terse. My apologies.)
With silver trading at $230/lb and steel going for somewhere in the neighborhood of $0.90 per pound (the range representing Chinese bulk commodity through US small-commercial-quantity structural shapes), it would appear that the price ratio hasn’t changed much. (They are both metals...)
I am not a historian or an economist, but it seems better to compare steel to food: in 1500ish England, 8 pennies gets you 2 bushels of grain (i.e. 100 lbs; three months’ porridge)... or one axe head (1-2 lbs of steel). (Though note that 16th-century food prices are “weird” - it’s prior to refrigeration, mechanization, chemical fertilizer, barbed wire...) http://faculty.econ.ucdavis.edu/faculty/gclark/papers/Agprice.pdf http://medieval.ucdavis.edu/120D/Money.html
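For what it's worth, a back-of-the-envelope version of both comparisons, using only the rough figures above (all of which are approximate):

```python
# Modern prices quoted above: silver ~$230/lb, steel ~$0.90/lb.
silver_per_lb = 230.0
steel_per_lb = 0.90
print(f"Modern silver:steel price ratio ~ {silver_per_lb / steel_per_lb:.0f}:1 by weight")

# ~1500 England: 8 pence buys ~2 bushels (~100 lb) of grain, or one axe head (~1.5 lb of steel).
grain_pence_per_lb = 8 / 100    # pence per lb of grain
steel_pence_per_lb = 8 / 1.5    # pence per lb of steel (middle of the 1-2 lb range)
print(f"1500s steel:grain price ratio ~ {steel_pence_per_lb / grain_pence_per_lb:.0f}:1 by weight")
```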
If the actual utility you receive as a function of the total "payout" of your "investments" has diminishing marginal returns, then the character of the portfolio to maximize expected utility depends upon the failure correlations between the investment options.
I.e., in the case where the utility function is sufficiently concave in payout and the various investments all fail independently of each other, a strategy of investing in only the highest-yield and lowest-risk choices is not optimal: a small investment in a middling option decreases the risk of total failure (and the corresponding hit to expected utility) enough to be worth the hit to expected payout.
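A toy Monte Carlo version of that general claim, with entirely made-up return and failure numbers and a square-root (i.e., diminishing-marginal-returns) utility, just to show the mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000  # Monte Carlo samples

# Three independent assets: (gross return if it pays off, probability of total failure).
# Numbers are illustrative only.
ASSETS = [
    (1.05, 0.00),  # safe, low yield
    (1.50, 0.20),  # middling yield, middling risk
    (4.00, 0.60),  # high yield, high risk
]

def expected_utility(weights, utility=np.sqrt):
    """Expected utility of the terminal payout when each asset fails independently."""
    payout = np.zeros(N)
    for w, (gross, p_fail) in zip(weights, ASSETS):
        survives = rng.random(N) > p_fail
        payout += w * gross * survives
    return utility(payout).mean()

print(expected_utility([0.10, 0.00, 0.90]))  # barbell: safe + risky only (~0.96 here)
print(expected_utility([0.10, 0.10, 0.80]))  # small slice moved to the middling asset (~1.02)
```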
I haven't run the analysis, but my intuition is that advocacy for a barbell strategy limited to just high-risk stocks and T-bills is an empirical claim about the risks along the following lines:
1) The failure of middling-risk stock is well correlated with the failure of high-risk stock.
2) The failures of investments less risky than T-bills (paper currency, rice & beans under nitrogen, solar panels, etc.) are well correlated with the failure of T-bills, and those investments have lower annual yields.
3) Utility is a moderately concave function of payout. (If it were very concave, you'd want most or all of your funds in T-bills, not just a bit; if it were linear or convex, "risk" isn't a thing and all funds would be in the stocks.)
I'll trust Taleb on #1, and #3 seems reasonable most of the time, but on #2, it would seem to me that while a good portfolio would be based around the high-risk stocks backed up with a small portion of cash-equivalents, the "insurance" against failure of both of those things is cheap enough that it should be included early on.
(As an aside, I'm pretty sure that Taleb suggests "mostly" high-yield/high-risk holdings, with only enough T-bill stuff to keep you off the streets if the stock fails. That's not what I'd pick out as a strategy that is likely to cause bad outcomes because you didn't take enough risk.)
There's some concern in the other comments about the aesthetics of this solution, and some call for a pre-built solution from an installation-labor perspective.
For those people, I suggest getting a "High Bay LED light". These are really bright hanging light fixtures... most rooms would be well served by 1-2 of them, and they come in two shapes: round "UFO" and "linear". I think they look pretty good, as the need for good heat-sinking makes them one of the few products where even the budget producers have to use "quality construction".
These are cost-competitive with the lumenator build suggested by the OP:
The lumenator ($80 of bulb sockets, $40 of command hooks, $200 of light bulbs) totals $320, consumes 380 W, and produces about 40,000 lumens.
Modern high bay lighting solutions generally cost about $0.90/watt and produce about 130-140 lumens/watt, so two 150 W high bay lights will cost about $300 and produce about 39,000 lumens. (As an added bonus, the higher fixture efficiency saves about $4/year at 150 days x 3 hrs/day of use.)
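To spell out the arithmetic (prices, efficacies, and the electricity rate are all rough assumptions, not quotes):

```python
# Lumenator build from the OP
lumenator_cost = 80 + 40 + 200      # sockets + hooks + bulbs, USD
lumenator_watts = 380
lumenator_lumens = 40_000

# Two 150 W high-bay fixtures at ~$0.90/W and ~130 lm/W
highbay_watts = 2 * 150
highbay_cost = 0.90 * highbay_watts
highbay_lumens = 130 * highbay_watts

print(f"Lumenator: ${lumenator_cost}, {lumenator_lumens} lm, "
      f"{lumenator_lumens / lumenator_watts:.0f} lm/W")
print(f"High bay:  ${highbay_cost:.0f}, {highbay_lumens} lm, "
      f"{highbay_lumens / highbay_watts:.0f} lm/W")

# Energy savings at 150 days/year x 3 hours/day, assuming ~$0.12/kWh
hours_per_year = 150 * 3
kwh_saved = (lumenator_watts - highbay_watts) * hours_per_year / 1000
print(f"Energy saved: {kwh_saved:.0f} kWh/yr (~${kwh_saved * 0.12:.0f}/yr)")
```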
Disadvantages:
The CRI is generally a bit lower, around 85. The linked bulbs have a CRI of 92.
The availability of 2700K fixtures is very poor. Most high bay bulbs are 5000K, with good availability of 4000K lights.
Dimmer switch wiring uses a 0-10 V control voltage. This can be left unconnected to run at full brightness, or, for one light, a 100k-ohm potentiometer works... for a single control operating two or more lights, the hydroponics industry seems to have produced a large number of inexpensive controllers.
These are largely marketed to industrial customers, so be careful to buy one that already has a cord installed, or be prepared to do some minor wiring.
Motivation means the same thing as "tactile ambition", so using the new phrase is a bad idea.
We hear self-reports - or at least legends - of people "motivated" by far-mode concerns, so I think it can be credibly said that the public conception of "motivation" allows for both the visceral and immediate "motivated not to touch the stove again, lest they get burnt" and the abstract and far-off "motivated to increase revenues in the coming decade".
Lionhearted's term expressly forbids far-mode concerns - it picks out a subset of motivation.
However, I cannot endorse the phrase, since it seems that building the concept out as "near-mode motivation(s)" is more expressive (it incorporates the entire near/far concept), less jargony, and nearly as short as "tactile ambition". (It can probably be trimmed to "near motivations" - which is shorter than "tactile ambition" - in contexts where it's used often.)
I'm pretty sure you are correct that honesty is a sort of signaling thing, but I do not find it possible to "join in the signaling when it is useful" - it seems to me that evidence as to the honesty/dishonesty of a person usually accumulates slowly, so you more-or-less have to pick a strategy and stick to it. (My personal experience is that I have a hard time getting people to believe the things I say even when I'm ~100% honest, and that my persuasiveness goes downhill rapidly if I dial that back.)
Finally, I think you're not winning if you [do anything but] directly lie ... to the gestapo
In the usual situation where the gestapo questions you, I think you are correct. However, the hypothetical was unusual in that:
1) The gestapo agent is fluent in meta-honesty
2) The gestapo agent knows that you firmly adhere to a code of meta-honesty
3) #1 and #2 are common knowledge between you and the gestapo agent
Together, these (as Eliezer notes, very unlikely) requirements mean that not "playing the meta-honesty game" - that is, directly lying - is in fact a strong tell of object-level dishonesty: why would you break your code of protecting your counterfactual selves if you were not hiding *actual* Jews? (Or at least nervous because of the nearby authority figure.)
Again, I agree that in reality, this falls apart - for instance, without #1 your response reads as prevarication, and without #3 you'd likely lie and be caught lying.
(It is interesting that, unless I'm missing something, you don't have to assume #2 - if the agent doesn't know that you're meta-honest, you don't get punished for that strategy; you just don't get the benefit from your long history of honest meta-honest conduct.)
To paraphrase adamzerner...
My impression is that the expected cost of using this technique online - the probability of it backfiring multiplied by the average cost in the case that it does - is low.
While most of my communication experience is from my past role as a moderator of a youth-dominated engineering forum, and so is somewhat unusual, I believe that the expected value is in fact highly positive.
I think this is mostly because:
It's a pretty cheap technique to implement - you can simply paraphrase the person you are responding to, rather than directly quoting (as I did in this post).
In the case that you, in good faith, misunderstand the other member, they are going to have to re-explain their position anyways; it is far better to catch this early on, before anyone gets frustrated and before any more time is wasted.
Same function and justification as checksums, I suppose...
On the other hand, when I was only ~50% sure what the other person meant, I found it better to simply let them know that they were being unclear.
If anyone else is interested in them, I'm willing to score, count, and/or categorize the responses to the "Changes in Routine" and "What Different" questions.
However, I've started to try to develop a scheme for the former... and I've hit twenty different categories (counting subcategories) and will probably end up with 5-10 more if I don't prune them down.
What sort of things do you think might be interesting to look for?
(Though I haven't put anything down on paper yet, the latter seems like a much simpler problem. However, if you have thoughts on the selection of bins, please share them.)
(As a note: I would be able to modify the .xls or such, but someone else would have to do the stats; I haven't developed practical skills in that field yet, so the turnaround time would be awful.)
As a possibility, buying current beach-front property is consistent with believing in global warming if you also believe that it is hard enough to predict where the new beach-front will be that it is cheaper (say, per future-discounted year of residence) to buy property on the current beach now and again at the beach's new location later, than it is to buy any combination of properties today.
The inheritance question is actually rather different, as it is about buying beach-front-property-futures in the present.
I suspect that, while it is a legitimate distinction, dividing these skill-rankings into life domains:
A) Confuses what I feel to be your main (or at least, old) idea of agency, which focuses on the habit of intentionally improving situations, with the domain-specific knowledge required to be successful in improving a situation.
Mostly, I don't like the idea of redefining the word agency to be the product of domain-skills, generic rationality skills, and the habit of using rationality in that domain... because that's the same thing as succeeding in that domain (what we call winning) - well, minus situation effects, anyways. It seems far better to me to use "agency" to refer only to the habitual application of rationality.
You still find that agency is domain specific, but now it is separate from domain skills; give someone who is an agent in a domain some knowledge about the operative principles of that domain, and they start improving their situation; give a non-agent better information and you have the average Lifehacker reader: they read all this advice and don't implement any of it.
B) Isn't nearly fine-grained enough.
Besides the usual Psych 100 stuff about people remembering things better in the same environment they learned them in (how many environments can you think of, versus how many life domains? What's the ratio between those numbers - in the hundreds?), here's an anecdote which really drove the point home for me:
I have a game I'm familiar with (Echoes), which requires concurrent joystick and mouse input, and I like to train myself to use various messed-up control schemes (for instance, axis inversion). For several days I have to make my movements using a very attention-hungry, slow, deliberate process; over time this process gets faster and less attention-hungry, reducing the frequency and severity of slip-ups until I am once again good at the game. I feel the parallels to a rationality practice are obvious.
Relevantly, the preference for the new control scheme then persists for some time... but, for instance, the last one only activated when some deep pattern-matching hardware noticed that I had my hand on the joystick AND was playing that game AND was dodging (menus were no problem)... if I withdrew any of those conditions, mouse control was again fluent; but put your hand back on the joystick, and three seconds later...
So, I suppose my point in this subsection is that you cannot safely assume that because you've observed yourself being "agenty" in (say) several relationship situations, you are acting with agency in any particular relationship, topic, time, place, or situation.
(Also, I expect, the above game-learning situation would provide a really good way to screen substances and other interventions for rationality effects, but I haven't done enough experimentation with that to draw any conclusions about the technique or any specific substances.)