Posts
Comments
Sorry for the delay, but here's the follow-up survey to share feedback. Please fill it out if you can! This helps us improve the events!
https://airtable.com/appEBNqFVvAGqyOeJ/pagoqFjtAjfcvknh8/form
FYI there’s some backup on westbound William Cannon just before the turn into the neighborhood at James Ranch Rd.
Extremely late follow-up, but [here's](https://xzrmev.clicks.mlsend.com/tl/c/eyJ2Ijoie1wiYVwiOjU3NzkyNixcImxcIjoxMjI1MDA1MzIwMzUxMjcwNDcsXCJyXCI6MTIyNTAwNTMyMTMwNTQ2OTYyfSIsInMiOiJiNmI1NzgwZjJmMmNiNGYxIn0) the post-event survey to give your feedback.
Alright, we’re set up at table 15. Excited to see all of you!
Oh wow, thanks. I think at the time I was overconfident that some more educated Bayesian had worked through the details of what I was describing. But the causality-related stuff is definitely covered by Judea Pearl (the Pearl I was referring to) in his book *Causality* (2000).
Austin, TX
LW event link: https://www.lesswrong.com/events/YQrPgwGaqqvpDPmZx/austin-lw-ssc-winter-solstice-2023
Thanks to everyone for attending! Please complete this survey about your experience so we know what went right or wrong:
https://docs.google.com/forms/d/e/1FAIpQLSf0W90v7dTWQ-DGstGZOspZnBRZ4IF52b5qmyxRTRak0FG1Sg/viewform
Looking forward to seeing all of you today! For parking, you can use the street or the neighboring Mother’s Milk Bank. If it’s crowded, try the neighborhood to the west off Justin Ln, but in all cases, watch for signs about where it is legal to park!
Austin, Texas Winter Solstice.
December 20th (Tuesday), 6 pm doors open.
Location: TBD
LW event link (details to be added).
Just added a spreadsheet to help coordinate carpooling.
Yes it is! (But don't give them alcohol.)
Final update for anyone who RSVP'd: You can park on the street in the area around Moontower. We will be providing lunch as well: breakfast tacos, including vegetarian and vegan options.
We have the whole venue reserved for us.
It's overridden, we've rented out the place.
Yes, we've rented out the venue for 12-8, so the normal hours don't apply.
Note, venue was changed to:
Moontower Cider Company, 1916 Tillery St, Austin, TX 78702, United States
PM me for directions if you didn't get them.
Late to post this, but another resource:
Why did we wait so long for the bicycle?
And the HN discussion about it, with me mentioning high-karma poster John Salvatier.
FYI, we're aware of the predictions of rain for Saturday and will be bringing tent coverings to provide some protection. It's still on!
For anyone who still follows this, no one pressed the button.
For anyone who found the event here, this is the mobile-friendly version of the program.
Good news! We'll be coordinating with the Ottawa Petrov Day to do Hardcore Mode A-minus -- we'll get a button that destroys the other party's Petrov cake.
And 22! Great turnout!
And also 23 but no second sign :-(
We’re there at table 13 now! Hope to see you!
Had to move to Jitsi. If anyone's still trying to join, go here.
My favorite one: burning wood for heat. Better than fossil fuels for the GW problem, but really bad for local air quality.
To your alternative approaches I would also add Bruce Schneier's advice in Cryptographic Engineering, where he talks a little about the human element in dealing with clients. It's similar to the Socratic approach, in that you ask about a possible flaw rather than argue it exists.
Bad: "that doesn't work. Someone can just replay the messages."
Good: "what defenses does this system have against replay attacks?"
Case in point: the weather.
Why is a mere statement of contradiction voted up to five? Something I'm missing here? I could understand if it was Clippy and there was some paperclip related subtext that took a minute to "get" but ...
Admittedly no one's ever been charged under the ADA, but there are plenty of examples of people being disciplined for violating it.
Thinking about your experiments does not (in itself) involve expenditure of government money, so I don't see how they would prosecute you under the ADA for that. Yes, managers have to be very clear to workers not to use resources, just to keep them away from edge cases, but even with that level of overcaution, managers can't actually stop you.
Even if you came back and (for some reason) said, "Hey boss, I totally thought about this experiment from the couch when the shutdown was going on", they still don't have grounds unless you were using up resources. Now, they could fire you just for the defiance (maybe), but if they're that trigger-happy in the first place, then ...
... and that is what being a big fish in a small pond feels like ;-) That is, most of them there won't even make it that far. At least, that was my experience.
(My approach was the cruder one of just taking a remainder modulo max size after each operation.)
C-style integers = integers with a fixed possible range of values and the corresponding rollover -- that is, if you get a result too big to be stored in that fixed size, it rolls over from the lowest possible value.
Ruby doesn't implement that limitation. It implements integers through Fixnum and Bignum. The latter is unbounded. The former is bounded but (per the linked doc) automatically converted to the latter if it would exceed its bounds.
Even if it did, it's still useful as an exercise: get a class to respond to addition, etc operations the same way that a C integer would. (And still something most participants will have trouble with.)
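A minimal sketch of the exercise (class and constant names are my own invention), using the remainder-modulo approach mentioned above:

```ruby
# A 32-bit signed integer that wraps on overflow, mimicking C's
# two's-complement rollover, via remainder modulo 2**32.
class CInt32
  BITS = 32
  MOD  = 1 << BITS        # 2**32
  MAX  = (MOD >> 1) - 1   # 2**31 - 1, i.e. C's INT32_MAX

  attr_reader :value

  def initialize(n)
    # Reduce modulo 2**32, then shift into the signed range.
    m = n % MOD
    @value = m > MAX ? m - MOD : m
  end

  def +(other)
    CInt32.new(value + other.value)
  end

  def -(other)
    CInt32.new(value - other.value)
  end

  def *(other)
    CInt32.new(value * other.value)
  end
end

puts (CInt32.new(2_147_483_647) + CInt32.new(1)).value  # => -2147483648
```

Responding to the arithmetic operators is the easy half; matching C's rollover on every path (subtraction below the minimum, multiplication overflow) is where participants tend to stumble.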
+1 for a (+1 for acknowledging the inconvenient) on a subject you dislike discussion of.
Depends on what you intend to get out of it, but you can go to an amateur hack night ("we're going to implement C-style integers in Ruby", "we're going to implement simulated annealing"), where almost everyone but you will have trouble conceptualizing the problem.
Non-thinking-of-customers-as-fish is not a business plan.
Work expands to fill the available time.
It's bad if they're systematically underestimating the urgency (and thus placing the deadline too far out) which seems to be the rule with humans rather than the exception.
Maybe we should have a prisoner's dilemma meta-tournament in which we define a tournament-goodness metric and then everyone submits a tournament design that is run using the bot participants in the previous tournament, and then use the rankings of those designs.
Wait: does anyone know a good way to design the meta-tournament?
Very well said! I would only add that your point generalizes: the difference between the two cases is the extent to which it has implications for future interaction ("moving the Schelling point"): blackmail-like situations are those where we intuit an unfavorable movement of the point (for the blackmailed), while we generally don't have such intuitions in the case of trade.
Somewhat OT:
Does it really help the exposition to have all the elaborate packaging (the such-and-such vase, the jester and description of the punishment, etc)? For me it just makes it harder to read: is the vase just a perspicuous example of a valuable, or is it important that it's subject to random catastrophe (from errant jesters)? Does the presence of the makeup sex have any relevance that should affect my intuition on this?
But then, a lot of people seem to like that kind of thing, even in non-fiction and when they no longer need explanations via fairy tale metaphors, so perhaps I'm alone on this.
EDIT: Sorry, I missed that you linked a fluff-free version. Much appreciated!
Are you saying that no one expected the money printing (bidding up gold) before it happened, or is there a more subtle reason why the only relevant comparison is from the moment the policy called QE started?
Someone who bought gold after the Lehman fiasco (08), but before any of those QE milestones would have had several options since then to redeem for 2-3x gain.
It's an even bigger gap if you compare to any year before, back to ~98. S&P has had a horrible volatility/return performance going back to at least then.
Well, it was a pretty safe bet in '08, given typical reactions to economic crises and the prevalence of advice like this P/S/A that "oh, there totally won't be inflation from printing money".
P/S/A: The people telling you to expect above-trend inflation when the Federal Reserve started printing money a few years back, disagreed with the market forecasts, disagreed with standard economics, turned out to be actually wrong in reality, and were wrong for reasonably fundamental reasons so don't buy gold when they tell you to.
You would have missed out on doubling or tripling your money if you hadn't bought gold when those same people had made the predictions.
Many treatments of this issue use "observer moments" as the fundamental unit over which the selection occurs: observers should expect to be in the class of observer-moments most common in the space of all observer-moments.
"Causality is based on entropy increase, so it can only make sense to draw causal arrows “backwards in time,” in those rare situations where entropy is not increasing with time. [...] where physical systems are allowed to evolve reversibly, free from contact with their external environments." E.g. the normal causal arrows break down for, say, CMB photons. -- Not sure how Scott jumps from reversible evolution to backward causality.
It's a few paragraphs up, where he says:
Now, the creation of reliable memories and records is essentially always associated with an increase in entropy (some would argue by definition). And in order for us, as observers, to speak sensibly about “A causing B,” we must be able to create records of A happening and then B happening. But by the above, this will essentially never be possible unless A is closer in time than B to the Big Bang.
That is, we are only capable of remembering (by any means) things closer to the Big Bang, because memories require entropy increase; and furthermore, memories are necessary for drawing a causal arrow that orders past vs. future. But if a system stays isentropic, it needn't have such an ordering.
Note: this is actually very close to Drescher's resolution of Loschmidt's paradox ("why is physics time-symmetric but entropy isn't?") in Good and Real: since entropy determines what we (or any observers) regard as pastward, we will necessarily observe only those time histories of increasing entropy.
Just thought I'd throw this out there:
TabooBot: Return D if opponent's source code contains a D; C otherwise.
To avoid mutual defection with other bots, it must (like with real prudish societies!) indirectly reference the output D. But then other kinds of bots can avoid explicit reference to D, requiring a more advanced TabooBot to have other checks, like defecting if the opponent's source code calls a modifier on a string literal.
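A toy sketch of the idea (function name, interface, and move encoding are my own, not the actual tournament's):

```ruby
# TabooBot: defect iff the opponent's source contains the letter "D".
# The letter is built via 68.chr so the bot's own body avoids a
# literal "D" -- the indirection described above. (These comments
# break the taboo; a real entry would strip them.)
def taboo_bot(opponent_source)
  defect    = 68.chr  # ASCII 68
  cooperate = 67.chr  # ASCII 67
  opponent_source.include?(defect) ? defect : cooperate
end

puts taboo_bot('(if true "D" "C")')  # => D
puts taboo_bot('68.chr')             # => C (no literal taboo letter)
```

The second call shows the countermove: an opponent that also constructs its defection indirectly slips past this check, which is why a more advanced TabooBot needs heuristics like flagging string-literal modifiers.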
Eh, I don't think I count as a luminary, but thanks :-)
Aaronson's crediting me is mostly due to our exchanges on the blog for his paper/class about philosophy and theoretical computer science.
One of them was about Newcomb's problem, where my main criticisms were:
a) he's overstating the level and kind of precision you would need when measuring a human for prediction; and
b) that the interesting philosophical implications of Newcomb's problem follow from already-achievable predictor accuracies.
The other was about average-human performance on 3SAT, where I was skeptical that the average person actually notices global symmetries like the pigeonhole principle. (And, to a lesser extent, whether the order in which you stack objects affects their height...)
I see that now. It didn't help that Luke_A_Somers, in defending what he did as steelmanning, kept insisting that he was "making the original argument worse".
(In any case, I don't think TB was the "steelest" man you could make here, nor the mother's real rejection.)
Sure, but I don't think EEG quality (in terms of lab vs. consumer grade) is the real bottleneck; I think it's minimizing the amount of input that must be provided at all by exploiting the regularity of the input that will be provided. The techniques available here may have been overlooked.
One character is not the same as one byte of (maximally compressed) information. The whole point of programs like Dasher (and word suggestion features in general) is to take advantage of the low entropy of text data relative to its uncompressed representation. Characteristic screenshot
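To make the point concrete, here's a rough sketch (my own code, nothing to do with Dasher's internals) of zeroth-order per-character entropy, H = -Σ p·log2(p); even this crude context-free measure comes in well under 8 bits per character, and conditional-probability models like Dasher's do far better:

```ruby
# Zeroth-order character entropy of a sample string, in bits/char.
def char_entropy(text)
  n = text.length.to_f
  text.chars.tally.values.sum do |count|
    p = count / n
    -p * Math.log2(p)
  end
end

puts char_entropy("the quick brown fox jumps over the lazy dog")  # well under 8
```

The gap between this figure and 8 bits/byte (plus the much larger gap once you condition on preceding characters) is exactly the slack a predictive input method can spend.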
Were you using a static, non-adaptive, on-screen keyboard? If so, that's why I would think connecting it to Dasher should result in a speed greater than one char per second, at least after the training period (both human training, and character-probability-distribution training).