- In case people want to get to know each other better outside the meetup, you might want to mention reciprocity.io, the rationalist friend-finder/dating site.
Unfortunately that requires Facebook =/ and most of my friends avoid / don't have Facebook for privacy reasons.
Alternatives:
- In case you're on one of the Fletcher-enabled rat Discord servers (like discord.gg/acx), Fletcher also has a built-in reciprocity feature at https://fletcher.fun/?view=reciprocity (you'll need to log in to your Discord account.)
- …?
Technology Connections viewers already know this somewhat related bit: Consider switching to loose powder instead of tabs, or having both. The dishwasher runs three cleaning cycles (pre-wash, main, rinse), and the tab only goes in for the second phase. The first phase tries to get all the food and grease off using just water… which isn't ideal. Adding like 1/2 a teaspoon of the loose powder directly onto the door / into the tub at the bottom will greatly support the pre-wash phase and should deal with most things.
Since I started doing that, I don't bother scraping at all (obviously still discarding loose food remains into the bin first) and basically never get stuck-on bits. (Every couple of months stuff like strongly baked-on cheese from e.g. a gratin may stick to the dish, but that's it.)
The way I approach situations like that is to write code in Lua and only push stuff that really has to be fast down to C. (Even C+liblua / using a Lua state just as a calling convention is IMHO often nicer than "plain" C. I can't claim the same for Python...) End result is that most of the code is readable, and usually (i.e. unless I stopped keeping them in sync) the "fast" functions still have a Lua version that permits differential testing.
Fundamentally agree with the C not C++/Rust/... theme though: C is great for this because it doesn't have tons of checks. (And that's coming from someone who's using Coq / dependent types regularly, including at work.) Compilers generally want to see that the code is safe under all possible uses, whereas you only care about the specific constellations that you actually use when prototyping. Convincing the compiler, disabling checks, and/or adding extra boilerplate adds overhead that seriously slows down the process of exploration, which is not something that you want to deal with in that mode.
Sabine Hossenfelder's assessment, quickly summarized (and possibly somewhat distorted in the process):
- Uranium 235 is currently used at about 60K tons per year. World reserves are estimated to be 8M tons. Increasing the number of NPPs of current designs by a factor of ~10 means it's about 15-20 years until it'd no longer be economically viable to mine U235. Combined with the time scales & costs of building & mothballing NPPs, that's pretty useless. So while some new constructions might make sense, it's not good as a central pillar of a strategy.
- Due to the cost of NPP construction etc., nuclear power is way more expensive than all other options. Price of renewables is likely to continue to fall, widening the gap even further. So nuclear is economically very unappealing, and that's most likely just getting worse with time.
- Research into new tech takes time (e.g. designs that could use the other 99.3% of available Uranium that's not U235), and the currently available or soon-to-be-available candidates aren't looking much better: they're unreliable and/or likely to cost even more (at least initially).
This seems to be another case of "reverse advice" for me. I seem to be too formal instead of too lax with these spatial metaphors. I immediately read the birds example as talking about the relative positions and distances along branches of the phylogenetic tree, and your orthogonality description as referring to actual logical independence / verifiable orthogonality. It's also my job to notice hidden interactions and stuff like weird machines, so I'm usually very aware of those too, just by habit kicking in.
Your post made me realize that instead of people's models being hard to understand, there simply may not be a model that would admit talking in distances or directions, so I shouldn't infer too much from what they say. Same for picking out one or more vectors: for me that doesn't imply that you can move along them (they're just convenient for describing the space), but others might automatically assume that's possible.
As others already brought up, once you've gotten rid of the "false" metaphors, try deliberately using the words precisely. If you practice, it becomes pretty easy and automatic over time. Only talk about distances if you actually have a metric space (doesn't have to be euclidean, sphere surfaces are fine). Only talk about directions that actually make sense (a tree has "up" and "down", but there's no inherent order to the branches that would get you something like "left" or "right" until you impose extra structure). And so on... (Also: Spatial thinking is incredibly efficient. If you don't need time, you can use it as a separate dimension that changes the "landscape" as you move forward/backward, and you might even manage 2-3 separate "time dimensions" that do different things, giving you fairly intuitive navigation of a 5- or 6-dimensional space. Don't lightly give up on that.)
Nitpick: "It makes sense to use 'continuum' language" - bad word choice. You're not talking about the continuum (as in real numbers) but about something like linearity or the ability to repeatedly take small steps and get predictable results. With quantized lengths and energy levels, color isn't actually a continuous thing, so that's not the important property. (The continuum is a really really really strange thing that I think a lot of people don't really understand and casually bring up. Almost all "real numbers" are entirely inaccessible! Because all descriptions of numbers that we can use are finite, you can only ever refer to a countable subset of them, the others are "dark" and for almost all purposes might as well not exist. So usually rational numbers (plus a handful of named constants) are sufficient, especially for practical / real world purposes.)
Main constraint you're not modeling is how increasing margin size increases total pages and thus cost.
That's why I'm saying it probably won't need that for the footers. There's ~10mm between running footer and text block, if that's reduced to ~8 or 9mm and those 1-2mm go below the footer instead, that's still plenty of space to clearly separate the two, while greatly reducing the "falling off the page" feeling. (And the colored bars that mark chapters are fine, no need to touch those.)
Design feedback: Alignment is hard, even when it's just printing. Consider bumping up the running footer by 1-2mm next time, it ended up uncomfortably close to the bottom edge at times. (Also the chapter end note / references pages were a mess.) More details:
variance: For reference, in the books that I have, the width of the colored bars along the page edge at each chapter (they're easy to measure) varies between ~4.25mm and ~0.75mm, and sometimes there's a ~2mm width difference between top and bottom. (No complaints here. The thin / rotated ones look a bit awkward if you really look at them, but you'll likely be distracted by the nice art on the facing page anyway. So who cares, and they do their job.)
footers: Technically, the footer was always at least 2mm away from the edge (so it didn't really run the risk of getting cut off), but occasionally it felt so close that it was hard not to notice. That distracted from reading, and made those pages feel uncomfortable… giving it just 1 or 2mm more should take out the tension. (While I didn't experiment with it, my gut feeling says the text block probably won't have to move to make more space.)
end notes/references: These just looked weird to me. Rambling train of thought style notes:
- Choice of a different font that looked thin and spindly compared to the main text, worsened by the all-italic choice. (That also made it look fidgety/restless, thanks to constant kerning weirdness – the font is clearly not meant to be used this way, worst in URLs like Incentives p. 77 item 8.)
- Usual citation formats are something like 'Author (Year). "Title." More stuff.' or 'Author (Year). Title. More stuff.', so there's either sufficient punctuation to create noticeable points, or the emphasis creates a texture difference. The chosen 'Author. Title. Year' just runs together with no strongly noticeable points, making it hard to pick out the individual fields. It seems to be close to the Bluebook style, but that really relies on the texture contrast in the "Author, Title. Year" to function.
- The parenthesized item numbering (23) also looked weird in that font, maybe dotted 23. or 23: would have looked better; combined with the numbers being the only thing not italicized, it looked more like a mistake than intention to me.
- Also lots of typos / inconsistencies, e.g. occasional missing years/dates, sometimes it's just "Wikipedia. Article name." (good IMHO) then sometimes it's "Article. Wikipedia" (why is the "author" not first here?) and sometimes it's a badly-formatted link (whyyy), occasionally it looks like there's two spaces instead of one (or just more kerning weirdness?), and e.g. Incentives p. 206 has items 4 + 5 running into each other.
- Some indication of where the text can be found would be helpful. (Also whether it's a text or something else.) Most seem to be on LW, some can be guessed, but for others you have to search. Shorthands like e.g. a ° after the title to mark "this is on LW" or maybe a small ˣ for "can be found on arXiv" would be enough for the usual sources. But try finding Incentives p. 133 item (10) by searching! The only way to find it is to look up the original article, locate the link, and then facepalm hard at the incomplete information.
- It all seemed more like an afterthought than a planned part of the book, and the look of the whole really encouraged me to quickly turn to the next page, instead of looking for more things to read.
- (Also: Consider reducing the line height and item spacing? It's not text that you read continuously, so less leading both reduces the required space and makes it stand apart from the main text without using questionable fonts.)
Apart from that, I loved the design! Thanks to everyone involved for making the books, they're lovely! <3
Sounds great so far, some questions:
- How does travel work? Do you get to Prague on your own and then there's organized transport for the last leg, or do you have to do the whole journey yourself? (I don't drive / only use public transport. Car-less travel to "a village about 90 mins [by car?] from Prague" could be anywhere between slightly annoying and near-impossible.)
- How does accommodation & food work?
And (different category)
- Are some of you at LWCW to chat in person?
Re solanine poisoning, just based on what's written in Wikipedia:
Solanine Poisoning / Symptoms
[...] One study suggests that doses of 2 to 5 mg/kg of body weight can cause toxic symptoms, and doses of 3 to 6 mg/kg of body weight can be fatal.[5][...]
Safety / Suggested limits on consumption of solanine
The average consumption of potatoes in the U.S. is estimated to be about 167 g of potatoes per day per person.[11] There is variation in glycoalkaloid levels in different types of potatoes, but potato farmers aim to keep solanine levels below 0.2 mg/g.[18] Signs of solanine poisoning have been linked to eating potatoes with solanine concentrations of between 0.1 and 0.4 mg per gram of potato.[18] The average potato has 0.075 mg solanine/g potato, which is equal to about 0.18 mg/kg based on average daily potato consumption.[19]
Calculations have shown that 2 to 5 mg/kg of body weight is the likely toxic dose of glycoalkaloids like solanine in humans, with 3 to 6 mg/kg constituting the fatal dose.[20] Other studies have shown that symptoms of toxicity were observed with consumption of even 1 mg/kg.[11]
If 0.18 mg/kg corresponds to 167 g of potatoes, then 1 mg/kg is reached at about 927 g of potatoes, which equals roughly 800 calories. So if you "eat as much as you want", I'm not surprised at all if people show solanine poisoning symptoms.
(And that's still ignoring probable accumulation over prolonged time of high consumption.)
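A quick sanity check of that arithmetic (a sketch; the ~70 kg body weight and the kcal/100 g figure are my assumptions, the rest is from the quoted Wikipedia passage):

```python
# Sanity check of the numbers above.
solanine_per_g   = 0.075   # mg solanine per g potato (Wikipedia "average potato")
daily_potatoes_g = 167     # average US consumption, g/day (Wikipedia)
body_weight_kg   = 70      # assumed reference body weight
kcal_per_100g    = 80      # rough figure for potatoes (my assumption)

daily_dose = solanine_per_g * daily_potatoes_g / body_weight_kg
print(round(daily_dose, 2))                    # ~0.18 mg/kg, matching the quote

grams_for_1_mg_per_kg = daily_potatoes_g / daily_dose
print(round(grams_for_1_mg_per_kg))            # ~930 g of potatoes
print(round(grams_for_1_mg_per_kg * kcal_per_100g / 100))   # ~750 kcal – same ballpark as the ~800 above
```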
My gut feeling (no pun intended) says the mythical "super-donor" is a very good excuse to keep looking / trying without having to present better results, and may never be found. Doing the search directly in the "microbiome composition space" instead of doing it on people (thereby indirectly sampling the space) feels way more efficient, assuming it is tractable at all.
If some people are already looking into synthesis, is there anything happening in the direction of "extrapolating" towards better samples? (I.e. take several good-but-not-great donors that fall short in different ways, look at what's the same / different between their microbiomes, then experiment with compositions that ought to be better according to the current understanding, and repeat.)
I have something in the pipeline, but it'll take a while... if it's trying to be "actually" alien, it's kinda important that it's internally consistent. "Add some arbitrary bytes to [...represent] metadata" is exactly what you don't want to do. Because if you do, sure, it'll be hard, it'll (probably) be eventually solvable, but it'll be... somewhat dissatisfying. Same for using stuff like NTSC, it's just... why would they come up with exactly that? It just doesn't make any sense!
So, in case anyone else wants to also make a good challenge in this style, here's some guidelines. The first and most important one:
- Ensure that every. single. layer. has some clearly recognizable artifact / regularity. Blocks + checksums, pilot tones, carrier waves, whatever. It can occasionally be rather obscure / hard to identify, but never completely absent. If you have to get through ≈10 layers of stuff to solve the whole thing, getting stuck for a few hours (or even days) with no indication of whether you're even heading in the right direction is very unsexy and will likely kill the mood. (It's also internally consistent w.r.t. the back story of an alien message: For an intentional transmission, they want you to understand what they're sending. Even an accidental or amateur message would likely use (their) standard protocols / file formats, which will likely have some regularity – headers, blocks, trailers, ... – because that makes it easier to debug problems.)
The rest falls into the "do some world building" and "ensure internal consistency" buckets:
- Weird senses are cool, not everyone needs to have a focus on vision, especially not on exactly 3 types of receptors focusing on exactly the same 400–700 nm range as we do – the electromagnetic spectrum is way larger. (Even animals already have pretty crazy stuff going on!) Vision doesn't have to be the primary sense (with low-resolution "eyes", other senses may be preferable.) Having a two-dimensional arrangement of identical building blocks (a.k.a. "images" made of "pixels") isn't necessarily ideal for all senses. Different senses likely also result in other things being obvious, considered likely or unlikely, or being missed entirely when trying to find message formats that are most widely understood. (Beings whose senses revolve entirely around sound and movements may not have static things like books or writing at all; a cave-dwelling culture near a very aggressive star may consider all EM / "light" exposure as dangerous or deadly (compare: the popular image of radioactivity for us) and strongly avoid developments in that direction.)
- Try not to copy "historical accidents" blindly. Most existing media formats are harshly optimized towards being barely good enough to fool human senses. (Especially lossy compression generally exploits shortcomings of human senses to remove stuff that we would be unable to sense in the presence of stronger surrounding stimuli, but even simple "lossless" formats are generally human-centric. Historically, memory/storage was not cheap at all, and you really wanted to get the most out of your bits.) An alien message will likely try to avoid optimizing in the wrong way, but may well fail at that in interesting ways.
  RGB bakes material properties and lighting into just three numbers and ignores all other things, even though we are able to see e.g. metamerism effects in the real world. Stereo audio bakes sound waves into two channels, even though we can detect much more finely where sounds are coming from in the real world. (Your very personal ear shape influences how things sound coming from different directions, so while you can shove approximate directions and distances into two channels, it sounds quite different from what you're used to in the real world.) Different priorities will result in different trade-offs.
  Even for computers, binary isn't required. (There were decimal computers; that's just way more complex than 2 states, and so binary eventually won out. If the prevailing number base is 3 or 4 instead of 10, that's much more doable.) Same goes for 8-bit groups as the next building block. (16 is "more square" and behaves nicer in some contexts. But we also had what's basically 9-bit computers that used a 36-bit word size, so the next level of grouping doesn't have to cleanly divide at all. It also provides space for e.g. a parity bit if you do want to stick to "base-powered" block sizes.)
- Look at how things would actually transmit – you don't get a convenient blob of bits. (That would already lock in a binary base, potentially bytes as the next level too, making it much harder to do actual fun stuff with weird senses.) Instead, you'd get some noisy recording (likely with burst errors blotting out information here and there) from which you have to recover the frequency at which symbols are coded, then recover that sequence and try to recognize and fix errors, and so on... beyond parity bits, there's full error correction codes, interleaving, and all the rest of coding theory (a toy sketch follows right after this list). You may want to repeat the same message in a loop, or have a sequence of messages to make it easier to decode the main message (e.g. stylized geometric shapes, simple but anisotropic patterns, or chirps to help fix directions and scales before the "messy" recorded data like images or sound.)
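To make the coding-theory point concrete, here's a toy sketch (my own illustration, not part of any specific challenge): sending the message three times, interleaved, so that one burst error damages at most one copy of each bit and a majority vote recovers the original.

```python
# Toy sketch: 3x repetition + interleaving.
# Sending the whole message three times back-to-back is the same as repeating
# each bit three times and interleaving the copies, so a burst shorter than the
# message length corrupts at most one copy of any given bit.
REPS = 3

def encode(bits):
    return [b for _ in range(REPS) for b in bits]

def decode(received, n):
    copies = [received[r * n:(r + 1) * n] for r in range(REPS)]
    return [1 if sum(c[i] for c in copies) > REPS // 2 else 0 for i in range(n)]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
tx = encode(msg)
tx[3:8] = [0, 0, 0, 0, 0]            # a burst error wipes out 5 consecutive symbols
assert decode(tx, len(msg)) == msg   # ...and the message is still recovered
```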
...and so on like that, but I hope that's enough to send anyone considering making such a challenge at the "actual" alien message level in approximately the right direction. Eagerly anticipating silly alien selfies! ;)
I can't recall specific names / specific treatments of this, but I'm also relatively sure that it's kinda familiar, so I suspect that something exists there. Problem is that, in some sense, it falls right between areas.
On the practical side, people don't really care where the problem originates, they just want to fix it. (And people are swimming in abstraction layers and navigate them intuitively, so a "this is one layer up" doesn't really stand out, as it's still the bottom layer of some other abstraction.) So from the perspective of computer security, it just falls into the big bucket of what's called a "denial of service" vulnerability (among lots of other things like just temporarily slowing things down, or making a system do 2^51 compute steps (i.e. locking it up effectively forever but not "truly" forever) or making it drown in traffic from outside), but there's generally no category separation between "possible bugs" (that just happen to be in one specific implementation) and "necessary bugs" (the "logic bugs" that are in the spec, where every conforming program will end up with the same problem.)
On the theoretical side, you mostly work with bounded recursion instead of possibly unbounded loops, so the problem is more-or-less impossible. (And while Turing machines exist on the theory side too, working with them sucks so it doesn't really happen a lot, so people don't really notice problems that occur when working with them. And so it's mostly bounded recursion.) As an example, if you write your program in Coq, you're forced to prove that every recursive computation terminates in order to be allowed to compile (and run) your program. So as soon as you care even a little about verification, the problem basically disappears / becomes invisible. In theory, the path to reaching a fixed point by reacting to each input in exactly the same way is still open (by messing up your state), but that's very unlikely to hit by chance (especially if you write in the typical stateless style), so people might not stumble on that ever, so the chances of thorough exploration are rather low too. (Still feels like there was something, somewhere...)
In other areas:
- From the perspective of dynamical systems, it's just a fixed point.
- In temporal logic, it's something that would be happening "eventually forever".
If you can't find anything after using that to find more keywords, I'd suggest you ask someone at the Santa Fe Institute, they do lots of dynamical systems stuff, and if it exists at all, there's a pretty good chance that someone there has heard of it. (I've seen them mention kinda nearby concepts in one of the "Complexity Explorer" courses. According to that page, Cristopher Moore does a computability etc. course soon-ish, so he should either be the right person to ask or know who to forward it to.)
Some very quick notes, don't have time for more:
- It looks to me as if you fail to separate formal models from concrete implementations. Most exploitability results from mismatches between the two. As a concrete example, the PDF spec says (said?) that a PDF starts with a "this is a PDF" comment, e.g. `%PDF-1.4`. (Right at the start, with nothing before it.) In practice, most readers are happy if that signature sits anywhere in the first 4KiB or something like that. The end result is benign things like polyglot files on the one end – look for Ange Albertini and/or the International Journal of PoC||GTFO ("Proof-of-Concept or Get The F*** Out") – or malicious results like your [guard mechanism – firewall/antivirus/…] parsing things one way and considering them safe, and [target program] parsing them in another way and [blowing up – crashing / running embedded code / …]. As long as you only look at specs / formal models, you cannot talk about the actual effects. You have to actually look at the code / program, have to get down to the base level, to see these problems. (A minimal sketch of such a parser mismatch follows right after this list.)
- For more stuff on that, look at LANGSEC (very short summary, but the papers are quite readable too) or the stuff that Halvar Flake does. Briefly and slightly over-simplified, the model here is that you have a universal machine (your computer) but want it to do a very specific limited thing, so you write an emulator (your program) for a finite state machine or similar. There are certain state transitions implemented by your code, and the program starts out in a "sane" state. Certain events (weird inputs, memory manipulation, …) might move the program from a sane state into a "weird state", and after that you can still trigger the transitions intended for sane states, but instead they are run on weird states – you now have a "weird machine" that does unintended things. Another key insight from the LANGSEC stuff is that you write the program, so you probably think you decide what's going to happen, but actually the program is effectively an interpreter for the input, which behaves like a program running on your "interpreter", so whoever provides the input (files, network packets, …) actually decides what happens. (Within the bounds set by your code, but those are often uncomfortably loose.) Not only is code data (as taught by Lisp), but data is also code.
- Formal models can't be exploitable in this sense, as they are the specification. What can happen is that there are logic bugs in the spec, but that is a very different class of problems. So an abstract Turing machine can't really be hackable as it will behave exactly according to specification (it is the spec); the program implemented by the TM might be faulty and exploitable, and a concrete implementation of the TM might be too. (Of course, logic bugs can also be very very bad.)
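As a minimal illustration of that first point (my own sketch, not any real product's logic): two "validators" disagreeing about the same bytes, one following the strict "signature at offset 0" reading, one mimicking lenient real-world readers.

```python
# Sketch: the same bytes, parsed two ways.
def strict_is_pdf(data: bytes) -> bool:
    # "signature right at the start, nothing before it"
    return data.startswith(b"%PDF-")

def lenient_is_pdf(data: bytes) -> bool:
    # what many real readers do: accept the signature anywhere in the first 4 KiB
    return b"%PDF-" in data[:4096]

# A polyglot-ish blob: some other header first, PDF signature a bit later.
blob = b"GIF89a" + b"\x00" * 100 + b"%PDF-1.4\n...rest of the document..."

print(strict_is_pdf(blob))   # False -> a gateway scanner might not treat it as a PDF at all
print(lenient_is_pdf(blob))  # True  -> the viewer on the target machine happily "opens" it
```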
On the topic of grayscale: Love it, strongly recommend that everyone try it too.
What I'd really like to see is a way to selectively grayscale applications, or exclude some from the grayscale filter. (So e.g. if I'm running Gimp/PS/..., keep just that one non-grayscale while still desaturating everything else.) If anyone knows anything, all pointers are strongly appreciated!
(I think it ought to be possible with the X server permissions model on Linux, but I'm not sure if a program could even have the permissions to do this on Windows.)
I've done both – asparagus and lettuce – and it works. (Especially for the more bitter kinds of leafy greens, it somewhat reduces bitterness. It can also soften the stems / bottom bits and make them more usable. So e.g. finely chopping the stem and sauteing with some onion and mushrooms can be a good way to use what you'd otherwise discard.)
There's even salad-based soups (both with something like e.g. romaine added just before the end, and also with e. g. iceberg shredded, cooked, and blended), and while it may seem strange initially and mess with your expectations, they work just fine. (The end result doesn't match "what salad is supposed to be like", so the gut reaction for many is to reject it. But actually it's just leafy greens.)
Typo: "Prediction markets require liquidity. Suppose you seeded your prediction market with $10,000 of liquidity such that an investor can invest $10,000 into the market without noticeably moving the prices."
That number is wrong, and I'm not entirely sure which one was intended.
They're actually quite different from how our computers work (just on the surface already, the program is unmodifiable and separate from the data, and the whole walking around is also a very important difference[1]), but I agree that they feel "electronics-y" / like a big ball of wires.
Don't forget that most of the complexity theoretic definitions are based on Turing machines, because that happened to become the dominant model for the study of these kinds of things. Similar-but-slightly-different constructions would perhaps be more "natural" on lambda calculus, so if that had been the dominant model, then the Turing machine versions would have been slightly awkward instead.
But memory isn't actually a problem – see Lisp. Represent everything as a cons cell (a storage unit with two slots) or a "symbol" (basically an unmodifiable string that compares in O(1)), where cons cells nest arbitrarily. (Simplest cell representation is two pointers and maybe a few tag bits to decide what it represents… optionally plus garbage collector stuff. Very close to how our computers operate!) Variable names become symbols, application becomes a cell, and lambdas can either become a specially tagged cell, or a nested construction like `(lambda ('var body))` (doesn't really make a difference memory-wise, just a constant factor.) Time is also easy – just count the number of reductions, or maybe include the number of substitutions if you think that's important. (But again, constant factor, doesn't actually matter.)
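Here's a quick sketch of that representation idea (my own code, with tuples standing in for cons cells and a deliberately naive, capture-ignoring substitution):

```python
# Lambda terms as nested "cells"; cell count as the space measure,
# beta-reduction count as the time measure.
def lam(v, body): return ("lam", v, body)
def app(f, x):    return ("app", f, x)
# variables are plain strings ("symbols")

def cells(t):
    """Rough space measure: one cell per node, one per symbol."""
    return 1 if isinstance(t, str) else 1 + cells(t[1]) + cells(t[2])

def subst(t, v, arg):
    if isinstance(t, str):
        return arg if t == v else t
    tag, a, b = t
    if tag == "lam":
        return t if a == v else ("lam", a, subst(b, v, arg))  # naive: ignores capture
    return ("app", subst(a, v, arg), subst(b, v, arg))

def reduce_(t, steps=0):
    """Call-by-name evaluation (doesn't reduce under lambdas), counting beta-steps."""
    if isinstance(t, str) or t[0] == "lam":
        return t, steps
    f, steps = reduce_(t[1], steps)
    if isinstance(f, tuple) and f[0] == "lam":
        return reduce_(subst(f[2], f[1], t[2]), steps + 1)
    x, steps = reduce_(t[2], steps)
    return ("app", f, x), steps

term = app(lam("x", app("x", "x")), "y")   # (λx. x x) y
print(cells(term))      # 7 cells
print(reduce_(term))    # (('app', 'y', 'y'), 1) -> "y y" after one reduction
```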
You could even argue that the problem goes in the other direction: For Turing machines, you largely don't care about the states (often you don't even explicitly construct them, just argue that they could be constructed), whereas lambda calculus programs are routinely written out in full. So had lambda calculus become the basis, then program size / "problem description complexity", the "cost of abstractions", reusability and other related things might have been considered more important. (You get a bit of that now, under the name of "code quality", but it could have happened much earlier and way more thoroughly.)
An average normal program is extremely pointer-heavy, but you don't even have the "alphabet" to efficiently represent pointers. (Unless you make "bytes" from bits and then multiply your state count to deal with that construction.) You also don't have absolute locations, you can't really do variables (especially not arbitrary numbers of them), and so on. So if you look deeper, both have a very different approach to state and, by extension, functions. ↩︎
I'll focus on the gears-level elaboration of why all those computational mechanisms are equivalent. In short: If you want to actually get anything done, Turing machines suck! They're painful to use, and that makes it hard to get insight by experimenting with them. Lambda calculus / combinator calculi are way better for actually seeing why this general equivalence is probably true, and then afterwards you can link that back to TMs. (But, if at all possible, don't start with them!)
Sure, in some sense Turing machines are "natural" or "obvious": If you start from finite state machines and add a stack, you get a push-down automaton. Do it again, and you essentially have a Turing machine. (Each stack is one end of the infinite tape.) There's more bookkeeping involved to properly handle "underflows" (when you reach the bottom of a stack and need to extend it) yadda yadda, so every "logical state" would require a whole group of states to actually implement, but that's a boring mechanical transformation. So just slap another layer of abstraction on it and don't actually work with 2-stack PDAs, and you're golden, right? Right…?
Well… actually the same problem repeats again and again. TMs are just extremely low-level and not amenable to abstraction. By default, you get two tape symbols. If you need more, you can emulate those, which will require multiple "implementation states" to implement a single "logical state", so you add another layer of abstraction… Same with multiple tapes or whatever else you need. But you're still stuck in ugly implementation details, because you need to move the head left and right to skip across stuff, and you can't really abstract that away. (Likewise, chaining TMs or using one as a "subroutine" from another isn't easily possible and involves lots of manual "rewiring".) So except for trivial examples, actually working with Turing machines is extremely painful. It's not surprising that it's really hard to see why they should be able to run all effectively computable functions.
Enter lambda calculus. It's basically just functions, on a purely syntactic level. You have binders or (anonymous) function definitions – the eponymous lambdas. You have variables, and you have nested application of variables and terms (just like you're used to from normal functions.) There's exactly one rule: If a function is applied to an argument, you substitute that argument into the body wherever the placeholder variable was used. So e.g. (λx.((fx)y)(gx))z → ((fz)y)(gz) by substituting z for x. (Actual use of lambda calculus needs a few more rules: You have to make sure that you're not aliasing variables and changing the meaning, but that's a relatively minor (if tricky) bookkeeping problem that does abstract away fairly cleanly.)
Now, on top of that, you can quickly build a comfortable working environment. Want to actually name things? `let x := y in t` is just a `((λx.t)y)`. (The assignment being split around the whole code is a bit ugly, but you don't even need "actual" extensions, it's just syntax sugar.) Data can also be emulated quite comfortably, if you understand one key idea: Data is only useful because you eventually do something with it. If all you have is abstraction, just abstract out the "actually doing" part with a placeholder in the most general scheme that you could possibly need. (Generally an induction scheme or "fold".) [Note: I'll mostly use actual names instead of one-letter variables going forward, and may drop extra parentheses: `f x y z = (((f x) y) z)`]
- In a way, Booleans are binary decisions. `true := λtt.λff.tt`, `false := λtt.λff.ff` (Now your `if c then x else y` just becomes a `c x y`… or you add some syntax sugar.)
- In a way, natural numbers are iteration. `zero := λze.λsu.ze`, `succ := λn.λze.λsu.su(n ze su)`. (If you look closely, that's just the induction principle. And all that the number "does" is: `(succ (succ (succ zero)) x f) = f (f (f x))`)
- Lists are just natural numbers "with ornaments", a value dangling on every node. `nil := λni.λco.ni`, `cons := λx.λxs.λni.λco.co x (xs ni co)`. The usual list functions are easy too: `map := λf.λxs.(xs nil (λx.λxs.cons (f x) xs))`, `fold := λxs.xs` (yep, the list – as represented – is the fold)
And so on… so I think you'll believe me that you basically have a full functional programming language, and that lambda calculus is equivalent to all those more usual computational substrates. (…and maybe Turing machines too?)
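If you want to see those encodings actually compute, here's a direct transliteration into Python lambdas (my own sketch; Python is eager, but for these examples that doesn't matter):

```python
# Church encodings from above as Python lambdas.
true  = lambda tt: lambda ff: tt
false = lambda tt: lambda ff: ff
# "if c then x else y" is just c x y
assert true("then")("else") == "then"

zero = lambda ze: lambda su: ze
succ = lambda n: lambda ze: lambda su: su(n(ze)(su))
three = succ(succ(succ(zero)))
# (succ (succ (succ zero)) x f) = f (f (f x))
assert three(0)(lambda k: k + 1) == 3

nil  = lambda ni: lambda co: ni
cons = lambda x: lambda xs: lambda ni: lambda co: co(x)(xs(ni)(co))
xs = cons(1)(cons(2)(cons(3)(nil)))
# the list *is* its own fold:
assert xs(0)(lambda head: lambda acc: head + acc) == 6
```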
Still… even if lambda calculus is extremely simple in some ways, it's also disturbingly complex in others. Substitution everywhere inside a term is "spooky action at a distance", a priori it's quite plausible that this does a lot of "heavy lifting". So: Let's turn to combinator calculi. Let's keep the term structure and application of things, but get rid of all (object-level) variables and binders. Instead, we define three substitution rules (using variables only at the meta-level):
- `I x = x`
- `K v x = v` (or, fully parenthesized: `(K v) x = v`)
- `S a b x = (a x) (b x)` (or, again more precisely: `((S a) b) x = (a x) (b x)`)
(Strictly speaking, we don't even need `I`, as `SKKx = (Kx)(Kx) = x`, so if we say `I := SKK` that's equivalent.) The interesting thing about these rules is that all lambda-terms can be translated into `SKI`-terms. (To get rid of a binder / λ, you recursively transform the body: Pretend the intended argument `x` is already applied to the body, how do you get it where it needs to go? If you have an application, you prefix it with `S` such that the argument is passed down into both branches. If a branch doesn't use the argument (that includes other variables), prefix it with `K` so it remains unchanged. Otherwise, it is the bound variable that should be substituted, so put an `I` and you get the value. Done.) Worst case, this will cause an exponential(!) increase of the term size, but it'll get the job done. After that, there's no more "looking inside terms" or dealing with variables, just extremely simple rewrite rules. To revisit the examples above:
- `true := λtt.λff.tt → λtt.(K tt) → S(KK)I`, `false := λtt.λff.ff → λtt.(I) → KI`
- `zero := λze.λsu.ze → S(KK)I`, `succ := λn.λze.λsu.su(n ze su) → λn.λze.SI(S(K(n ze))I) → λn.S(K(SI))(S(S(KS)(S(KK)(S(KK)(S(K n)I)))(KI)) → (S(K(S(K(SI))))(S(S(KS)(S(K(S(KS)))(S(K(S(KK)))(S(S(KS)(S(KK)I))(KI)))))(K(KI))))` […except I think I messed up somewhere.]
- (And yes, I did that by hand… so I hope you'll accept that I stop there and don't translate the other examples.)
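The rules are small enough to check mechanically. A quick sketch in Python (curried lambdas, my own code) that verifies the translations which came out right:

```python
# S, K, I as curried Python functions.
S = lambda a: lambda b: lambda x: a(x)(b(x))
K = lambda v: lambda x: v
I = lambda x: x

# I := SKK really is the identity
assert S(K)(K)("anything") == "anything"

# true := S(KK)I and false := KI behave like the Church booleans
true, false = S(K(K))(I), K(I)
assert true("tt")("ff") == "tt"
assert false("tt")("ff") == "ff"

# zero := S(KK)I picks the zero-case and ignores the successor-case
zero = S(K(K))(I)
assert zero("ze")(lambda _: "su") == "ze"
```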
So actually you don't need lambda calculus (or variables, for that matter), just these two rules. You can compile your Haskell to this SKI-stuff.
Take a moment to process that.
Now I hope it's also pretty obvious that you can implement a Turing machine in Haskell? Same for imperative programming languages – look at e.g. the Whitespace interpreter (Haskell version in the archive, Idris version on Github). So at this point I think you have a pretty good intuition for why all existing computational mechanisms can be reduced down to SKI-calculus.
As for the other direction… with a basic TM model, dealing with all those dynamically resizing parts would be extremely painful, but if you use an extended Turing machine that has lots of different symbols and also more than one tape, it's actually not that hard? Detecting reducible locations isn't that hard if you use one tape as scratch space for counting nested parentheses. Doing the "reductions" (which can actually greatly expand the space needed, because `S` duplicates the argument) can also happen on a separate tape, and afterwards you copy the result back onto the "main tape", after adjusting the placement of the unchanged stuff on the side. (I hope that's sufficiently clear? This is basically using multiple tapes as a fixed number of temporary helper variables.)
And that's it. Now you can run your Turing machine in your Haskell on your Turing machine, or whatever… Hope you're having fun!
Update: Amazon Germany now also has the books listed, for €36 (which is fine.) Since I haven't received the "Notify me when the UK books are available" mail yet, I assume this is further downstream propagation from the Amazon US listing.
If that is accurate, then there should be no need at all to manually ship books to other regions?! I guess that's very good news for future books!
Defaults matter: Opt-in may be better than opt-out.
For opt-out, you only know that the people who disabled it cared enough about not wanting it to explicitly turn it off. If it's enabled, that could be either because they're interested or because they don't care at all.
For opt-in, you know that they explicitly expended a tiny bit of effort to manually enable it. And those who don't care sit in the group with those who don't want it. That means it's much more likely that your feedback is actually appreciated and not wasted. Additionally, comments with extended voting enabled will stand out, making them more likely to catch feedback. (And there will probably still be plenty of comments with extended votes for passive learning.)
"Rationed breaks" could also work and is a bit "rounder". It's less mathematical, but the "ratio" root is still there, plus a hint of scarcity / frugality due to "rationing". Also "to ration one's time" is (I think? - non-native speaker here) a moderately common phrase?
As a one-off, sure. Long term, it may be. I'm currently restructuring my todo list(s) to tag stuff by brain state. (Most of it requires considerable brain capacity, so if I'm exhausted/tired, I tend to scroll Discord or watch Twitch because "I can't do anything in this state anyway", which is neither productive nor particularly relaxing.)
Lots of things like watering plants, cleaning the bathroom walls, throwing some cleaner into the sinks / tub / ..., taking out the trash, properly archiving last quarter's stack of records, making backups, etc. are all ~zero-brain activities that (1) I can do when I'm too tired to work productively on other "important" stuff, (2) don't truly have a fixed schedule and (3) all have to happen eventually (often recurrently.) Searching for open tasks tagged `#brainstate/amoeba` is easy once you have the list; explicitly keeping a separate list works too.
If wasted half-hours are actually a common situation, then the overhead of tagging or maintaining separate lists may quickly pay off.
I largely agree with this. Multi-axis voting is probably more annoying than useful for the regulars who have a good model of what is considered "good style" in this place. However, I think it'd be great for newbies. It's rare that your comment is so bad (or good) that someone bothers to reply, so mostly you get no votes at all or occasional down votes, plus the rare comment that gets lots of upvotes. Learning from so little feedback is hard, and this system has the potential to get you much more information.
So I'd suggest yet another mode of use for this: Offer newbies the option to enable this on their comments and posts (everywhere). If the presence of extended voting is visible even if no votes were cast yet, then that's a clear signal that this person is soliciting feedback. That may encourage some people to provide some, and just clicking a bunch of vote buttons is way less work than writing a comment, so it might actually happen.
Berlin's numbers show about 20% Omicron for last week and about 3% for all of Germany. So at least in Berlin, it's already there (and numbers should be >50% Omicron by New Year's Eve.)
I just noticed (again) that link previews on (only?) old posts are sometimes broken, e.g. when I open this in a new window/tab. At first I suspected it's something about the old link formats, but more testing made me more and more confused, and now it seems more and more like maybe that is relevant after all?
- Opening a link of the form `/s/.../p/...` in a new window/tab (i.e. not in an existing LW context) always(? – ~10 tests done) breaks the previews on links in the text, and they don't get the ° decoration. (Tags, pingbacks etc. still work.) The same seems to sometimes also break `/posts/.../...`, but sometimes not? (Very weakly tested hypothesis: It breaks very old posts that still use the old `/lw/XX/...` link format?)
- Opening the same article in an existing tab (by following an internal link) correctly goes through the text, adds the °s, and then previews work.
Can someone reproduce this? Is this a bug with timing or initialization order or something like that?
It says the UK Amazon doesn't ship to Germany [at least for the auto-generated listing], and from the US it'd be ~$45 incl. shipping + taxes... =/
And since it's above the magic number of 1kg (around 1.2kg), even a bulk order with local distribution would have to add about €5 for the last leg of shipping, which (adding packaging etc.) makes that just not worth it.
Since Amazon UK is happy to ship other books, I subscribed to the UK availability notification -- maybe it'll work once it's "really" there. I'll update this once the notification comes and I have time to check.
(If that doesn't fix it: If it were available at some endpoint within the EU, Amazon's shipping would probably be around €6 (based on looking at comparable book boxes from ES or FR.) So if UK refuses to ship and it's not too terrible (or would add extra costs that make it useless), then I'd appreciate having some EU endpoint next year again... but first let's see what'll happen.)
A related observation that might help some: I'm fairly nocturnal because I can work better at night. (Less noise, less light, no interruptions from others, etc.) My default strategy to achieve that was to stay up very late and sleep until the early afternoon.
But at some point I noticed that getting up really early (like 1-3am) also gets you the time at night to work, except now you're going to bed around 6pm instead of staying up until 6am. Both work, with different tradeoffs. (And different friend groups being accessible at different times.)
I know now that I'm not forced to stick to the "staying up late" schedule to get the effect that I want.
Well if we've fallen to the level of influencing other people's votes by directly stating what the votes ought to say (ugh =/), then let me argue the opposite: This post – at least in its current state – should not have a positive rating.
I agree that the topic is interesting and important, but – as written – this could well be an example of what an AI with a twisted/incomplete understanding of suffering, entropy, and a bunch of other things has come up with. The text conjures several hells, both explicitly (Billions of years of suffering are the right choice!) and implicitly (We make our perfect world by re-writing people to conform! We know what the best version of you was, we know better than you and make your choices!), and the author seems to be completely unaware of that. We get surprising, unsettling conclusions with very little evidence or reasoning to support them (instead there's "reassuring" parentheticals like "(the answer is yes)".) As a "What could alignment failure look like?" case study this would be disturbingly convincing. As a serious post, the way it glosses over lots of important details and confidently presents its conclusions, combined with the "for easy referencing" in the intro, is just terrifying.
Hence: I don't want anyone to make decisions based directly on this post's claims that might affect me even in the slightest. One of the clearest ways to signal that is with a negative karma score. (Doesn't have to be multi-digit, but shouldn't be zero or greater.) Keep in mind that anyone on the internet (including GPT-5) can read this post, and they might interpret a positive score as endorsement / approval of the content as written. (They're not guaranteed to know what the votes are supposed to mean, and it's even plausible that someone uses the karma score as a filter criterion for some ML data collection.) Low positive scores can be rationalized away easily (e.g. the content is too advanced for most, other important stuff happening in parallel stole the show, ...) or are likely to pass a filter cutoff, zero is unstable and could accidentally flip into the positive numbers, so negative scores it is.
Your writing feels comically-disturbingly wrong to me, I think the most likely cause is that your model of "suffering" is very different from mine. It's possible that you "went off to infinity" in some direction that I can't follow, and over there the landscape really does look like that, but from where I am it just looks like you have very little experience with serious suffering and ignore a whole lot of what looks to me to be essential complexity.
When you say that all types of suffering can be eliminated / reversed, this feels wrong because people change in response to trauma and other extreme suffering, it (and the resulting growth) tends to become a core part of their personality. There is no easy way back from there, in a way this is also non-reversible. Removing a core part of their personality would effectively destroy them, replacing them with a past version of themselves feels equivalent to killing them, except it also devalues their struggle and their very existence.
Getting the decision on whether (and how far) to reset from anything other than their current self takes away their agency. The different versions along time are (potentially very) different persons, singling out any one of them and valuing it higher than the others is bound to be problematic. I doubt that you could find a consistent valuation within the strands of the person over time, and imposing some external choice just creates a hell (tho it may not be apparent from inside.) I don't think that this is something that you can "magic away" by sufficiently advanced AI / singularity. (And if you think that arbitrary editing of the person has already taken away their agency…? Well then you still have the same problem of identifying the point where they cease to be mostly self-determined, where they cease to be "them", and the "torture meta-game" will have shifted to make that maximally hard.)
So the best that you could achieve is probably something like having multiple independent versions of a person exist in parallel, but at that point you're not erasing/undoing the suffering anymore, and some versions may well want to cease to exist – for them, this will have been a fate worse than death. (At this point the "best" strategy for the 100 billion year torture meta-game is probably to allow just enough space for recovery and growth that there's a high chance that the person wants to continue existing, not sure that's better…)
By this time we're dealing with multiple versions of possibly arbitrarily rewritten copies of a person… and at this point, we're basically in situations that other commenters described. It would be no harder to resurrect from traces of physical remains… (If you can recover the original process after billions of years of deliberate, directed reshaping, then surely recovering it after a mere thousands of years subject to mostly random bio/physical processes should be trivial in comparison, right?) …or outright create new persons from scratch… (Who could tell the difference, hm? And if anyone could, would they feel safe enough to point it out?) …than to deal with all this "undoing of suffering". Now look again at your question:
If you press the button, you save 1 life. But 7 billion humans will suffer from the worst possible torture for 100 billion years. […]
I don't think the answer is yes.
The latter. If you have 8 or 16 cores, it'd be really sad if only one thing was happening at a time.
Isn't this false nowadays that everyone has multi-core GPUs?
Nope, still applies. Even if you have more cores than running threads (remember programs are multi-threaded nowadays) and your OS could just hand one or more cores over indefinitely, it'll generally still do a regular context switch to the OS and back several times per second.
And another thing that's not worth its own comment but puts some numbers on the fuzzy "rapidly" from the article:
It's just that [the process switching] happens rapidly.
For Windows, that's traditionally 100 Hz, i.e. 100 context switches per second. For Linux it's a kernel config parameter and you can choose a bunch of options between 100 and 1000 Hz. And "system calls" (e.g. talking to other programs or hardware like network / disk / sound card) can make those switches happen much more often.
I'd adjust the "breadth over depth" maxim in one particular way: Pick one (maybe two or three, but few) small-ish sub-fields / topics to go through in depth, taking them to an extreme. Past a certain point, something funny tends to happen, where what's normally perceived as boundaries starts to warp and the whole space suddenly looks completely different.
When doing this, the goal is to observe that "funny shift" and the "shape" of that change as well as you can, to identify the signs of it and get as good a feeling for it as you can. I believe that being able to (at least sometimes) notice when that's about to happen has been quite valuable for me, and I suspect it would be useful for AI and general rat topics too.
As a relatively detailed example: grammars, languages, and complexity classes are normally a topic of theoretical computer science. But if you actually look at all inputs through that lens, it gives you a good guesstimate for how exploitable parsers for certain file formats will be. If something is context-free but not regular, you know that you'll have indirect access to some kind of stack. If it's context sensitive, it's basically freely programmable. For every file format (/ protocol / ...), there's a latent abstract machine that's going to run your input, so your input will essentially be a program and - within the boundaries set by the creator of that machine - you decide what it's going to do. (Turns out those boundaries are often uncomfortably loose...)
Some other less detailed examples: Working extensively with Coq / dependently typed programming languages shifted my view of axioms as something vaguely mystical/dangerous/special to a much more mundane "eh if it's inconsistent it'll crash", I'm much more happy to just experiment with stuff and see what happens. Lambda calculus made me realize how data can be seen as "suspended computations", how different data types have different "computational potential". (Lisp teaches "code is data", this is sorta-kinda the opposite.) More generally, "going off to infinity" in "theory land" for me often leads to "overflows" that wrap around into arcane deeply practical stuff. (E.g. using the algebra of algebraic data types to manually compress an ASM function by splitting it up into a lookup/jump table of small functions indexed by another simple outer function, thereby reducing total byte count and barely squeezing it into the available space.)
You're unlikely to get these kinds of perspective shifts if you look only at the basics. So every once in a while, dare to just run with it, and see what happens.
(Another aspect of that which I noticed only after posting: If you always look only at the basics / broad strokes, to some degree you learn/reinforce not looking at the details. This may not be a thing that you want to learn.)
Other failure modes could be to fail to have properties of probabilty distributions. Negative numbers, imaginary amounts? Not an unknown probability distribuiton because its not a probabilty distribution[...]
Not every probability distribution has to result in real numbers. A distribution that gets me complex numbers or letters from the set { A, B, C } is still a distribution. And while some things may be vastly easier to describe when using quasiprobability distributions (involving "negative probabilities"), that is a choice of the specific approach to modeling, not a necessity. You still get observations for which you can form at least an empirical distribution. A full model using only "real probabilities" may be much more complex, but it's not fundamentally impossible.
If your random pull is aleph-null does it have any sensible distributions to be drawn from? Dealing with other "exotic" numbers systematically might get tricksy.
Use a mixture distribution, combining e.g. a discrete part (specific huge numbers outside of the reals) with a continuous part (describing observable real numbers).
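A minimal sketch of what I mean by that kind of mixture (the weight and the symbolic atom are of course made up):

```python
import random

# Mixture of a discrete part (symbolic "exotic" outcomes that aren't real numbers)
# and a continuous part (ordinary reals). The 5% weight is purely illustrative.
def draw(p_exotic=0.05):
    if random.random() < p_exotic:
        return "aleph_null"             # a label, treated as its own outcome
    return random.uniform(0.0, 1.0)     # continuous component

sample = [draw() for _ in range(10_000)]
print(sum(isinstance(x, str) for x in sample) / len(sample))   # ≈ 0.05
```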
Questions involving infidesimals like "Given a uniformly random number between 0 and 1 is it twice more likely it to be in the interval from epsilon to 3*epsilon than in the interval from 3*epsilon to 4*epsilon"[...]
That doesn't typecheck. Epsilon isn't a number, it's a symbol for a limit process ("for epsilon approaching zero, ..."). If you actually treat it as the limit process, then that question makes sense and you can just compute the answer. If you treat it as an infinitesimal number and throw it into the reals without extending the rest of your theoretical building blocks, then of course you can't get meaningful answers.
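Treating it as the limit process, the computation is a one-liner (a sketch; sympy is just one convenient way to do the symbolic limit):

```python
import sympy as sp

eps = sp.symbols("epsilon", positive=True)
# Uniform(0, 1): P(a < X < b) = b - a  (for 0 <= a <= b <= 1)
ratio = (3*eps - eps) / (4*eps - 3*eps)
print(sp.simplify(ratio))             # 2, independent of epsilon
print(sp.limit(ratio, eps, 0, "+"))   # 2 -- so yes, "twice as likely" is well-defined
```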
If I really wanted to, I could probably force myself to eat a pack of dates for about 2-3 days before having enough of them.
Actually, I tried that too now. 8 was more than enough, don't really want to eat more. (Wolfram estimates a single dried date to weigh about 16 g and contain roughly 10 g sugar.) So if that's right, this was about 80g of sugar. That's less than half of what I estimated. (Even adding the (tea)spoon of sugar from before as 1-2 extra dates doesn't make much of a difference.)
Approximate amount: 50-60g maybe? I like to add juuust a little to tea and other drinks, about 1-2g / 100ml. Completely unsweetened (and I count nut milks as sweetener too) irritates my stomach for some reason. (And plain unflavored water causes nausea, so teas it is.) There's also often a teaspoon or two in some meals to balance acidity or bring out spices. Rarely some chocolate (80-99% cocoa) or a slice of home-baked cake. (I tend to halve the amount of sugar in recipes.) Fruits (fresh or dried) also contain non-negligible amounts.
How I feel about sugar: A little is somewhere between fine and awesome (seriously, add a sprinkle to your carrots when boiling/frying them!), a lot is just disgusting. (Most sodas are too sweet for my tastes. I still have a pack of gummy bears sitting in the sweets drawer that's been there for almost two years by now... still untouched.) I can absentmindedly absorb about half a bar of (dark) chocolate over the course of a few hours, but then it just stops - I won't eat the rest within the next 2 or 3 days. Dried fruits are more "dangerous", here I can fairly easily eat serious amounts before having enough of them. (But even that saturates.)
Change your habits today: Nah, would be very unpleasant. Less would mostly mean stomach ache, which this isn't worth it for me. More... nah, just no.
If I really wanted to, I could probably force myself to eat a pack of dates for about 2-3 days before having enough of them. That would probably get me about 150-200g extra sugar on each of those days. (Oof!) More refined versions of sugar don't really work for me. (Even eating a spoon of raw cane sugar isn't pleasant, I just tried. It tastes nice, has a lot of complexity, but overall it's still unpleasant and not something that I want to repeat. Plus nearly all of the appeal is in the complex aromas, not the sweetness.) With sweets, eating just 2 or 3 gummy bears is usually enough to really not want more for the next couple of days.
I plan to stick to Hy, but I'll make the versioning clearer in the future.
If there's two weeks, that should leave enough time for making & checking alternate implementations, as well as clarifying any unclear parts. (I never fully understood the details of the selection algorithm (and it seems there were bugs in it until quite late), but given a week for focusing just on that, I hope that should work out alright.)
I'm optimizing for features, not speed.
No complaints here, that's the only sane approach for research and other software like this.
However there's other bits of the code too (like random selection from sets) which might vary from one implementation to another.
Compared to e.g. trying to get the Mersenne Twister (which I think is still Python's default RNG?) either linked to or re-implemented in some other obscure language, that's a trivial task. I don't expect problems there, as long as those functions actually use the specified RNG and not internally refer to the language's default.
Oh also, on genome formats: I've been doing quite a bit of stuff with domain specific languages / PLT stuff, as well as genetic algorithms and other swarm optimization approaches before. If you have any ideas that you want to bounce off someone or have some cool idea for some complex genome format that looks like it might be too much work, feel free to ping me - I might be able to help! (Ideally in a sufficiently abstract / altered way that I can still participate, but if need be, I'd give that up for a year too.)
Feedback on the game so far:
Genome Format: Even though I'm a long-time programmer, I vastly preferred this year's version where no one (except you) had to write any code. This was awesome!
Implementation/Spec: I would have preferred a clear spec accompanied by a reference implementation. Hy may be fun to use, but it's incredibly slow. (Also, the version differences causing various problems were no fun at all.)
The only big thing to watch out for is not to use the built-in RNG of whatever language you'll end up using, but instead a relatively simple to implement (but still good quality) one like e.g. PCG, for which you can probably find a library; if not, there are only a couple of lines to translate. (Different RNGs would mean different results. If everyone can use exactly the same one, then all re-implementations can behave exactly the same, and it's easy to test that you got your implementation correct.)
Time: If you can, avoid the very last/first week of a quarter year - that's when companies and people who are self-employed have to finish all the bookkeeping. (I had several requests for clarification & planning the next quarter coming at me, that took away even more time...)
Duration: A week felt quite short; two weeks would have left a week to sort out infrastructure problems and then another one to solve the actual problem. Maybe next time you could split it in two: the first week releases the spec and a sample environment (like the one that you used throughout the post), the second week releases the actual parameters for the game (i.e. the big 10-biome list and final feature costs). Leaving a 48-hour window for changes (before submission opens) would be fine at that point; given that many would already understand the base setting, any quirks like grassland having less grass than rain forest would likely be noticed quickly and could still be fixed.
Submission: Google Forms sucked. Maybe there's a browser extension that makes it easier to use, but I had to click around a lot, and for some reason the browser's built-in form completion was disabled for the fields, so I had to manually paste email / user etc. every. single. time. Ugh. There was also no way to see which ones I had already submitted. I'm pretty sure that I'm not the one with 11 species (and even if I were, two would have been exact duplicates), but an alternative "or just send me a CSV file formatted like the example data" would have been much more usable for me.
Still, all in all it was a lot of fun and I'm curious what the results will be. Thank you very much for running another round this year, and I hope there will be another one next year!
In the Markdown editor, surround your text with `:::spoiler` at the beginning, and `:::` at the end.
This (or at least my interpretation of it) seems to not work.
I read it as: anywhere inline (i.e. surrounded by other text), you put `:::spoiler` (without the backticks), followed by your text to be spoilered, followed by `:::` (no space required and again without backticks). That ended up producing the unspoilered text surrounded by the `:::spoiler ... :::` construction and making me slightly sad. Here is a :::spoiler not really ::: spoilered example of the failure.
It seems that it has to be used as a standalone block instead, i.e. you put an empty line, `:::spoiler` on a line by itself, then your paragraph (which probably must not include empty lines, but I didn't check), then `:::`.
So this works.
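For anyone skimming past the prose, the block form that worked looks roughly like this (empty line before, markers on their own lines):

```
Some visible text.

:::spoiler
The hidden paragraph goes here.
:::
```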
This should probably be clarified?
Is (or will there be) an inline form as well? Discord uses ||spoiler|| and I think I've seen the same in other places too.
I didn't trust myself to reimplement the simulator - any subtle change would likely have invalidated all results. So simulations were really slow... I still somehow went through about 0.1% of the search space (25K of about 27M possible different species), and I hope it was the better part of the space / largely excluding "obviously" bad ideas. (I carefully tweaked the random generation to bias it towards preferring saner choices, while never making weird things too unlikely.) Of course, the pairings matter a lot, so I'm not at all certain that I didn't accidentally discard the best ones just because they repeatedly ended up in very unfortunate circumstances.
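The "biased but never impossible" generation was nothing fancier than weighted choices; a rough illustration (the feature names and weights below are invented, not the game's actual ones):

```python
import random

# Illustrative only: prefer "sane" options while keeping weird ones possible.
def biased_choice(rng, options):
    """options: list of (value, weight) pairs; every weight stays > 0."""
    values, weights = zip(*options)
    return rng.choices(values, weights=weights, k=1)[0]

rng = random.Random(42)
spawn_zone = biased_choice(rng, [
    ("grassland", 5.0),   # usually a solid pick
    ("ocean",     3.0),
    ("benthic",   1.0),   # rarely sane, but never ruled out
])
```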
There certainly were some kinda non-intuitive choices found, for example: A Benthic creature that can (also) eat grass -- it can't start in the river, but that's where it wants to go; and travel-wise, Ocean/Benthic are equivalent! (Also, for some reason, others trying the same strategy in the ocean performed way worse... absolutely no idea why yet.)
I'd have loved for this to happen in a less-busy week (not exactly the end of the quarter year with all the bookkeeping) and to have about 2-3x as much time to get the infrastructure working... managed to barely get simple mutation working, but didn't have time for the full genetic algorithm or other fancy stuff. :(
Seconding this, does 'by Sep 30th' mean start or end of the day? I'm currently assuming 'end of', in some unspecified time zone.
My computer's still crunching numbers and I'm about to head to bed… would be sad to miss the deadline.
I've got it mostly working now... the problem is that the default plot size is unusable, the legend overlaps, etc. etc. -- when run interactively, you can just resize the window and it'll redraw, and then you save it once you're happy. So now I'm also setting plot size, font size, and legend position, and then it's "just" a `(plt.savefig "plot.png")` added before the `(plt.show)`.
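In plain Python terms (the Hy calls map directly onto these), the kind of setup I mean looks roughly like this - the backend line is only needed for headless use, and the specific sizes/positions are just placeholders, not the values from my patch:

```python
import matplotlib
matplotlib.use("Agg")               # headless backend: no display/window needed
import matplotlib.pyplot as plt

plt.figure(figsize=(12, 8))         # default size was unusably small for me
plt.rcParams.update({"font.size": 9})
# ... plotting calls go here ...
plt.legend(loc="upper left", fontsize="small")  # keep the legend off the data
plt.savefig("plot.png", dpi=150)    # must come before plt.show()
plt.show()                          # harmless on Agg, useful when interactive
```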
I might also add a more easily parseable log output, but for now `tee` and a small Lua script mangling it into CSV is enough.
I'll probably clean up all of that in a couple of hours and send another patch / merge request. Though it's probably a good idea to get someone else to review it, given that I have near-zero prior experience with Python.
Another question: where does the script save the data / graphs when running it? Or does it do that at all?
It looks like it might try to open a plot window, but I'm running it on a headless server... so nothing will happen. Is the (hard-to-parse) text scrolling by all that I'll get at the end of a run?
Thanks!
Turns out it needs a newer `hy`, then it works. (And in case anyone else has a similar problem and is also a Python noob, `pip`'s package is probably called `python3-pip` or something like that. After that, the rest is explained either in the article or by `pip` itself.)
That just gets me an even longer error message:
Python 3.7.3 (default, Jan 22 2021, 20:04:44)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import hy
>>> import main
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/hy/macros.py", line 238, in reader_macroexpand
reader_macro = _hy_reader[None][char]
KeyError: '*'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 668, in _load_unlocked
File "<frozen importlib._bootstrap>", line 638, in _load_backward_compatible
File "/usr/lib/python3/dist-packages/hy/importer.py", line 197, in load_module
self.path)
[...(*snip*)...]
File "/usr/lib/python3/dist-packages/hy/compiler.py", line 424, in compile_atom
ret = _compile_table[atom_type](self, atom)
File "/usr/lib/python3/dist-packages/hy/compiler.py", line 365, in checker
return fn(self, expression)
File "/usr/lib/python3/dist-packages/hy/compiler.py", line 2559, in compile_dispatch_reader_macro
expr = reader_macroexpand(str_char, expression.pop(0), self)
File "/usr/lib/python3/dist-packages/hy/macros.py", line 242, in reader_macroexpand
"`{0}' is not a defined reader macro.".format(char)
hy.errors.HyTypeError: File "/home/nobody/projects/darwin/main.hy", line 63, column 1
[and then as before]
Guess I'll try setting things up on another system tomorrow, or else I'll skip.
Do you have a more exact version spec? Because I don't even have `pip3` and I don't use Python, so I just installed the `hy` that comes with my distro... and then I get
File "./main.hy", line 63, column 1
(defn initial-population [biome]
"Creates a list of initial organisms"
(+ [] #*
(lfor species +species-list+
(if (= (. species ["spawning-zone"])
biome)
(do
(setv organisms [])
(setv energy +seed-population-energy+)
(while (> energy 0)
(.append organisms (.copy species))
(-= energy (energy-of species)))
organisms)
[]))))
HyTypeError: b"`*' is not a defined reader macro."
and I don't even have a clue what the ``b"`*'`` part is supposed to mean...
I experimented some more; no clue what package I'd have to install to get `pip` - neither `pip` nor `pip3` exist. On the systems that I have, I get matplotlib 3.0.2-2 with hy 0.12.1-2.
Once things have stabilized and things like inline annotations are there, I'd love to see the following: (1) An easy way to add and remove yourself from a pool of available feedback providers. (Checkbox in settings?) And (2) a way for anyone (or nearly anyone - e.g. non-negative karma) to request brief / "basic feedback" on their posts, by automatically matching people from the pool to posts based on e.g. post tags and front page tag weights.
On (1): I have proofread a couple thousand pages by now, and while I'm usually pretty busy, in a slow week I'd be happy to proofread a bunch of drafts. However, if that involves messaging someone to be added to a list and then messaging someone again to be taken off that list, that's quite a lot of overhead - I probably wouldn't bother and would just look for other stuff to do. So I suspect automating that part might greatly increase the number of available reviewers for feedback.
On (2): With some extra capacity from (1), I expect the main bottleneck for providing more reviews is matching reviewers to posts. If that's automated, the only cost is the time spent reviewing. With a scheme like "review two/three posts to get one post reviewed by two/three people", the reviewer pool should be even bigger (so with (1) you might even go "review 2 to get 3") and things should remain relatively fair. With multiple reviews, you should get some decent feedback even if one reviewer writes complete nonsense or doesn't understand anything.
At that point, the human overhead for having this extra "basic feedback" system should be near-zero, apart from maybe having to manually filter people trying to abuse the system - no clue how prevalent that is. And looking at myself, (a) I probably wouldn't bother manually asking others for reviews, and (b) knowing that I can get guaranteed feedback, no questions asked, would make it more likely that I actually start writing. (While I can't say for sure whether that translates into actual posts, I can clearly see that there are lots of other "very important" things, some of which only barely win out because they're less headache-inducing / uncertain.)
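Purely to illustrate (2): the matching could be as dumb as scoring each reviewer in the pool by how much their frontpage tag weights overlap the post's tags - everything below (data shapes, names) is made up for the sketch, not an existing LW feature:

```python
# Illustrative sketch of tag-based reviewer matching; all names are hypothetical.
def match_reviewers(post_tags, reviewer_pool, per_post=3):
    """reviewer_pool: dict reviewer -> {tag: frontpage weight}."""
    def score(tag_weights):
        return sum(tag_weights.get(tag, 0.0) for tag in post_tags)
    ranked = sorted(reviewer_pool, key=lambda r: score(reviewer_pool[r]), reverse=True)
    return ranked[:per_post]

pool = {
    "alice": {"AI": 2.0, "Math": 1.0},
    "bob":   {"Cooking": 1.5},
    "carol": {"AI": 0.5, "Rationality": 1.0},
}
print(match_reviewers({"AI", "Rationality"}, pool, per_post=2))  # ['alice', 'carol']
```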
Could you imagine the feeling of lying on a carpet without a shirt on (i.e. the feeling of a carpet on your torso)?
Somewhat... it's too diffuse. I can imagine the effect at single spots, the whole thing at once doesn't really work. (I get "glitchy partials", brief impressions flickering and jumping around, but it's not forming anything consistent / stable.)
What about a spider crawling across your hand?
Back of the hand is manageable (it's "only" tracking of 9 points - 8 legs plus occasional abdomen contact) and it can even become "independent" and surprise me with what direction it will move in next, front is basically impossible. (Glitchy partials again.)
I am very jealous of your ability to ignore your thoughts[.]
If there's usually not much happening that reaches the conscious level, that's really not that hard. I've talked at length with people who have a near-constant internal monologue and I get that that's much harder. I just notice that it's also relatively easy for me to ignore other constant or near-constant things like loud buzzing noise (server fans?), most extreme smells (rotting/feces) where others flinch away, etc. but basically impossible to ignore constantly changing impressions. (I currently have a construction site in front of the house and I can't work at all while they're active, even with earplugs.)
[I am very jealous of your ability to] track north.
Again, I have to constantly keep an "inner eye" on that or it comes loose and then it's broken / de-synced. A question from someone that makes me think just a little bit too much can be enough to break it, unless I remember to explicitly save the current heading and reinstate the thing afterwards. (I suspect you can do/imagine something similar and it'll get less costly and more precise the more you use it.)
I wonder if there's any tangible overlap between brain function and fields of interest.
Probably - you're more likely to do stuff that's easier for you, which makes it easier... But I expect that to be fairly weak / to have lots of noise on top. (External expectations, stuff that you don't even know exists, etc. etc.)
I guess what also plays a big role is how you approach your brain / how you model it & yourself / what expectations you place on it. I treat mine as a substrate that can spawn lots of independent processes of varying size and with more or less expressive interfaces, and "I" just happen to be one that's fairly big and stable. So making that arrow pointing north is basically just me spawning yet another small process on that substrate that subsequently does its own thing (unless swapped out because of capacity constraints), likewise the imagined spider can become independent and choose its own direction, because I fully expect it to be an independent process and not something that I have to manually control.
(Treating your brain as some sort of "expectation realizer" seems to be a powerful model/perspective, but totally expect really weird "replies" when you try to apply that to external stuff. (Like weird feelings, sudden anxiety, strange preferences, etc. guiding you in the direction that your brain thinks is best - be careful (and keep track of) what you wish for.) Internal seems to be relatively safe compared to that.)
Also one of my friends struggles with verbal thinking and thinks mostly implicitly, using concepts, if I understood that correctly, and they have a strong preference for non-verbal signs of affection (physical contact, actions, quality time etc.).
Same here. Not thinking in words at all, very strong preference for touch or very simple expressions. Over the years with my SO, we basically formed a language of taps, hugs, noises, licks, sniffs, ... (E.g. shlip tongue noise - Greetings! / I like you. / ... (there are even tonal variations - rising / higher pitch is questioning, flat is affirmative), sniff-sniff noise - What's wrong? Etc. - So a possible exchange could be seeing them sitting on the sofa, looking unhappy - sniff-sniff (What's wrong?) - grumble (Bad mood.) - shlíp? (Affection?) - shlīp. (Yes.) - hug. That's much less exhausting than doing the same in words.)
Some more details on each of the categories in order:
Visual - I don't really see things, I just get some weird topological-ish representation. E.g. if I try to imagine a cube, it's more like the grid of a cube / wireframe instead of a real object, and it's really stretchy-bendy and can sometimes wobble around or deform on its own. And attributes like red / a letter printed on a side etc. are not necessarily part of the face but often just floating "labels" connected by a (different kind of) line that goes "sideways" out of 3-space? o.O Even real objects like a tree are abstracted (and it depends on what I focus on), e.g. it's more like a cylinder with a "fuzzy sphere" on top and a "fuzzy cone" below. Or maybe I focus on the bark, then that gets more detail but everything else becomes even less defined. Or maybe it's just general "tree-ness", then there's just some elongated blob and I can't even query whether it has sharp corners/edges or not without it changing shape to become more defined (and then it could end up either way, so no clue).
Sound - sometimes, I get runaway "earworms" ("stuck tune" doesn't really fit) that morph over time and evolve away from the source material. Sometimes there's two of them at the same time, fighting each other or working in harmony. (If that's (the illusion of) two full orchestras, it can get quite exhausting...) I lack the music education / knowledge to write down or play what I hear.
Taste - negligible, mostly on the "floating tags" level. "Here be salty."
Smell - was on the same level, but getting a lot better now that I consciously care about smell in real life.
Touch - mostly negligible, but sometimes I can cook up scarily real stuff. (Like imagining a friend biting my neck and the hairs rising up, and also noticing how the muscles around that spot would move if that were real, even though I hadn't consciously followed that before, and then later validating the memory of that against the real thing. That was so surprising that I still remember it.) I find it a lot harder to imagine touch on extremities than on the torso. E.g. I can't really imagine a stubbed toe, the feeling of walking across carpet, or grabbing a cabbage with my hand, but I can vividly imagine a drop of water running down my chest or a spider crawling across it. (Lower resolution is easier to fake?)
Internal monologue / thinking in words - I can't imagine how that would be. Even when talking, I don't really know (on the word level) what I'm going to say before I actually say it. I generally don't think in words and have a hard time translating back and forth between words and my internal representation. There are lots of concepts that I use that I can't really describe, because they're several abstraction levels up from stuff that maps to known external words, so I'd have to explain/translate several layers of wordless stuff before I could even describe the perspective from which these concepts arise. (And the "floating tags" mentioned above also aren't words but just "nodes" that somehow represent the stuff.)
Memory - no clue how to assess that, I don't have a baseline to compare against. Some stuff gets regularly forgotten (names, anyone?), other stuff (like small random details) just sticks around far longer than justified. (I also tend to notice lots of weird details.)
Another potentially notable thing about memory: I can't randomly access or scan memories, but if I have a specific query I tend to get results, including context (e.g. why this memory is probably trustworthy, stuff that helps locate it in space/time etc. etc.)
Mind control - I can't stop an ongoing battle of two orchestras or other things that make themselves hard to ignore, but for stuff that's not as overpowering, I can mostly just ignore it and it's gone?
Synesthesia - I think sometimes I get whiffs of it, like 3 smelling slightly cake-y and 7 rather bloody / putting a metallic taste in the back of my mouth, but it's rare / hard to notice. (Might be imagined, but the associations are consistent.) I also get some angle-dependent coloration on narrow grids, not sure what that is. (I know that you can supposedly trigger that effect by looking at training images and it'll persist for a long time, but I didn't (knowingly) look at any similar images, and also it's doing a full 360° / full color cycle for me vs. that effect only getting you like 2 or 3 colors IIRC?)
Some other stuff:
I can feel even pretty small heat sources across fairly large distances; the impression is exaggerated and presents as a tingling sensation on my face, or on my hands if I'm searching with them (in which case I also notice it along my arms unless they're covered). E.g. I can locate a small desk lamp that's 8°C above room temperature from the other side of the room with my eyes closed and after the lamp was turned off (so I don't react to any stray light), or notice which side of a sofa a human has been sitting on.
Several years ago I did an experiment where I tried to do as much as possible with my eyes closed for the better part of a week. That seriously improved my ability to navigate with eyes closed. I've unlearned a lot of that, but I still like to e.g. shower with eyes closed because it reduces the amount of sensory data coming in. Sometimes, when I'm stressed / close to sensory overload, closing my eyes doesn't help and the falling drops of water create a fairly stable "image" of the bathtub / wall / curtain. (It's like every drop's impact creates a small gray circle that fades over ~0.2-0.5 seconds and that's approximately aligned with the normal of the surface. Together, from all the drops, I get something resembling a picture of the surroundings. The dots are relative to my head, not the external world, so if I move my head, they move along and I get garbage until the old dots fade and things stabilize again. Aaaaaaaaaa exhausting.)
I generally have trouble navigating unknown areas, but if I reserve a few brain cycles to manually track "north" (or any direction really), that persists and I can navigate relative to that. (The marker that I use is like an "arrowless arrow", a "shapeless shape"? It's really weird trying to describe it. It's pointy and has a clear designated "pointy end" but it's not really "physical" (even in mind space). It generally floats above my head and can turn really quickly - I can jump around, whirl in place, ... and it just stays glued to the target.) However, as soon as I forget to keep it active (even just for a moment) it comes loose and no longer updates relative to my movements. "Keeping it active" isn't "thinking at it" but just observing it do its thing, don't really know how to describe that. It also doesn't work when I'm being moved (cars, trains, ...), it seems like my guts (body location-wise) do the tracking and they don't get (all) slow rotations.
When I do mathematical proofs, I don't really think, I just "feel" my way through the tree of possible derivations, pick a step, see where I end up, repeat. If that doesn't work, I get stuck. I lack the experience with manually/consciously enumerating the top layers (or this combinatorial stuff is just really exhausting for me?), and so I can basically just wait a few days and try again until I get some progress.
The same "either it works intuitively or not at all" applies to a lot of things that I do, from other technical stuff like coding to things like cooking. Usually whatever I cook tastes good to great, but sometimes it comes out weird and then I have no clue how to fix it. (That's getting better, I'm slowly accumulating things to check / try - like aim for 0.5-1% salt (saliva is around 0.4% and anything below tends to taste weird), check if it needs more acid, add a bit of sugar, etc., but there's still a big difference between things just working and something being not quite right.)
I suspect the water content of honey/treacle (estimating 15-20%) will lead to more gluten formation, which risks causing a chewy instead of crumbly texture. (If you're not adding any water at all, you're not getting gluten strands.) Butter also has some water (around 15%), so you generally don't knead these kinds of dough for long. (Same goes for shortbread, scones, ...)
Hence, I guess any flour should do if you know how to handle it / are careful not to overwork the dough.