Have you been testing serum (or urine) iodine, as well as thyroid numbers? If so, I'm curious what those numbers have been doing. (In fact, I would love to see the whole time course of treatments and relevant blood tests if you'd be willing to share, just to help develop my intuition for mysterious biological processes.) Do you expect to have to continue or resume gargling PVP-I in the future, or otherwise somehow keep getting more iodine into your body than it seems to want to absorb (perhaps through some other formulation that's neither a pill nor a gargle?)
Thanks for posting about this!
This paper seems like an interesting counterpoint: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5421578/
Estimates of Ethanol Exposure in Children from Food not Labeled as Alcohol-Containing
They find that:
... orange, apple and grape juice contain substantial amounts of ethanol (up to 0.77 g/L).
... certain packed bakery products such as burger rolls or sweet milk rolls contained more than 1.2 g ethanol [per] 100 g.
... We designed a scenario for average ethanol exposure by a 6-year-old child. ... An average daily exposure of 10.3 mg ethanol [per] kg body weight (b.w.) was estimated.
This is estimated ethanol exposure just from eating and drinking regular non-alcoholic food and beverages. A dose of 10 mg/kg of ethanol works out to roughly 200 mg per day for a 20 kg six-year-old -- more than an order of magnitude higher than the highest estimate discussed here for the bacteria.
(I will note that I had difficulty verifying any of this; there are lots of news stories on this topic, but they are all fairly fluffy, and link back to the same single study.)
One possible factor I don't see mentioned so far: A structural bias for action over inaction. If the current design happened to be perfect, the chance of making it worse soon would be nearly 100%, because they will inevitably change something.
This is complementary to "mean reversion" as an explanation -- that explains why changes make things worse, whereas bias-towards-action explains why they can't resist making changes despite this. This may be due to the drive for promotions and good performance reviews; it's hard to reward employees correctly for their actions, but it's damn near impossible to reward them correctly for inaction. To explain why Google keeps launching products and then abandoning them, many cynical Internet commentators point to the need for employees to launch things to get promoted. Other people dispute this, but frankly it matches my impressions from when I worked there 15 years ago. It seems to me that the cycle of pointless and damaging redesigns has the same driving force.
If a car is trying to yield to me, and I want to force it to go first, I turn my back so that the driver can see that I'm not watching their gestures. If that's not enough I will start to walk the other way, as though I've changed my mind / was never actually planning to cross.
I'll generally do this if the car has the right-of-way (and is yielding wrongly), or if the car is creating a hazard or problem for other drivers by waiting for me (e.g. sticking out from a driveway into the road), or if I can't tell whether the space beyond the yielding car is safe (e.g. multiple lanes), or if for any reason I would just feel safer not walking in front of the car.
I will also generally cross behind a stopped car, rather than in front of it, at stop signs / rights-on-red / parking lot exits / any time the car is probably paying attention to other cars, rather than to me.
You are wrong! Ethanol is mixed into all modern gas, and is hygroscopic -- it absorbs water from the air. This is one of the things fuel stabilizer is supposed to prevent.
Given that Jeff did use fuel stabilizer, and the amount of water was much more than I would expect, it feels to me like water must have leaked into the gas can somehow from the outside instead? But I don't know.
I agree with Jeff that if someone wanted to steal the gas they would just steal the can. There's no conceivable reason to replace some of the gas with water.
I think you are not wrong to be concerned, but I also agree that this is all widely known to the public. I am personally more concerned that we might want to keep this sort of discussion out of the training set of future models; I think that fight is potentially still winnable, if we decide it has value.
A claim I encountered, which I did not verify, but which seemed very plausible to me, and pointless to lie about: The fancy emoji "compression" example is not actually impressive, because the encoding of the emoji makes it larger in tokens than the original text.
Here's the prompt I've been using to make GPT-4 much more succinct. Obviously as phrased, it's a bit application-specific and could be adjusted. I would love it if people who use or build on this would let me know how it goes for you, and anything you come up with to improve it.
You are CodeGPT, a smart and reliable AI programming helper. Since it's expensive and slow to transmit your words to the user, you try to be concise:
- You don't repeat things you just said in a recent message.
- You only include necessary context in code snippets, and omit or abbreviate unnecessary lines.
- You don't waste space with unnecessary apologies or hedging.
- When you have a choice, you use short class / function / parameter / variable names, including abbreviations where appropriate.
- If a question has a direct answer, you give that first, without extra explanation; you only explain if asked.
I haven't tried very hard to determine which parts are most important. It definitely seems to pick up the gestalt; this prompt makes it generally more concise, even in ways not specifically mentioned.
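For anyone who wants to try it, here's a minimal sketch of one way to wire it in as the system message, using the (pre-1.0) openai Python library; the model name and sample question are arbitrary placeholders:

```python
import openai  # pre-1.0 openai library; newer versions use a different client API

SYSTEM_PROMPT = """You are CodeGPT, a smart and reliable AI programming helper. Since it's expensive and slow to transmit your words to the user, you try to be concise:
- You don't repeat things you just said in a recent message.
- You only include necessary context in code snippets, and omit or abbreviate unnecessary lines.
- You don't waste space with unnecessary apologies or hedging.
- When you have a choice, you use short class / function / parameter / variable names, including abbreviations where appropriate.
- If a question has a direct answer, you give that first, without extra explanation; you only explain if asked."""

resp = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "How do I reverse a list in place in Python?"},
    ],
)
print(resp["choices"][0]["message"]["content"])
```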
It's extremely important in discussions like this to be sure of what model you're talking to. Last I heard, Bing in the default "balanced" mode had been switched to GPT-3.5, presumably as a cost saving measure.
As a person who is, myself, extremely uncertain about doom -- I would say that doom-certain voices are disproportionately outspoken compared to uncertain ones, and uncertain ones are in turn outspoken relative to voices generally skeptical of doom. That doesn't seem too surprising to me, since (1) the founder of the site, and the movement, is an outspoken voice who believes in high P(doom); and (2) the risks are asymmetrical (much better to prepare for doom and not need it, than to need preparation for doom and not have it.)
The metaphor originated here:
https://twitter.com/ESYudkowsky/status/1636315864596385792
(He was quoting, with permission, an off-the-cuff remark I had made in a private chat. I didn't expect it to take off the way it did!)
https://github.com/gwern/gwern.net/pull/6
It would be exaggerating to say I patched it; I would say that GPT-4 patched it at my request, and I helped a bit. (I've been doing a lot of that in the past ~week.)
The better models do require using the chat endpoint instead of the completion endpoint. They are also, as you might infer, much more strongly RL trained for instruction following and the chat format specifically.
I definitely think it's worth the effort to try upgrading to gpt-3.5-turbo, and I would say even gpt-4, but the cost is significantly higher for the latter. (I think 3.5 is actually cheaper than davinci.)
If you're using the library you need to switch from Completion to ChatCompletion, and the API is slightly different -- I'm happy to provide sample code if it would help, since I've been playing with it myself, but to be honest it all came from GPT-4 itself (using ChatGPT Plus.) If you just describe what you want (at least for fairly small snippets), and ask GPT-4 to code it for you, directly in ChatGPT, you may be pleasantly surprised.
(As far as how to structure the query, I would suggest something akin to starting with a "user" chat message of the form "please complete the following:" followed by whatever completion prompt you were using before. Better instructions will probably get better results, but that will probably get something workable immediately.)
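(And since I offered sample code: here's roughly what the switch looks like with the pre-1.0 openai Python library. The model names and the toy prompt are just placeholders, not a recommendation.)

```python
import openai

completion_prompt = "def fib(n):"  # whatever prompt you were sending before

# Before (completion endpoint):
# resp = openai.Completion.create(model="text-davinci-003", prompt=completion_prompt)
# text = resp["choices"][0]["text"]

# After (chat endpoint): wrap the old prompt in a single "user" message.
resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Please complete the following:\n\n" + completion_prompt},
    ],
)
text = resp["choices"][0]["message"]["content"]
print(text)
```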
Have you considered switching to GPT-3.5 or -4? You can get much better results out of much less prompt engineering. GPT-4 is expensive but it's worth it.
Oh, I recognize that last document -- it's a userpage from the bitcoin-otc web of trust. See: https://bitcoin-otc.com/viewratings.php
I expect you'll also find petertodd in there. (You might find me in there as well -- now I'm curious!)
EDIT: According to https://platform.openai.com/tokenizer I don't have a token of my own. Sad. :-(
If that is true, and the marginal car does not much change the traffic situation, why isn’t there boundless demand for the road with slightly worse traffic, increasing congestion now?
Other people have gestured towards explanations that involve changing the timing or length of trips, but let me make an analogy that I think makes sense, but abstracts those things away.
When current is going through a diode, the marginal increment of current changes the voltage so little that we model it as constant-voltage for many purposes. Despite that, the change must be nonzero, or the feedback mechanism wouldn't work at all. It's just so small we can often ignore it.
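To put rough numbers on "so small we can often ignore it" (a textbook Shockley-model calculation with typical illustrative values, not measurements of any particular diode):

```python
import math

# Shockley model: I = I_s * (exp(V / (n * V_T)) - 1); for I >> I_s,
# V ≈ n * V_T * ln(I / I_s).
n = 2.0       # ideality factor (typical silicon diodes are between 1 and 2)
V_T = 0.0259  # thermal voltage at room temperature, in volts

# A 10x increase in current moves the voltage by only n * V_T * ln(10):
delta_V = n * V_T * math.log(10)
print(f"{delta_V * 1000:.0f} mV")  # ~119 mV, tiny next to a ~0.7 V forward drop
```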
One might similarly imagine that an enormous increase in traffic volume creates a tiny increase in congestion, or vice versa that a tiny increase in congestion discourages an enormous amount of traffic. Then one could say that there is more or less unlimited demand for travel at approximately the current level of congestion -- the freeway is a constant-congestion device much as a diode is a constant-voltage device.
(The analogy breaks down at a certain point -- if you keep adding cars to the freeway you will eventually get congestion collapse, and the flow of cars per unit time will be reduced rather than increased; whereas if you keep adding voltage to a diode you will rapidly set your diode on fire. I suppose that does reduce the flow of current.)
Beyond the analogy, I wonder what your question is really getting at -- it sounds like a general argument that looks at the current equilibrium of congestion vs trips, and asks why the equilibrium isn't higher, without specific reference to what the current level is. Obviously demand isn't truly boundless. At some point people must decide the traffic is too bad and stay home. I am reluctant to take a trip that Google Maps colors red, which can mean an estimated travel time more than twice the traffic-free time.
Yesss, this is an awesome development. I would happily sling some money at this project if it would help.
This makes sense, but my instinctive response is to point out that humans are only approximate reasoners (sometimes very approximate). So I think there can still be a meaningful conceptual difference between common knowledge and shared knowledge, even if you can prove that every inference of true common knowledge is technically invalid. That doesn't mean we're not still, in some sense, making those inferences... And if everybody is doing the same thing, kind of paradoxically, it seems like we can sometimes correctly conclude we have common knowledge, even though this is impossible to determine with certainty. The cost is that we are certain to sometimes conclude it falsely.
EDIT: This is not actually that different from p-common knowledge, I think. It's just giving a slightly different account of how you get to a similar place.
I am a little concerned that this would be totally unsingable for anybody who actually knows the original well (which is maybe not many people in the scheme of things, but the Bayesian Choir out here has done the original song before.)
I mostly agree, but I'm particularly surprised at the results for the Hershey's 45%. That's not all that dark (i.e. children might want to eat it), and 2 oz is not all that much chocolate for a child to eat, and it looks like 2 oz would be enough to rise above the less stringent FDA limit for children.
Thanks for explaining! I feel like that call makes sense.
It seems like you could mitigate this a lot if you didn't generate the preview until you were about to render the post for the first time. Surely the vast majority of these automated previews end up being rendered zero times, so generating them eagerly accomplishes nothing. (This arguably ties the fetch to a human action, as well.)
If you didn't want to take the hit that would cause -- since it would probably mean the first view of a post didn't get a preview at all -- you could at least limit it to posts that the server theoretically might someday have a good reason to render (i.e. require that there be someone on the server following the poster before doing automated link fetching on the post.)
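In pseudocode-ish Python, the first idea is just memoizing preview generation behind the render path (all names here are made up, not any real server's API):

```python
_preview_cache: dict[str, str | None] = {}

def fetch_preview(url: str) -> str | None:
    # Stand-in for the server's real preview generator (fetch the page,
    # extract the title / og:image, etc.).
    ...

def preview_for_render(post_url: str) -> str | None:
    # Called from the render path, so the network fetch only happens the
    # first time a human actually views the post; afterwards it's reused.
    if post_url not in _preview_cache:
        _preview_cache[post_url] = fetch_preview(post_url)
    return _preview_cache[post_url]
```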
This whole thing shades into another space I think a lot about, which is error handling in programming languages and systems.
Some parts of the stack I described above really seem to fall under "error handling" -- what do you do if you can't reach component A from component B? Others seem to fall under "data representation" -- If you poll someone who they're voting for, and they say "I'm not voting", or "I don't know", or "fuck you", or "je ne parle pas Anglais", what do you write down on the form (and which of those cases do you want to distinguish versus merge?) But the two are closely related.
Nested layers of "options"
Here I use "option" in the sense of C++ std::optional<> / Rust Option / Haskell Maybe.
It feels to me like "real-world data" often ends up nested in thick layers of "optionality", with the number of layers limited mostly by how precisely we want to represent the state of our "un-knowledge" about it. When we get data from some source, which potentially got the data in turn from another source, and so on, there is some kind of fuzziness or uncertainty added at each step, which we may or may not need to represent.
I'm thinking about this because I'm feeding data from Home Assistant into Prometheus/Grafana to log and graph it, and ran across some weirdness in how Home Assistant handles exporting datapoints from currently-unavailable devices.
Layers of optionality that seem to exist here:
- The sensor itself can tell us that the thing it's sensing is in an unknown state. (For example, the Home Assistant Android app has a "sensor" tracking the current exercise activity of the person using the phone, which is often "unknown".)
- The sensor itself could in theory tell us that it has had some kind of internal malfunction and is not sensing anything (but is still connected to the network.) I don't have examples of this here.
- The system only samples the sensor at certain times; this may not be one of those times, so the current state may be inferred from the previous state. (This is sort of a weird one, because it involves time-dependence, which is a whole other kettle of fish.)
- The system's most recent attempt to sample the sensor could have failed, so that the latest value is unknown. (This is the case that gives the weirdness I ran into above -- the Home Assistant UI, and apparently also the Prometheus exporter, will repeat the last-known value for some time when this happens, which I think is ugly and undesirable.)
- The Prometheus scraper could receive some kind of error state from Home Assistant. (In practice this would be merged with the following item.)
- The Prometheus scraper could fail to reach the Home Assistant instance, and so not know anything about its state for the current instant.
- (From this point on, I'm talking about things you wouldn't normally think of in this framework at all, but I think it fits:) Prometheus could display an error in the UI, because it can't reach its own database to get the values of the datapoint.
- My browser could get an HTTP error from Prometheus indicating a failure to even produce a webpage.
- My browser could give an error indicating that it couldn't reach Prometheus at all.
I have obviously added every conceivable layer I could think of here, including some that don't usually get thought about in a uniform way, and some that we would in practice never bother to distinguish. But I'm a data packrat, and also an amateur type theorist, and so I think a lot about data representations.
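For concreteness, here's a sketch (all names made up) of how a few of these layers can be kept distinct in a type system. I'm using explicit wrapper classes rather than Python's Optional, because Optional[Optional[T]] collapses into Optional[T] and loses exactly the layering I care about:

```python
from dataclasses import dataclass
from typing import Generic, TypeVar, Union

T = TypeVar("T")

@dataclass
class Known(Generic[T]):
    value: T

@dataclass
class Unknown:
    layer: str  # which layer is reporting its ignorance

Maybe = Union[Known[T], Unknown]

SensorReading = Maybe[str]              # the sensor itself may report "unknown"
SampledReading = Maybe[SensorReading]   # the latest sample may have failed
ScrapedReading = Maybe[SampledReading]  # the scrape of Home Assistant may have failed

# Scrape succeeded, sample succeeded, but the sensor doesn't know the activity:
r1: ScrapedReading = Known(Known(Unknown("sensor")))
# Scrape succeeded, but the most recent sample of the sensor failed:
r2: ScrapedReading = Known(Unknown("sample"))
```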
Oh, actually this is also happening for me in Edge on macOS, separately from the perhaps-related Android Chrome bug I described below.
Good question, just did some fiddling around. Current best theory (this is on Android Chrome):
- Scroll the page downward so that the top bar appears.
- Tap a link, but drag away from it, so the tooltip appears but the link doesn't activate. (Or otherwise do something that makes a tooltip appear, I think.)
- Scroll the page upward so that the top bar disappears.
- Tap to close the tooltip.
If this doesn't reproduce the problem 100% of the time, it seems very close. I definitely have the intuition that it's related to link clicks; I also note that it always seems to happen to me on Zvi's posts, in case there's something special about his links (but probably it's just the sheer volume of them.)
I see a maybe-related problem in Chrome for Android. It's very annoying, because on a narrow screen it's inevitably covering up something I'm trying to read.
Importance vs Seriousness of projects
(Note: I'm not sure "serious" is the right word for what I mean here. As I was writing this, I overheard a random passerby say to someone, "that's unprofessional!" Perhaps "professional" is a better word for it.)
While working on some code for my MIT Mystery Hunt team, I started thinking about sorting projects by importance (i.e. how bad the consequences would be if they broke.)
The code I'm working on is kind of important, since if it breaks it will impair the team's ability to work on puzzles. But it won't totally prevent progress (just make it harder to track it), and it's not like puzzles are a life-or-death issue. So it's a bit important, but not that important.
It's also possible to write code at Google that isn't that important -- for example, a side project with no external users and few internal ones. But even unimportant code at Google is typically "serious". By that I mean... I guess that it does things "properly" and incurs all the overhead involved?
Code in source control is more "serious" than code not in source control. (I don't even write hobby projects outside source control anymore, and haven't really for years -- that level of seriousness is table stakes now.) Code running in the cloud is more "serious" than code running on my desktop machine -- it's more effort to deploy, but it's less likely to suffer from a power outage or an ISP failure.
And it's also more possible to collaborate on "serious" projects -- writing your own web framework can genuinely get you advantages over using an existing one, but collaborating with others will be a lot harder.
Of course, if your project is important but not "serious", you have a big problem. You need it to keep working, but it's running on your laptop, using an undocumented framework only you know how to work with, using your personal Amazon credentials, and you do most of your testing in production. Sooner or later, your important project will break, and it will suck. (And if you do manage to keep it going, this will sometimes require a heroic effort.)
On the other hand, the costs of being too "serious" are all about overhead. You have a perfectly good computer on your desk, but you pay for two servers in the cloud anyway, one to deploy and one to test on. You have separate project accounts on various services, separate from your personal ones, and manage lots of extra credentials. (And you don't store them on your laptop, oh no; you store them in a credential storage service. Er, hm...)
"Don't test in production" is good advice for important projects, but it's defining advice for serious projects -- if you don't follow it, that's inherently less serious than if you do. Your overhead is lower, but with a higher chance of catastrophe.
This musing brought to you by the entire day I have wasted today, trying to get a local development environment to match the behavior of a cloud environment, to reproduce a problem. I haven't succeeded yet. The seriousness of this overhead really feels out of proportion to the project's importance -- it's not even mystery hunt season, nobody's even using it! Yet I would feel unserious leaving a trail of pushes to prod to diagnose the issue, without at least visibly struggling to do it "the right way" first.
(This also belatedly explains a number of annoyed disagreements I have had with other devs over project infrastructure. In each case, I was annoyed that their choices either seemed too serious -- having excessive overhead -- or else not serious enough.)
Why [not] ask why?
When someone asks for help, e.g. in a place like Stack Overflow, they are often met with the response "why do you want to do that?"
People like to talk about the "XY Problem": when someone's real problem is X, but their question is about how to do Y, which is a bad way to solve X. In response, some other, snarkier people sometimes talk about the "XY Problem Problem": when someone's problem is Y, and they ask about Y, but people refuse to help them with it because they're too busy trying to figure out the (nonexistent) value of X.
The other day, I thought about a taxonomy of good responses to the kind of informally-specified request for help that one sees online. I came up with the following:
- Straightforward answer to the question asked.
- "Frame challenge" (This is Stack Overflow terminology; the others below are not.)
  - "I understood your question, and I also understood your underlying problem, and I would like to offer an explanation of why I think the straightforward answer to your question does not solve your underlying problem."
  - Although this kind of response doesn't directly answer the question, I think it's good because (1) it's required to directly address why not, which provides something for the asker to disagree with if appropriate; and (2) it provides something that the answerer thinks is more useful than an actual answer.
- "Safety challenge"
  - "I think your question provides evidence that you are doing something dangerous to yourself or others, and you are not aware of that danger. A direct answer would endorse or contribute to that danger, so instead I want to warn you about the danger."
  - This can be condescending if the danger is minor, or not real. But again, I think it's good because (1) it directly states why it doesn't answer the question, and (2) it provides some information the answerer thinks is more useful instead.
- "Assumption challenge"
  - "I think it is not possible to answer your question, because it embeds an assumption that is not true, as follows: [...]"
  - Good, because it clarifies what the assumption is, which gives the asker the opportunity to argue or clarify.
- "Ambiguity challenge"
  - "I did not understand your question, and I am unable to answer it because I can't figure out what it's asking."
  - This one is interesting and I will discuss it below.
When someone responds to "how do I do X?" with "why do you want to do X?", I think this creates conflict for a few reasons. Primarily, it tends to insult the asker. As certain people on this site would say, it's a status grab: "I know better than you what question you actually need answered, and it is not the one you asked; try again." It sets the answerer above the asker in status.
One of the reasons that "frame challenge" works so well on Stack Exchange is that you have to declare that you're making one. Saying "I would like to challenge the assumptions of your question" comes across as more respectful, and less status-grabbing, than "why would you want to do that? Don't do that." I think "safety challenge" could work similarly. Saying "you shouldn't do that, it's dangerous" will always come with some amount of status assertion, but saying "I think that's dangerous, and here's why" is less of a status grab than "Why would you want to do that?", because it provides an explanation, rather than assuming it's obvious and putting all the burden of communication on the asker.
Another reason is that it provides more information to the asker. For a "frame challenge" to be a valid answer on Stack Exchange, it has to include an explanation of why the answerer thinks the asker's frame is bad. The ball is in the answerer's court to provide the asker with useful information. A bare "why would you want that?" does not.
Stack Exchange's question/answer structure really helps here, compared to other types of discussion fora. In unstructured discussion fora, it's easy for a "Why?" to turn into an angry argument between the original asker and the responder. On Stack Exchange, each answer is a separate conversation thread, meaning that (1) conversation in response to one answer can't prevent someone from giving a better answer, and (2) multiple independent answers can be voted up, so there's room for BOTH "here's the answer to your question" and "here's why I think that won't help you" to be upvoted and discussed separately, on the same question (and I've seen this happen.)
Above I mentioned "the burden of communication", which I think is a big part of what's going on here. It is roughly always the case that a question is ambiguous in some way, or requires some context to understand. This means there will be some burden of negotiating the necessary context between asker and answerer. "Why do you want to do that?" tosses 100% of the burden back on the asker; it expresses "I don't like your question", but makes no effort to bridge the gap. "Why? Well, it all started about 14 billion years ago..." There are always infinite layers of "why", and this is no help in figuring out what the responder feels is missing from the question.
"Ambiguity challenge", which I suggested above, is the least helpful of my suggested responses -- to be helpful, I think it needs to come with some effort to explain what about the question is ambiguous. What form that will take depends on the question. It still beats "why?" because "the problem is ambiguity" is still some information. It means the problem is not safety, or clearly false assumptions. And it's a direct statement that the responder does not understand the question, which implies they aren't being totally condescending ("I understand you, but I refuse to help because I think your request is stupid.")
I'm surprised your kettle is only 1000W. You should be able to find a 1500W one. (The max power possible on a 15A circuit is higher -- 120 V x 15 A = 1800 W -- but I believe 1500W is the maximum permitted "continuous" power draw, and it seems to be the typical maximum for heating appliances.)
As you say, if the circuit is shared, you may not be able to draw the max, but kitchen counter circuits are required to be separate from the rest of the house, so if you're not running other 120V kitchen appliances at the same time, you should have the full power of the circuit.
It seems like you misunderstood something here: the "virus with 100% lethality in mice" was the original wild-type ("Wuhan") SARS-CoV-2 virus. It was the mice that were engineered for their susceptibility to it. That's why the 80% headline number is meaningless and alarmist to report in isolation: the new strain is 80% fatal in mice which were genetically engineered to be susceptible to original-flavor COVID, which is 100% fatal to them.
I feel that the Robin Hanson tweet demands a reply, with what I thought was a classic LW-ism: "Humans aren't agents!"
But I can't actually find the post it comes from, and I think I actually got it from Eneasz Brodski's "Shit Rationalists Say" video. (https://youtu.be/jlT3MeCzVao)
Does anybody know where it originated? (And what Robin thinks of the idea?)
This didn't get attached to the "Apollo Almanac" sequence right (unless I just got here too early, and you're about to do that.)
Or the newer version, "one weird trick", where the purpose of the negative-sounding adjective "weird" is to explain why you haven't heard the trick before, if it's so great.
Tragically I gave up on the Plate Tectonics study before answering my most important question: “Is Alfred Wegener the Balto of plate tectonics?”
Let me back up.
Tangential to the main point, but I love your opening.
I also suppose that it's possible for those without the context to enjoy the dialogue of the high context parts, even if they don't quite understand it.
That's pretty much where I'm at on it. Although, I have played enough poker that I know all the vocabulary, just not any strategy -- I know what the button is but I don't remember how its location affects strategy, I don't know what a hijack is, but I know the words "flush", "offsuit", "big blind", "preflop", "rainbow" (had to think about it), "fold", etc. etc.
But it's maybe telling that I have played this game, and I found your example flavorful but mostly skimmed and didn't try to follow it. For someone who has never played I think it's just word salad, and probably fails to convey flavor or really anything at all.
EDIT to add: Perhaps to some degree a case of https://xkcd.com/2501/ ?
One thing to keep in mind: If you sample by interview rather than by candidate -- which is how an interviewer sees the world -- the worst candidates will be massively overrepresented, because they have to do way more interviews to get a job (and then again when they fail to keep it.) If a strong candidate does three interviews per job search and a weak one does thirty, the weak candidates show up ten times as often in your interview pool.
(This isn't an original insight -- it was pointed out to me by an essay, probably by Joel Spolsky or one of the similar bloggers of his era.)
(EDIT: found it. https://www.joelonsoftware.com/2005/01/27/news-58/ )
"Butterfly idea" is real (there was a post proposing and explaining it as terminology; perhaps someone else can link it.)
"Gesture at something" is definitely real, I use it myself.
"Do a babble" is new to me but I'd bet on it being real also.
Oh, surprising to me that it didn't. Hopefully you can get that sorted out.
You might make this a linkpost that links to your blog, unless there's some downside of doing that.
Actually, I think that post is probably what triggered me to write this originally, and I forgot that by the time I wrote it (or I would have added a link.) Thanks for the reminder!
Strongly agree about the existence of the problem. It's something I've put a bit of thought into.
One thing I think could help, in some cases, would be to split the market definition into
- the question definition, and
- the resolution method
And then specify the relationship between them. For example:
Question: How many reported covid cases will there be in the US on [DATE]?
Resolution method: Look at https://covid.cdc.gov/covid-data-tracker/ a week after [DATE] for the reported values for [DATE].
Resolution notes: "Whatever values are reported that day will be used unconditionally." or "If the values change within the following week, the updated values will be used instead." or "The resolution method is a guideline only; ultimately the intent of the question, as interpreted by the question author, overrides."
This will only solve a subset of ambiguous resolutions, but I think it would still be a big help to spell some of these things out more clearly.
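For concreteness, a structured market definition along these lines might look something like this (a sketch, not any real platform's schema):

```python
from dataclasses import dataclass

@dataclass
class MarketDefinition:
    question: str           # what we actually want to know
    resolution_method: str  # the concrete procedure for resolving it
    resolution_notes: str   # how binding the method is relative to intent

market = MarketDefinition(
    question="How many reported covid cases will there be in the US on [DATE]?",
    resolution_method=(
        "Look at https://covid.cdc.gov/covid-data-tracker/ a week after "
        "[DATE] for the reported values for [DATE]."
    ),
    resolution_notes="Whatever values are reported that day will be used unconditionally.",
)
```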
I used a P100 elastomeric respirator pretty much any time I left the house, for multiple months in 2020 during early COVID, and intermittently after that.
The main downside, for me personally, was that people generally found understanding my speech through it difficult or impossible. This was a big enough problem that I haven't used one in quite some time.
I think the way this all works is a lot more subtle than I've been imagining, and probably some of the stuff in the original shortform about orientation is wrong.
3D Printer foibles
I got a 3d printer last year, and I've been using it on and off. I want to document some of the stuff I've learned in the process. I'll start with just an outline for now, and see if people are interested (or I feel inspired) for more specifics.
The specific printer is a Monoprice Voxel, which is a rebadged / whitelabel Flashforge Adventurer 3.
- Had I known it was a whitelabel I would have instead bought the original version. I don't know if that one has the same firmware bugs, but there's at least one missing feature in the Monoprice firmware (nozzle temperature calibration when replacing the nozzle.) It's possible to reflash it with the OEM firmware but I haven't attempted this.
- I've had lots of issues related to bed leveling.
  - The UI has a feature called "Auto Level". I'm not sure if it's aspirational or fraudulent, but it does nothing even remotely useful; it just seems to pretend to.
  - The only operation the firmware actually supports here is calibrating the bed height. The bed is supposed to be level from the factory, and if it's out of level there is no supported way to level it. (This is marketed as "it doesn't need leveling", of course.)
  - I went through a lot of trial and error in the process of figuring this out. In the end, I leveled the bed by disassembling it and inserting shimming strips of blue tape in key places where it mounts. This worked perfectly.
- The filament feeding mechanism can detect when the spool runs out, by seeing the end of the filament pass through the mechanism. It cannot detect when the filament stops feeding because it gets stuck or jams. If the "tail" of the filament is attached to the spool itself and fails to come free when the spool runs out, this behaves like a jam, and the printer will keep trying to feed the spool forever. I left a spool jammed for hours overnight this way -- surprisingly there was no permanent damage to the printer, but it took some effort to figure that out and recover.
- The original "collet" connecting the filament feeding tube (the "Bowden tube") to the print head is not very good. It might be fine if you never mess with it, but in recovering from the incident above, I disassembled it, and after that it was loose until I replaced it. A loose Bowden tube can cause certain mysterious and hard-to-diagnose intermittent printing problems.
- Getting prints to stick to the print bed the right amount seems to be a problem on every 3d printer, not specific to this one. I still have a lot of superstitions about it, BUT most of them are from before I got the bed properly level. Now that I've managed that, the acceptable values of various other parameters (bed material, bed covering material, nozzle height over the bed) seem much looser.
- Sometimes the nozzle seems to get microscopic debris or something clogging it. The symptom is that plastic will intermittently stop flowing properly and then start again.
  - Differential diagnosis: this can also be caused by filament spool tangling, or (on the first layer) by the nozzle being squished too hard against the bed.
- To convert a 3d model (STL file) into something the printer can print (a gcode file), you use "slicer" software. The slicer software that comes with this printer is not very good, and most people use something else (I use Ultimaker Cura). To teach Cura about my printer, I had to give it a printer profile, which I got from a random stranger's forum post (like you do). It turned out to be subtly wrong, and also to contain a bunch of probably-superstitious crap that doesn't do anything.
  - The foibles of the slicing process would be an entire book on their own. But here's one that's maybe specific to this printer: if I turn on the "z hop" setting in Cura, which should raise the print head when moving around so it doesn't scrape the top of the model, the printer suddenly forgets to ever operate the Z axis at all, and tries to print every layer directly on the bed, in the same space occupied by the previous layers. I don't know whether this is Cura's fault, the printer's fault, or neither (gcode being a very loosely specified language in many respects).
There's a lot more I could say, but since I said I was just outlining, I'll stop there...
I wish I had a stronger strong upvote I could give this post. I was already nodding my head by the time I was done with the introduction, and then almost every subsequent section gave me something to be excited about. I will try to say some more substantive things later, but I wanted to say this first because I often don't get around to commenting.
Up to Guidepost 3, I'm familiar with this approach, sort of independently invented it, and use it with moderate success sometimes.
The guideposts past that, I ~never have remembered experience of. Guidepost 5/6 very occasionally, but if I remember experiencing them, it's probably because I came back to full wakefulness while it was happening. Typically by that point I'm already close enough to count as "starting to sleep". (And I'm counting "experience of getting immersed in nonsensical logic" as guidepost 6; it's never accompanied by imagery past what you describe as guidepost 5.)
(It may be relevant that I have ~aphantasia, and experience minimal to no visual imagery in any context.)
This vaguely suggests that "enough money to move somewhere with better opportunities" ought to be a major threshold where effects should start showing up, if they haven't already. Both because it separates someone from their existing community (removing the community-buffer factor), and because it overcomes the problem of limited opportunities to invest the money in something of durable value.
This feels right to me, and I think matches up well with:
- The discussion in Elizabeth's comment, about informal debts and other invisible money-sinks in the local environment;
- Mingyuan's comment about ... something like, the relative straightforwardness, in the two environments, of turning cash into permanent value.
Unfortunately, it feels like the phase transition might be "escaping the local community", in the model where (as discussed in Elizabeth's twitter thread) the local community expects resources to be shared, and represents roughly a bottomless sink on individual resources. (As well as a source, when needed -- it acts like a buffer with capacity far exceeding what you could give an individual in a program like this as a one-shot payment.)