Posts

May 2024 Newton meetup??? 2024-04-14T22:28:00.161Z
Newton – ACX Meetups Everywhere Spring 2024 2024-03-30T11:20:47.488Z
Stop being surprised by the passage of time 2024-01-10T00:36:51.598Z
ACX Boston - Petrov Day 2023 2023-09-22T01:13:47.227Z
2023 ACX Meetups Everywhere - Newton, MA 2023-08-09T22:47:40.593Z
Meet Hyperion on Sunday Aug 6? 2023-08-05T04:36:02.462Z
Predicting: Quick Start 2023-07-01T03:43:50.379Z
duck_master's Shortform 2023-04-26T03:21:38.508Z

Comments

Comment by duck_master on Can Current AI-Driven Cars Generate True Random Paths? (or, Forever at the Mercy of the Horde) · 2024-03-30T20:28:23.484Z · LW · GW

To be fair, there is no evidence requirement for upvoting, either.

I can see why someone would want this (e.g. Reddit's upvote/downvote system seems terrible), but I think LW is small and homogeneous-ish enough that it works okay here.

Comment by duck_master on Trivial Mathematics as a Path Forward · 2024-01-08T03:59:50.812Z · LW · GW

"AI that can verify itself" seems likely doable for reasons wholly unrelated to metamathematics (unlike what you claim offhandedly) since AIs are finite objects that nevertheless need to handle a combinatorially large space. This has the flavor of "searching a combinatorial explosion based on a small yet well-structured set of criteria" (ie the relatively easy instances of various NP problems), which has had a fair bit of success with SAT/SMT solvers and nonconvex optimizers and evolutionary algorithms and whatnot. I don't think constructing a system that systematically explores the exponentially big input space of a neural network is going to be too hard a challenge.

Also, has anyone actually constructed a specific self-verifying theory yet? (From the English Wikipedia article, if I understand correctly, it seems that Dan Willard came up with a system in which subtraction and division are the primitive operations, with addition and multiplication defined in terms of them, so that multiplication cannot be proven to be a total function.)

Comment by duck_master on Hand-writing MathML · 2023-09-23T18:54:54.418Z · LW · GW

Speaking of MathML, are there other ways to put mathematical formulas into HTML? I know Wikipedia uses <math> tags and its own {{math}} template (here's the help page), but I'm not sure about any others. There's also LaTeX (which I think is the best system for typesetting mathematical formulas in general), as well as some other bespoke things in Google Docs and Microsoft Word that I don't quite understand.
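One concrete route I'm aware of (an assumption on my part that the third-party latex2mathml package still works this way; any LaTeX-to-MathML converter would do) is to write the formula in LaTeX and convert it to MathML, which browsers can render natively:

```python
# Hypothetical usage of the third-party latex2mathml package
# (pip install latex2mathml); the API may differ between versions.
import latex2mathml.converter

mathml = latex2mathml.converter.convert(r"x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}")
print(mathml)  # <math>...</math> markup, ready to paste into an HTML page
```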

Comment by duck_master on Show LW: Get a phone call if prediction markets predict nuclear war · 2023-09-19T01:10:51.102Z · LW · GW

Thank you for placing the limit orders! (You are "Martin Randall" if I understand correctly? I didn't know you were a LessWronger!)

Comment by duck_master on Show LW: Get a phone call if prediction markets predict nuclear war · 2023-09-18T01:38:55.491Z · LW · GW

Thank you for building this! I have just signed up for it.

I've noticed that two of the three Manifold markets (Will a nuclear weapon detonate in New York City by end of 2023? and Will a nuclear weapon cause over 1,000 deaths in 2023?) could use a few thousand mana in subsidies to reduce the chance of a false alarm, even though both are moderately well-traded already. (I've just bet both of them down, but I personally don't have enough mana to feel comfortable subsidizing both.)

Comment by duck_master on Show LW: Get a phone call if prediction markets predict nuclear war · 2023-09-18T01:33:47.802Z · LW · GW

I think this issue could be fixed by lengthening the phone call's message (if one ever gets sent out) to also quote all the comments on the sentinel markets from the last ~week before the trigger time. I expect that, if there were ever legitimate signs of an impending nuclear war, people would leave plenty of comments about those signs on the relevant markets.
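As a sketch of what I mean (assuming Manifold's public API exposes GET /v0/comments?contractSlug=... with millisecond createdTime fields, which should be double-checked against the current docs; the slug below is hypothetical):

```python
import json
import time
import urllib.request

def recent_comments(slug: str, days: int = 7) -> list:
    # Fetch comments on a sentinel market from the last `days` days.
    url = f"https://api.manifold.markets/v0/comments?contractSlug={slug}"
    with urllib.request.urlopen(url) as resp:
        comments = json.load(resp)
    cutoff_ms = (time.time() - days * 86400) * 1000
    return [c for c in comments if c.get("createdTime", 0) >= cutoff_ms]

# The text of these comments could then be appended to the alert message.
comments = recent_comments("will-a-nuclear-weapon-detonate-in-nyc")  # hypothetical slug
```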

Comment by duck_master on 2023 ACX Meetups Everywhere - Newton, MA · 2023-09-09T03:13:12.262Z · LW · GW

Update: I have tested negative for COVID-19 twice with self-tests, but since I still feel ill, I recommend that participants mask up anyways (it could be the common cold or flu, for all I know). 

Comment by duck_master on Book Swap · 2023-09-09T00:39:11.132Z · LW · GW

Thank you for the comment! I will not attend this since you stated that they check IDs at the door.

Comment by duck_master on 2023 ACX Meetups Everywhere - Newton, MA · 2023-09-08T20:44:01.949Z · LW · GW

Two recent things that will likely affect this meetup:

  • Firstly, according to the weather forecast, it will rain on Saturday around the time of the meetup, particularly towards its planned end. Please bring umbrellas.
  • Secondly, I might have COVID-19 (which I suspect I caught on Thursday night). As such, I will wear a mask throughout the meetup, and I encourage all of you to do the same.

Thanks for your attention!

Comment by duck_master on Book Swap · 2023-09-07T13:06:58.216Z · LW · GW

Question: I’m not old enough to drink alcohol, and I think this place is a bar - but would I even be allowed in the bar?

Comment by duck_master on ACX Meetups Everywhere 2023: Times & Places · 2023-08-26T01:51:56.191Z · LW · GW

Here's a manually sorted list of meetup places in the USA, somewhat arbitrarily/unscientifically grouped by region for even greater convenience. I spent the past hour on this, so please make good use of it. (Warning: this is a long comment.)

NEW ENGLAND

  • Connecticut: Hartford
  • Massachusetts: Cambridge/Boston, Newton, Northampton
  • Vermont: Burlington

MID-ATLANTIC

  • DC: Washington
  • Maryland: Baltimore, College Park
  • New Jersey: Princeton
  • New York State: Java Village/Buffalo, Manhattan/New York City, Massapequa, Rochester
  • Pennsylvania: Harrisburg, Philadelphia, Pittsburgh
  • Virginia: Charlottesville, Norfolk, Richmond
  • West Virginia: Charlestown

MIDWEST

  • Michigan: Ann Arbor, Jackson
  • Illinois: Chicago, Urbana-Champaign
  • Indiana: South Bend, West Lafayette
  • Ohio: Cincinnati, Cleveland, Columbus, Toledo
  • Wisconsin: La Crosse, Madison, Stone Lake

SOUTHEAST

  • Alabama: Huntsville, Tuscaloosa
  • Florida: Fort Lauderdale, Gulf Breeze, Miami, West Palm Beach
  • Georgia: Atlanta
  • North Carolina: Asheville, Charlotte, Durham
  • Tennessee: Memphis

SOUTHWEST

  • Arizona: Phoenix, Tucson
  • Arkansas: Fayetteville
  • Colorado: Boulder, Carbondale, Denver
  • Louisiana: New Orleans
  • Missouri: Kansas City, St. Louis
  • Nevada: Las Vegas
  • New Mexico: Taos
  • Texas: Austin, College Station, Dallas, Houston, Lubbock, San Antonio, Westlake
  • Utah: Logan, Salt Lake City

NORTHWEST

  • Alaska: Anchorage
  • Minnesota: Minneapolis
  • South Dakota: Sioux Falls
  • Oregon: Corvallis, Eugene, Portland
  • Washington State: Bellingham, Redmond, Seattle

CALIFORNIA (subdivided)

  • Bay Area/Silicon Valley: Berkeley/Oakland, San Francisco, Sunnyvale
  • Central Valley: Davis, Grass Valley, Sacramento
  • Southern California: El Centro, Los Angeles, Newport Beach, San Diego

Comment by duck_master on Walk while you talk: don't balk at "no chalk" · 2023-08-25T02:45:10.194Z · LW · GW

This is a good suggestion! I'll plan on walking in addition to talking during my upcoming meetup.

Comment by duck_master on Double Crux in a Box · 2023-08-21T22:26:32.804Z · LW · GW

I’m in the park now; how can I identify you?

Comment by duck_master on Double Crux in a Box · 2023-08-21T02:03:35.432Z · LW · GW

@Screwtape I can make this, but there is a different thing I also want to go to at 7:30pm.

Comment by duck_master on How to decide under low-stakes uncertainty · 2023-08-12T01:15:08.425Z · LW · GW

This is an excellent tip! I plan on using it from now on in my day-to-day life.

Comment by duck_master on Open Thread - July 2023 · 2023-08-01T01:21:14.379Z · LW · GW

I haven't used GPT-4 (I'm no accelerationist, and I don't want to bother with subscribing), but I have tried ChatGPT for this use. In my experience it's useful for finding small cosmetic changes to make and for fixing typos and small grammar mistakes, but I tend to avoid copy-pasting the result wholesale. Also, I tend to work with texts much shorter than posts, since ChatGPT's shortish context window becomes an issue for decently long posts.

Comment by duck_master on Open Thread - July 2023 · 2023-08-01T01:16:01.235Z · LW · GW

Hello LessWrong! I'm duck_master. I've lurked around this website since roughly the start of the SARS-CoV-2/COVID-19 pandemic, but I've never really been very active until now (in fact, I wrote my first-ever post last month). I have been around in the AstralCodexTen comment section and on Discord, though, among a half-dozen other websites and platforms. Here's my personal website (note: rarely updated) for your perusal.

I am a lifelong mathematics enthusiast and a current MIT student. (I'm majoring in mathematics and computer science; I added the latter out of peer pressure, since computer science is really taking off these days.) I am particularly interested in axiomatic mathematics, formal theorem provers, and the P vs. NP problem, though I typically won't complain about anything mathematical as long as the relevant abstraction tower isn't too high (and I could potentially pivot to applied math in the future).

During the height of the pandemic in mid-2020, I initially "converted" to rationalism (previously I had been a Christian), but I never really followed through, and I actually became more irrational over the course of 2021 and 2022 (not even in a metarational way, but purely in a my-life-is-getting-worse way). This year, I am hoping to connect with the rationalist and postrat communities more and to be more systematic about my rationality practice.

Comment by duck_master on Neuronpedia · 2023-07-27T02:13:50.800Z · LW · GW

Thank you for creating this website! I’ve signed up and started contributing.

One tip I have for other users: many of the neurons are not about vague sentiments or topics (as most of the auto-suggested explanations claim), but rather about very specific keywords or turns of phrase. I'd even guess that many of the neurons are effectively regexes.
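To illustrate what I mean by "effectively regexes" (a toy construction of my own, not Neuronpedia code): a keyword-like neuron behaves as if it were a pattern matcher that fires sharply on one turn of phrase and stays silent otherwise.

```python
import re

# A toy "neuron" that acts like a regex detector for a specific phrase.
pattern = re.compile(r"\bas of yet\b", re.IGNORECASE)

def toy_neuron_activation(text: str) -> float:
    # Sharp on/off behavior, like the keyword-like neurons described above.
    return 1.0 if pattern.search(text) else 0.0

print(toy_neuron_activation("No update as of yet."))   # 1.0
print(toy_neuron_activation("The weather is nice."))   # 0.0
```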

Also, Neuronpedia apparently cut me off for the day after I did ~20 neuron puzzles. Raising this limit for power users (or something like that) could be beneficial.

Comment by duck_master on Please speak unpredictably · 2023-07-24T02:57:47.184Z · LW · GW

This post illustrates another key point: not only should your posts be surprising, they should be the kind of surprise that prompts good actions.

Comment by duck_master on duck_master's Shortform · 2023-04-26T03:21:39.193Z · LW · GW

Exactly what it says on the tin.

Thoughts I want to expand on for later:

  • Rationality/philosophical tip: stop being surprised by the passage of time
  • Possible confusingness of the Sequences?
  • People * infrastructure = organization (both factors need to exist)
  • No "intro to predicting" guide so far; writing a good one would decrease the activation energy to predict well
  • Impurity (as in impure functions) as a source of strength
    • Contrastingly, the ills of becoming too involved (eg internet dramas, head overflowing with thoughts)
  • Writing a personal diary more frequently (which I really want to do)
    • Also, meditating and playing piano more

Comment by duck_master on You are allowed to edit Wikipedia · 2021-07-06T15:05:35.608Z · LW · GW

This is an extremely important point. (I remember thinking a long time ago that Wikipedia just Exists, and that although random people are allowed to edit it, doing it is generally Wrong.) FWIW I'm an editor now - User:Duckmather.

Comment by duck_master on The Neglected Virtue of Scholarship · 2021-06-14T03:21:04.176Z · LW · GW

In fact, organized resources like Wikipedia, the LW sequences, the SEP, etc. are basically amortized scholarship. (This is particularly true for Wikipedia; its entire point is that we find vaguely-related content from around - or beyond - the web and then paraphrase it into a mildly-coherent article. Source: am a Wikipedia editor.)

Comment by duck_master on Measure's Shortform · 2021-06-11T00:32:18.989Z · LW · GW

Maybe flow?

Comment by duck_master on Bad names make you open the box · 2021-06-11T00:30:49.946Z · LW · GW

I also agree that, for the purpose of previewing the content, this post is poorly titled (maybe it should be titled something like "Having bad names makes you open the black box of the name", except more concise?), although, for me, it wasn't so much that I stuck to a particular wrong interpretation as that I found the entire title unclear.

Comment by duck_master on Problems facing a correspondence theory of knowledge · 2021-06-11T00:10:41.917Z · LW · GW

Thanks for the reply. I take it that not only are you interested in the idea of knowledge, but that you are particularly interested in the idea of actionable knowledge. 

Upon further reflection, I realize that all of the examples and partial definitions I gave in my earlier comment can in fact be summarized in a single, simple definition: a thing X has knowledge of a fact Y iff it contains some (sufficiently simple) representation of Y. (For example, a rock knows about the affairs of humans because it has a representation of those affairs in the form of Fisher information, which is simple enough for facts-about-the-world.)

Using this definition, it becomes much easier to define actionable knowledge: a thing X has actionable knowledge of a fact Y iff it contains some representation of Y so simple that an agent with access to it could (with sufficiently minimal difficulty) act on fact Y. (For example, I have actionable knowledge that 1 + 1 = 2, because my internal representation of this fact is so simple that I can literally type up its statement in a comment.) It also becomes clearer that actionable knowledge and knowledge are not the same: for example, the knowledge about the world held by a computer that records only cryptographic hashes of everything it observes could not be acted upon without breaking the hashes, which is presumably infeasible.

So as for the human psychology/robot vacuum example: if your robot vacuum's internal representation of human psychology is complex (say, consisting only of video recordings of humans), then it's not actionable knowledge and your robot vacuum can't act on it; if it's sufficiently simple, such as a low-complexity-yet-high-fidelity executable simulation of a human psyche, your robot vacuum can. My intuition also suggests that your robot vacuum's knowledge of human psychology is actionable iff it has a succinct representation of the natural abstraction of "human psychology" (I think this might generalize: knowledge is actionable iff it's succinct when described in terms of natural abstractions), and that finding out whether your robot vacuum's knowledge is sufficiently simple is essentially a matter of interpretability. As for the betting thing, the simple unified definition that I gave in the last paragraph applies as well.

Comment by duck_master on Problems facing a correspondence theory of knowledge · 2021-06-10T22:53:30.479Z · LW · GW

I think knowledge as a whole cannot be absent, but knowledge of a particular fact can definitely be absent (if there's no relationship between the thing-of-discourse and the fact).

Comment by duck_master on Predict responses to the "existential risk from AI" survey · 2021-05-28T22:36:16.382Z · LW · GW

Since this is literally a question about soliciting predictions, it should have one of those embedded-interactive-predictions-with-histograms gadgets* to make predicting easier. Also, it might be worth having two prediction gadgets, since this is basically a prediction of a prediction: one gadget to predict what Recognized AI Safety Experts (tm) will predict about how much damage unsafe AIs will do, and one gadget to predict how much damage unsafe AIs will actually do (to mitigate the weird second-order effects of predicting a prediction).

*I'm not sure what they're supposed to be called.

Comment by duck_master on Problems facing a correspondence theory of knowledge · 2021-05-28T16:16:40.619Z · LW · GW

Au contraire, I think that "mutual information between the object and the environment" is basically the right definition of "knowledge", at least for knowledge about the world (as it correctly predicts that all four attempted "counterexamples" are in fact forms of knowledge), but that the knowledge of an object also depends on the level of abstraction of the object which you're considering.

For example, for your rock example: a rock, as a quantum object, is continually acquiring mutual information with the affairs of humans via the imprinting of subatomic information onto the surface of the rock by photons bouncing off the Earth. This means that, if I were to examine the rock-as-a-quantum-object for a really long time, I would know the affairs of humans (due to the subatomic imprinting of this information on the surface of the rock), and not only that, but also the complete workings of quantum gravity, the exact formation of the rock, the exact proportions of each chemical that went into producing the rock, the crystal structure of the rock, and the exact sequence of (micro-)chips/scratches that went into making this rock into its current shape. I feel perfectly fine counting all this as the knowledge of the rock-as-a-quantum-object, because this information about the world is stored in the rock.

(Whereas, if I were only allowed to examine the rock-as-a-macroscopic-object, I would still know roughly what chemicals it was made of and how they came to be, and the largest fractures of the rock, but I wouldn't know about the affairs of humans; hence, such is the knowledge held by the rock-as-a-macroscopic-object. This makes sense because the rock-as-a-macroscopic-object is an abstraction of the rock-as-a-quantum-object, and abstractions always throw away information except that which is "useful at a distance".)
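To make the definition concrete, here's a minimal sketch (a toy joint distribution I made up, not anything from the post) of computing the mutual information between a rock's state and its environment; a nonzero value is what the proposed definition counts as "knowledge":

```python
import math

# Toy joint distribution p(rock_state, environment_state).
joint = {
    ("weathered", "rainy"): 0.4,
    ("weathered", "dry"):   0.1,
    ("smooth",    "rainy"): 0.1,
    ("smooth",    "dry"):   0.4,
}

def mutual_information(p_xy):
    # I(X;Y) = sum over (x, y) of p(x,y) * log2(p(x,y) / (p(x) * p(y))).
    p_x, p_y = {}, {}
    for (x, y), p in p_xy.items():
        p_x[x] = p_x.get(x, 0.0) + p
        p_y[y] = p_y.get(y, 0.0) + p
    return sum(
        p * math.log2(p / (p_x[x] * p_y[y]))
        for (x, y), p in p_xy.items()
        if p > 0
    )

print(f"I(rock; environment) = {mutual_information(joint):.3f} bits")  # ~0.278
```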

For more abstract kinds of knowledge, my intuition defaults to question-answering/epistemic-probability/bet-type definitions, at least for sufficiently agent-y things. For example, I know that 1+1=2. If you were to ask me, "What is 1+1?", I would respond "2". If you were to ask me to bet on what 1+1 was, in such a way that the bet would be instantly decided by Omega, the omniscient alien, I would bet with very high probability (maybe 40:1 odds in favor, if I had to come up with concrete numbers?) that it would be 2 (not probability 1, because of Cromwell's law, and also because maybe my brain's mental arithmetic functions are having a bad day).

However, I do not know whether the Riemann Hypothesis is true, false, or independent of ZFC. If you asked me, "Is the Riemann Hypothesis true, false, or independent of ZFC?", I would answer "I don't know" instead of choosing one of the three possibilities, because I don't know. If you asked me to bet on whether the Riemann Hypothesis was true, false, or independent of ZFC, with the bet to be instantly decided by Omega, I might bet 70% true, 20% false, and 10% independent (totally made-up semi-plausible figures that have no bearing on the heart of the argument; I haven't really tested my probabilistic calibration), but I wouldn't put >95% implied probability on anything, because I'm not that confident in any one possibility.

Thus, for abstract kinds of knowledge, I would say that an agent (or a sufficiently agent-y thing) knows an abstract fact X if it tells you about this fact when prompted with a suitably phrased question, and/or if it places (or would place) a bet in favor of fact X with very high implied probability when prompted to bet about it.
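As a quick sanity check of the odds arithmetic above (my own aside, not part of the original argument): 40:1 odds in favor correspond to an implied probability of 40/41, or about 97.6%.

```python
def odds_to_probability(in_favor: float, against: float) -> float:
    # Implied probability from odds quoted as "in_favor : against".
    return in_favor / (in_favor + against)

print(odds_to_probability(40, 1))  # 0.9756..., i.e. ~97.6%
```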

(One problem with this definition is that, intuitively, when I woke up today, I had no idea what 384384*20201 was; the integers here are also completely arbitrary. However, after I typed it into a calculator and got 7764941184, I now know that 384384*20201 = 7764941184. I think this is also known as the problem of logical omniscience; Scott Aaronson once wrote a pretty nice essay about this topic and others from the perspective of computational complexity.)

I have basically no intuition whatsoever on what it means for a rock* to know that the Riemann Hypothesis is true, false, or independent of ZFC. My extremely stupid and unprincipled guess is that, unless a rock is physically inscribed with a proof of the true answer, it doesn't know, and that otherwise it does.

*I'm using a rock here as a generic example of a clearly-non-agentic thing. Obviously, if a rock was an agent, it'd be a very special rock, at least in the part of the multiverse that I inhabit. Feel free to replace "rock" with other words for non-agents.

Comment by duck_master on "Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party · 2021-05-19T14:28:46.611Z · LW · GW

Bumping it again.

Comment by duck_master on "Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party · 2021-04-25T19:27:40.289Z · LW · GW

Bumping this.

Comment by duck_master on The secret of Wikipedia's success · 2021-04-22T23:26:16.406Z · LW · GW

I think this applies to every wiki ever, and also to this very site. There are probably a lot of others that I'm missing, but this is a start.

Comment by duck_master on The secret of Wikipedia's success · 2021-04-22T23:24:59.533Z · LW · GW

I agree with you (meaning G Gordon Worley III) that Wikipedia is reliable, and I too treat it as reliable. (It's so well-known as a reliable source that even Google uses it!) I also agree that an army of bots and humans undoes any defacing that may occur, and that Wikipedia's having to depend on other sources helps keep it unbiased. I also agree with the OP that Wikipedia's status as not-super-reliable among the Powers that Be does help somewhat.

So I think that the actual secret of Wikipedia's success is a combination of the two: mild illegibility prevents rampant defacement, and citations do the rest. If Wikipedia were both viewed as Legibly Completely Accurate and also didn't cite anything, then it would be defaced to hell and back and rendered meaningless; but even if everyone somehow decided one day that Wikipedia was ultra-accurate and also that they had a supreme moral imperative to edit it, I still think that Wikipedia would turn out okay as a reliable source if it made the un-cited content very obvious, e.g. if each [citation needed] tag were put in size-128 Comic Sans and accompanied by an earrape siren*, and even if there were just a bot that put those tags after literally everything without a citation**. (If Wikipedia is illegible, of course it's going to be fine.)

*I think trolls might work around this by citing completely unrelated things, but this problem sounds like it could be taken care of by humans or by relatively simple NLP.

**This contravenes current Wikipedia policy, but in the worst-case scenario of Ultra-Legible Wikipedia, I think it would quickly get repealed.

Comment by duck_master on "Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party · 2021-04-22T23:12:13.206Z · LW · GW

@Diffractor: I think I got a MIRIxDiscord invite in a way somehow related to this event. Check your PMs for details. (I'm just commenting here to get attention because I think this might be mildly important.)

Comment by duck_master on "Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party · 2021-03-28T21:15:37.861Z · LW · GW

Don't worry, it was kind of a natural stopping point anyways, as the discussion was winding down.

Comment by duck_master on "Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party · 2021-03-28T21:05:48.159Z · LW · GW

...and it's closed.

Comment by duck_master on "Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party · 2021-03-28T20:59:29.072Z · LW · GW

"Mixture of infra-distributions" as in convex set, or something else? If it's something else then I'm not sure how to think about it properly.

Comment by duck_master on "Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party · 2021-03-28T20:58:30.037Z · LW · GW

Me too. I currently only have a very superficial understanding of infraBayesianism (all of which revolves around the metaphysical, yet metaphorical, deity Murphy).

Comment by duck_master on "Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party · 2021-03-28T20:48:36.777Z · LW · GW

More specifically: if two points are in a convex set, then the entire line segment connecting them must also be in the set.
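In symbols (the standard definition of convexity, nothing infra-Bayesian-specific):

```latex
C \text{ is convex} \iff \forall x, y \in C,\ \forall t \in [0, 1]:\ t x + (1 - t) y \in C
```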

Comment by duck_master on "Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party · 2021-03-28T20:48:08.781Z · LW · GW

Here's an ELI5: The evil superintelligent deity Murphy, before you were ever conceived, picked the worst possible world that you could live in (meaning the world where your performance is worst), and you have to use fancy math tricks to deal with that.
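A toy sketch of the decision rule behind this picture (my own illustration with made-up payoffs, not anything from the talk): evaluate each plan by the worst world Murphy could pick for it, then choose the plan whose worst case is best.

```python
# Hypothetical payoffs: payoffs[plan][world].
payoffs = {
    "cautious": {"world_a": 3, "world_b": 2, "world_c": 2},
    "bold":     {"world_a": 9, "world_b": 0, "world_c": 1},
}

def maximin_plan(payoffs):
    # Murphy minimizes over worlds; the agent maximizes that minimum.
    return max(payoffs, key=lambda plan: min(payoffs[plan].values()))

print(maximin_plan(payoffs))  # "cautious": its worst case (2) beats bold's (0)
```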

Comment by duck_master on "Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party · 2021-03-28T20:46:59.239Z · LW · GW

I think that if you imagine the deity Murphy trying to foil your plans whatever you do, that gives you a pretty decent approximation to true infraBayesianism.

Comment by duck_master on "Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party · 2021-03-28T20:45:47.430Z · LW · GW

Google doc where we posted our confusions/thoughts earlier: https://docs.google.com/document/d/1lKG_y_Voe02OkRGG9yaxtMuGM_dQBUKjj9DXTA8rMxE/edit

My ongoing confusions/thoughts:

  • What if the super intelligent deity is less than maximally evil or maximally good? (E.g. the deity picking the median-performance world)
  • What about the Dutch-bookability of infraBayesians? (the classical Dutch-book arguments seem to suggest pretty strongly that non-classical-Bayesians can be arbitrarily exploited for resources)
  • Is there a meaningful metaphysical interpretation of infraBayesianism that does not involve Murphy? (similarly to how Bayesianism can be metaphysically viewed as "there's a real, static world out there, but I'm probabilistically unsure about it")