Posts

Are people here still looking for crypto tips? 2021-10-19T09:28:25.037Z
How to deal with unknown probability distributions? 2021-10-18T19:08:48.839Z
Forget AGI alignment, are you aligned with yourself? 2021-10-13T16:20:06.962Z
Natural computing on stars 2021-10-09T09:07:18.211Z
Are democratic elections incompetence-as-a-service? 2021-10-08T13:33:48.745Z
Does any of AI alignment theory also apply to supercharged humans? 2021-10-07T14:43:55.322Z
What do determinists here think about free will and Chalmer's hard problem of consciousness? 2021-09-30T15:38:51.914Z
Samuel Shadrach's Shortform 2021-09-30T09:33:04.171Z

Comments

Comment by Samuel Shadrach (samuel-shadrach) on Are people here still looking for crypto tips? · 2021-10-19T19:23:08.248Z · LW · GW

Totally agree, a random internet tip shouldn't substitute for your own research. That being said, I'm happy to share my research and discuss further why I believe what I do. And of course prove that I am not, in fact, running a pump-and-dump scam :p I mean, yes, I do own the tokens I'm mentioning here, but I can reasonably prove I'm not on their core teams, and I can prove that the tokens are in fact linked to legitimate products (or at least as legitimate as things get in the crypto space).

Comment by Samuel Shadrach (samuel-shadrach) on How to deal with unknown probability distributions? · 2021-10-19T16:55:55.034Z · LW · GW

Just checked, seems useful, although I'm unable to draw the connection to this post. Could you please elaborate?

Comment by Samuel Shadrach (samuel-shadrach) on How to deal with unknown probability distributions? · 2021-10-19T16:47:39.303Z · LW · GW

Got you.

I'm not sure there is a sharp boundary between "I know it came from a die but which die is nebulous" and "I know it came from this universe but where exactly from is nebulous". By nebulous for the die I mean assuming you aren't actually given probability weights for the dice themselves. So you'll still have to use nebulous reasoning like: maybe I assume the experimenter likes bigger dice, or maybe I assume all dice have an equal chance of occurring because uniform distributions are "natural" in some sense.

Questions about the domain are interesting, but maybe I could have just started the Q by saying x comes from the set of reals, or the reals inside [0,1]. Questions where I don't even know the domain of x are something I didn't consider; I'm not sure that's mathematically well-formed, but heck, why not ask it anyway? :p

Comment by Samuel Shadrach (samuel-shadrach) on How to deal with unknown probability distributions? · 2021-10-19T15:17:43.338Z · LW · GW

I didn't follow the first part. Can you elaborate on both "a unknown thing that is a probability distribution" and "a thing that a probability distribution doesn't exists for"?

Agreed regarding the universe not being Newtonian. I sorta replied to @Vladimir_Nesov on how the existence / non-existence of an elegant "theory of everything" could have implications for reductionism as a principle.

Comment by Samuel Shadrach (samuel-shadrach) on Are people here still looking for crypto tips? · 2021-10-19T15:07:05.869Z · LW · GW

All data on a blockchain is public, so yes, you can verify: https://etherscan.io/token/0x6c806eddad78a5505fce27b18c6f859fc9739bec#balances
https://etherscan.io/token/0x6c806eddad78a5505fce27b18c6f859fc9739bec#tokenTrade

And no, there is no trust involved (except for lost gas fees); it's done using a smart contract. One person creates a limit order, the other person fills it. You can use Uniswap v3 or 1inch.
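To make the trust model concrete, here's a minimal Python sketch of the escrow pattern such a limit-order contract implements. This is illustrative pseudocode of the general idea, not actual Uniswap v3 or 1inch contract code (which is Solidity); all names are made up:

```python
class Token:
    """Toy ERC-20-style ledger: balances keyed by account."""
    def __init__(self):
        self.balances = {}

    def transfer(self, sender, receiver, amount):
        assert self.balances.get(sender, 0) >= amount, "insufficient balance"
        self.balances[sender] = self.balances.get(sender, 0) - amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount


class EscrowLimitOrder:
    def __init__(self, maker, sell_token, sell_amount, buy_token, buy_amount):
        # The maker escrows the tokens they want to sell into the contract
        # itself, so the taker never has to trust the maker to deliver.
        self.maker, self.open = maker, True
        self.sell_token, self.sell_amount = sell_token, sell_amount
        self.buy_token, self.buy_amount = buy_token, buy_amount
        sell_token.transfer(maker, self, sell_amount)

    def fill(self, taker):
        # Both legs of the swap happen in one atomic step: the taker pays
        # the maker, and the contract releases the escrowed tokens.
        assert self.open, "order already filled or cancelled"
        self.buy_token.transfer(taker, self.maker, self.buy_amount)
        self.sell_token.transfer(self, taker, self.sell_amount)
        self.open = False

    def cancel(self):
        # The maker can reclaim escrowed funds while the order is unfilled.
        assert self.open, "order already filled or cancelled"
        self.sell_token.transfer(self, self.maker, self.sell_amount)
        self.open = False
```

The only remaining trust assumption is the contract code itself (plus the gas spent if the order never fills), which is what "no trust involved" means above.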

Comment by Samuel Shadrach (samuel-shadrach) on Are people here still looking for crypto tips? · 2021-10-19T15:05:31.152Z · LW · GW

Yep, there are escrow smart contracts. One person creates a limit order, the other person fills it. You can do this on Uniswap v3 or 1inch.

TrueFi seems cool since they do KYC rather than just on-chain scores. Will they use traditional means to sue borrowers who refuse to pay back loans?

Comment by Samuel Shadrach (samuel-shadrach) on Are people here still looking for crypto tips? · 2021-10-19T14:06:33.277Z · LW · GW

True. Although my timeframe right now is closer to 1 year than 3, and I would also recommend that anyone holding these check once a month for any important news that changes things. Maybe I should edit that into the post.

Comment by Samuel Shadrach (samuel-shadrach) on How to deal with unknown probability distributions? · 2021-10-19T13:04:33.795Z · LW · GW

Thank you for this! Kolmogorov OP.

Btw regarding my response (if anyone here cares):

I wonder which of the two principles is more influential (in a Bayesian sense).

 - Anthropic principle: universes in which humans can exist are more likely to occur.
 - Solomonoff applied to the universe: universes whose rules are expressible with Turing machines with fewer states are more likely to occur.

Gut says anthropic principle. The Solomonoff thing doesn't really push at the boundaries of our imagination on possible universes. But maybe you apply both? First anthropic, then Solomonoff. I wonder if the Kolmogorov complexity of our universe's rules is in fact small. If we fail to find a "theory of everything", that seems to go against using Kolmogorov complexity, and against reductionism.
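For reference, the standard formalisation behind that second principle is Solomonoff's universal prior over a universal prefix Turing machine U. This is textbook material, not anything specific to this thread:

```latex
% Universal prior: weight each output x by every program p that prints it,
% with shorter programs counting exponentially more.
m(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|}

% Kolmogorov complexity: length of the shortest such program,
% which dominates the sum, so m(x) \approx 2^{-K(x)}.
K(x) = \min \{\, |p| : U(p) = x \,\}
```

"Turing machines with fewer states" above corresponds to universes whose rules have low K.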

Keen on people's thoughts.

Formalising everything using Turing machines still seems amazing to me. If Occam's razor can be formalised, why not everything else in the Sequences? Wish someone did all the decision theory stuff using Turing machines.

Comment by Samuel Shadrach (samuel-shadrach) on Are people here still looking for crypto tips? · 2021-10-19T10:15:46.920Z · LW · GW

Originally DMed but I might as well put it here:

1. RPL (rocketpool)

Thesis: Ethereum is switching from PoW to PoS early next year (don't trust their timelines, but yea). Lots of hobbyist miners switching off, lots of hobbyist stakers switching on. Rocketpool offers a value-add service on top of the in-protocol staking. They have subsequently attracted a huge community; there are likely more Rocketpool hobbyist stakers than purely in-protocol ones. The TAM is huge: easily 10-25% of Ethereum's circulating supply will be staked. The best-case scenario (>10% odds imo) is that a majority goes through Rocketpool; the median case is they still capture at least $1B imo, so it's a profit.

To buy: need to know how to use MetaMask + Uniswap; need $10k for the gas fees to be worth it.

2. RGT (rari governance token)
or REPT-B (rari ETH pool governance token bond)

Thesis: I worked with the team for 4 months and I'm hugely bullish on their capability to execute. Startup mindset in a world where not everyone has that razor-sharp focus. Their product is great, and to be very honest it might be a bit late for those crazy 10x gains. But 10x from here is still not impossible.

An even better bet is buying REPT. Rari got hacked in May but didn't have the funds to pay back users, so they issued an IOU. Now they kinda have the funds (in RGT, not USD) but it's unclear whether they will pay out. I'm personally highly confident they will pay out if Rari grows 2x from here, and possibly even if Rari remains in its current state for a longer duration. Each token is an IOU for $1; you'll be able to buy it for around $0.2.
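To spell out the implied bet (my arithmetic, not part of the original comment), treating each token as paying either $1 or $0:

```latex
% Expected value of one REPT IOU with payout probability p,
% ignoring time value of money and partial payouts:
\mathrm{EV} = p \cdot \$1 + (1 - p) \cdot \$0 = p

% Bought at $0.2, the bet is +EV iff p > 0.2 / 1 = 20\%.
```

So the "highly confident" claim above only needs to translate into a better-than-1-in-5 payout chance for the price to make sense.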

To buy: RGT from Coinbase, or using MetaMask + Uniswap; if using MetaMask, you need $10k for the gas fees to be worth it.

REPT does not have a liquid market; you will have to negotiate an OTC deal with a random person in the Discord. This may seem daunting, but the whole thing can be done anonymously within a day.

--

Oh and btw, I'm also bullish on ETH itself for a bunch of reasons, but I've gone full degen and bought stuff on top of Ethereum that can outperform ETH while still getting you all the gains ETH makes, due to correlation.

Links:
https://rari.capital/
https://discord.gg/HzUMPuT
https://www.coingecko.com/en/coins/rari-governance-token

https://www.rocketpool.net/
https://discordapp.com/invite/tCRG54c
https://www.coingecko.com/en/coins/rocket-pool

Edit: I'd recommend a 1-year timeframe for the investment (can be extended if things go well), and monitoring news about the projects, ideally once a month. (You won't be first to react to the news of course, but if the thesis is invalidated you'll know to exit.)

Comment by Samuel Shadrach (samuel-shadrach) on Humans are the universal economic bottleneck · 2021-10-19T07:14:44.392Z · LW · GW

Aside from working out distribution inefficiencies and similar, this is the unique limit on prosperity.

Why is this limit unique? Why can't we be working on "distribution inefficiencies and similar" for the next 100 years?

Maybe I'll ask this: does your statement regarding the universal bottleneck apply exclusively to humans? Or does it also apply to apes and bacteria and AI? Cause some of it seems tautological - doing more total work means finding ways to do more work per unit time.

Comment by Samuel Shadrach (samuel-shadrach) on Humans are the universal economic bottleneck · 2021-10-19T05:31:53.930Z · LW · GW

Can you expand on what you mean by "universal economic bottleneck"? Given two bottlenecks - say humans and capital investment in a sector - are you saying humans are:

 - always slower? not true imo, sometimes all the humans already interested are not rich, so they need to wait for the rich folks to come invest in physical resources

 - more necessary? not true imo, both are just as necessary; the project is dead in the water without capital or physical resources

I agree if your statement is as broad as "human innovation is responsible for exponential tech" - but then I'm not sure that's new info. Of course apes would not grow at the same rate we do.

Comment by Samuel Shadrach (samuel-shadrach) on Your Time Might Be More Valuable Than You Think · 2021-10-19T05:07:26.204Z · LW · GW

Yup, true. I'm assuming it's something like a surgeon or researcher, where more years do in fact expose you to more useful data. I'm wondering if that translates into more useful insights or discoveries though. Can the surgeon remember 10 years of new insights without forgetting anything from the previous 20?

Comment by Samuel Shadrach (samuel-shadrach) on How to deal with unknown probability distributions? · 2021-10-19T05:04:50.553Z · LW · GW

You always have some context. 

Practically, yep. One level is usually enough.

But philosophy is often about going meta for the heck of it, and (sometimes but not necessarily) hoping that those high-level insights also cascade down to practical solutions.

Also, I agree that what happens if you're right or wrong matters for any practical computational device (such as a human). If I try formalising it, I guess you can define a utility function on top of the probability distribution for how much you care about getting the probabilities of X correct in different ranges, and then map that to practical algorithms that prioritise getting only the info you care most about in finite time.
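A minimal way to write that down (my own notation, nothing standard from the thread): let p be the true density of X, p̂ your estimate, and w(x) ≥ 0 how much you care about being right near x:

```latex
% Care-weighted loss over the estimate \hat{p}; a bounded-compute
% algorithm then spends its effort where w(x) is large.
L(\hat{p}) = \int w(x) \, \big( \hat{p}(x) - p(x) \big)^2 \, dx
```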

But I think it's still worth asking without specifying what you care most about in a given situation. Same "meta for the heck of it" spirit. I care about any insight that gets me any closer to objective truth than I already am.

Comment by Samuel Shadrach (samuel-shadrach) on How to deal with unknown probability distributions? · 2021-10-19T04:41:26.576Z · LW · GW

Omg yes, that's very close. Thank you for the phrase.

Comment by Samuel Shadrach (samuel-shadrach) on Explaining Capitalism Harder · 2021-10-18T18:35:22.785Z · LW · GW

I also see pro-capitalism as a bad frame compared to anti-X simply because capitalism is a lot more natural than other social structures (in my mind).

If you have a large-scale society that needs ways to cooperate on resource use while having conflicting goals, common units of accounting and exchange emerging is very natural. It does not need top-down design, or designers, or implementers to come into existence. Similarly, the idea of a band of people coming together to share ownership of some activity is very natural too. Hence the notion of a corporation.

Democracy, on the other hand, is very consciously designed - a mechanism for governance doesn't spontaneously burst into three pillars of governance, each with asymmetric but balanced checks against the others. Dictatorship feels a lot more natural to me.

Large portions of what counts as capitalism today don't exist because they are good or better than other systems, or because anybody reasoned it out that way; they just exist because that's the path of least resistance. So a pro-capitalism stance is usually an anti-X stance, where X is some consciously designed proposal not to go down this least-resistance path. The onus is on the person presenting X to weigh both benefits and harms, at all levels, from the ideals to the practical implementation to the transition phase.

Comment by Samuel Shadrach (samuel-shadrach) on Your Time Might Be More Valuable Than You Think · 2021-10-18T17:58:22.665Z · LW · GW

Genuine question I have: is 30 years of experience really worth much more than 20 years of experience? Would be keen to hear from people who actually have that kind of experience.

Cause my gut is inclined towards no, because of a) finite memory to retain insights, b) finite max depth at which you can explore one field, c) too much depth might have diminishing returns. But it's definitely not a strong opinion; happy to be 100% wrong.

Comment by Samuel Shadrach (samuel-shadrach) on Your Time Might Be More Valuable Than You Think · 2021-10-18T17:53:41.599Z · LW · GW

"his time doesn't seem as high leverage" because he chooses not to work on high leverage stuff or because he is incapable of it? For instance, if he provides advice and resources to 10 new startups (or 10 new exploratory divisions in his company), could that be equivalent to him doing a startup himself? Plus he has a ton of experience now.

I don't have a strong opinion on it, but maybe his ability to direct fledgling new divisions working on AI or self-driving cars or whatever could be higher-impact than building a search engine 20 years ago, if he commits sufficient time to it. (And maybe he does, for all I know.)

Comment by Samuel Shadrach (samuel-shadrach) on Is moral duty/blame irrational because a person does only what they must? · 2021-10-17T05:29:53.783Z · LW · GW

Still unsolved, even on this site. Yudkowsky has cool articles though; I liked the one on zombies. https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies

Comment by Samuel Shadrach (samuel-shadrach) on Forget AGI alignment, are you aligned with yourself? · 2021-10-17T05:17:12.494Z · LW · GW

Thanks for the reply. Makes sense! If your differences are not worth dying for, then you will end up finding ways to work together.

Comment by Samuel Shadrach (samuel-shadrach) on Is nuking women and children cheaper than firebombing them? · 2021-10-16T08:30:25.775Z · LW · GW

The point is you will have to answer that question - whether after deliberation or instinctively - and then move on to a hundred other decisions and calculations relevant to the situation. And if you fumble too much, your superior might replace you with someone who is more prepared to make such decisions.

Comment by Samuel Shadrach (samuel-shadrach) on Forget AGI alignment, are you aligned with yourself? · 2021-10-16T08:15:49.568Z · LW · GW

Most disagreements could be easily solved by each of us taking half.

Could you please elaborate on this?

Self preservation isn't worth risking to make a few changes to the copy's plans.

Would this mean you personally value your own life pretty highly (relative to the rest of humanity)?

Hedonism is fun and destruction is easy, but creation is challenging and satisfying in a way neither of them are.

Makes sense, can totally relate!

Comment by Samuel Shadrach (samuel-shadrach) on Forget AGI alignment, are you aligned with yourself? · 2021-10-15T15:16:17.638Z · LW · GW

Yup, makes sense. But I also feel the "toy agent model" of terminal and instrumental preferences has real-life implications (even though it may not be the best model). Namely, that you will always value yourself over your clone for instrumental reasons if you're not perfect clones. And I also tend to feel the extent to which you value yourself over your clone will be high in such high-stakes situations.

Keen on anything that would weaken / strengthen my intuitions on this :) 

Comment by Samuel Shadrach (samuel-shadrach) on Actually possible: thoughts on Utopia · 2021-10-15T15:11:31.606Z · LW · GW

Just to clarify, by utopia here I mean positivity/valence of actual experience, rather than just technological superiority. So Nazis might think they will reach utopia once they have technological superiority, but it wouldn't be utopia in my book until they're also much happier than people today.

I would hope desires to exterminate races are not stable in "happy" societies, but the truth is I don't really know. (If they're not stable, I'm assuming either self-destruction or psychological rewiring until they don't want to kill people.)

Comment by Samuel Shadrach (samuel-shadrach) on Forget AGI alignment, are you aligned with yourself? · 2021-10-15T15:07:38.589Z · LW · GW

Thanks for replying!
I generally agree with your intuition that similar people are worth cooperating with, but I also feel like when the stakes are high this can break down. Maybe I should have defined the hypothetical such that you're both co-rulers or something until one kills the other.

Cause like - the worst case in a fight is that you lose and the clone does what they want, which is already not that awful (probably), and that outcome is guaranteed anyway. But you may still believe you have something non-trivially better to offer than this worst case. And you may be willing to fight for it. (Just my intuitions :p)

Do you have thoughts on what you'll do once you're the ruler?

Comment by Samuel Shadrach (samuel-shadrach) on Actually possible: thoughts on Utopia · 2021-10-14T15:57:37.856Z · LW · GW

War doesn't necessarily prevent utopias from being created. Nor do bad rulers - I'm sure Nazis would work towards utopia too if they knew it were possible. The only thing we know for certain will prevent utopias from being created is the existential risks identified in the post.

Comment by Samuel Shadrach (samuel-shadrach) on Forget AGI alignment, are you aligned with yourself? · 2021-10-14T11:14:18.137Z · LW · GW

Interesting point!

But if human preferences make reference to self, then those preferences are also relevant to the AGI alignment problem (trying to make the AI have the same preferences that humans have).

Although I guess my example was also about:
Even if a human's terminal preferences do not make reference to self, they will still instrumentally value self and not the clone, because of a lack of trust in the clone's preferences being 100% identical.

Comment by Samuel Shadrach (samuel-shadrach) on Forget AGI alignment, are you aligned with yourself? · 2021-10-14T07:58:07.039Z · LW · GW

Thank you, I will check!

Comment by Samuel Shadrach (samuel-shadrach) on Forget AGI alignment, are you aligned with yourself? · 2021-10-14T07:57:24.749Z · LW · GW

Interesting. Let's say you both agree to leave the room. Would you later feel guilt looking at all the suffering in the world, knowing you could have helped prevent it? Be it genocides, world wars, misaligned AI, Zuckerberg becoming the next dictator, or something else.

Comment by Samuel Shadrach (samuel-shadrach) on Forget AGI alignment, are you aligned with yourself? · 2021-10-14T07:52:21.283Z · LW · GW

If you trust that the other person has identical goals to yours, will it matter to you who presses the button? Say you both race for the button and collide into each other, but miss. Will you now fight, or graciously let the other person press it?

Comment by Samuel Shadrach (samuel-shadrach) on Book Review: Philosophical Investigations by Wittgenstein · 2021-10-13T04:57:07.760Z · LW · GW

Thanks for the review!

Quick feedback: it's a bit long. I only read half, and I'm sure I'm not the only one.

But otherwise definitely a good review.

Comment by Samuel Shadrach (samuel-shadrach) on Apprenticeship Online · 2021-10-11T04:21:01.810Z · LW · GW

A fairly new opinion of mine, growing increasingly strong, is exactly this:

Online environments can be consciously designed, just as physical ones can.

Economic incentives have so far forced everyone to keep their platforms simple, general-purpose and addictive. Like Facebook. Why spend more money building more features only to reduce the size of your target audience? But once the general-purpose social media space gets saturated, people will increasingly design online social environments for smaller niches and take advantage of the nuances of that environment.

Reddit started this by providing mods with the ability to customise; Discord has taken it to the next level. I wouldn't be surprised if the next tool is even more customisable than Discord.

Comment by Samuel Shadrach (samuel-shadrach) on Steelman arguments against the idea that AGI is inevitable and will arrive soon · 2021-10-11T04:12:42.135Z · LW · GW

How about the chances of there being a war if any country comes anywhere close to AGI? How about a pre-emptive global ban on AGI research, like how human gene editing is banned? As long as AGI research is highly compute-intensive, a ban would be feasible to implement.

Comment by Samuel Shadrach (samuel-shadrach) on Shoulder Advisors 101 · 2021-10-10T16:29:36.864Z · LW · GW

Interesting. Did you have shoulder advisors when you were younger?

Comment by Samuel Shadrach (samuel-shadrach) on What do determinists here think about free will and Chalmer's hard problem of consciousness? · 2021-10-10T14:13:55.978Z · LW · GW

when their words boil down to a pure appeal to intuition, we should not engage with it

I don't think this is true; I see intuitions as more fundamental to phenomenology than, say, math or logic. I can imagine a conscious person who sucks at reasoning but still has intuition; I can't imagine a person who has logic but no intuition to guide them.

Comment by Samuel Shadrach (samuel-shadrach) on Shoulder Advisors 101 · 2021-10-10T14:02:39.121Z · LW · GW

Thanks for the reply!

I feel uncomfortable making my shoulder advisors make incorrect predictions, and I do value correctness. I feel I have a natural tendency to dismiss people as boring or unintelligent, and I therefore counteract this with a conscious effort to remind myself that people have more to offer than I think they do. And this attitude has helped me. So if I try modelling a response of someone I care about and consider smart, I know there's a good chance their response will be something I haven't guessed already, and I don't want to force myself to guess incorrectly or typify them. If absolutely pressed, I could enumerate categories of thoughts or something, along with probabilities - but that would be a conscious effort. I can, however, predict emotional responses, like: will this person even bother to read a wall of text, will they be annoyed I sent it to them, will they read it and reply with something smart. (To answer your suggestion in the last para.)

re: hypothesis 3, yep, it definitely sounded like what you were suggesting was intended to be low-effort, hence I mentioned that it feels high-effort to me.

re: hypothesis 2, if I may elaborate: it's not that I don't care, it's just that the caring is at a conscious level, and therefore if I must model someone because of this care, it feels like effort. To take a slightly different example: I have learnt the "double diamond" design-thinking process; I think it is useful, and yet I need to expend energy applying it to a problem - it isn't my natural way of approaching a problem.

Comment by Samuel Shadrach (samuel-shadrach) on Shoulder Advisors 101 · 2021-10-10T13:49:15.294Z · LW · GW

I'm honestly not sure.

I remember having full-blown conversations in my head with people I'm close to, but they feel more like a fear-driven response where I'm having to defend myself against them, rather than working with them on anything. Or maybe I'm just anxious and working out the courage before I ask them irl (the thing being asked could be very mundane, or could be important).

Another reason to model people would be to figure out who I find annoying, interesting, worth befriending, etc. I feel like that happens very automatically - first I experience the annoyance or the interesting thing in conversation irl, then I make a mental note of it. So the next time they come up in my mind, I can reason about the such-and-such thing I found annoying, and what underlying personality trait or ideological difference it indicates. So I reason more in my own words describing them, rather than in "their words".

I can't remember any other reason I would model someone.

If it helps, I'm 20 and have struggled with social interactions in the past; possibly I will just develop this skill over time.

Comment by Samuel Shadrach (samuel-shadrach) on Shoulder Advisors 101 · 2021-10-09T14:57:29.033Z · LW · GW

Interesting concept, but idk if it works for me personally.

Hypotheses:

 - I don't know anyone deeply enough to predict their responses or perspectives with high accuracy

 - I don't care about anyone's perspectives on my decisions enough to want to model them (even though I admit they might be useful)

 - Modelling other people this way takes too much energy

Comment by Samuel Shadrach (samuel-shadrach) on Are democratic elections incompetence-as-a-service? · 2021-10-09T08:12:05.009Z · LW · GW

True!

Comment by Samuel Shadrach (samuel-shadrach) on Does any of AI alignment theory also apply to supercharged humans? · 2021-10-07T16:46:56.006Z · LW · GW

Thank you for this! Will read.

Comment by Samuel Shadrach (samuel-shadrach) on Reductive Reference · 2021-10-07T15:06:01.437Z · LW · GW

This makes the map more real than the territory imo.

Comment by Samuel Shadrach (samuel-shadrach) on Reductive Reference · 2021-10-07T15:05:15.901Z · LW · GW

The map is the only thing telling you that the territory exists. What are your thoughts on solipsism or evil demons?

Comment by Samuel Shadrach (samuel-shadrach) on Reductionism · 2021-10-04T16:19:43.465Z · LW · GW

a disbelief that the higher levels of simplified multilevel models are out there in the territory.

This assumes the existence of a territory. Do you use reductionism to prove this existence, or do you appeal to intuition, or something else?

Comment by Samuel Shadrach (samuel-shadrach) on Defeating Ugh Fields In Practice · 2021-10-04T13:36:35.131Z · LW · GW

Imo there are likely geographies where this idea can be tested at small scale, and if it is proven effective, then we can get to convincing the larger public. Doesn't seem intractable to me.

Comment by Samuel Shadrach (samuel-shadrach) on When Money Is Abundant, Knowledge Is The Real Wealth · 2021-10-04T05:50:19.259Z · LW · GW

assume that you will hold your knowledge back from the public/market, which is basically unethical


I don't see why this holds - all knowledge being publicly accessible can sometimes be dangerous to the public good. As for private good, capitalist society itself is founded on informational asymmetries: knowledge of your business sector, knowledge about your employees, specific scientific knowledge which you patent, and so on.

The context here seems to be more basic / overview knowledge of a scientific sector - the kind of thing you'd find in a literature review. A lot of people have such overview knowledge in some sector; not everyone bothers to share it.

Comment by Samuel Shadrach (samuel-shadrach) on Samuel Shadrach's Shortform · 2021-10-03T05:21:54.396Z · LW · GW

If you define bad = inconsistent as an axiom, then yes, trivial proof. If you don't define bad, you can't prove anything. You can't capture the intuitive notion of bad using FOL.

Comment by Samuel Shadrach (samuel-shadrach) on Transhumanism as Simplified Humanism · 2021-10-02T17:43:25.023Z · LW · GW

Also: As a 20-year-old myself, I'm not sure if I'd lead a happier life if I died at 40 or 60. Although I'm sure that if I was 39 and dying, I'd probably prefer living further over dying. What I'm optimising for (a utility function, if you really wanna call it that) is something that varies with time and circumstance, and having knowledge of this fact doesn't create in me any desire to force consistency across times or circumstances.

Therefore I'd personally be looking into life-extension tech only nearing old age.

Comment by Samuel Shadrach (samuel-shadrach) on Transhumanism as Simplified Humanism · 2021-10-02T17:34:30.979Z · LW · GW

But a moral philosophy should not have special ingredients.

Why ever not? Human morality is deeply at odds with anything rational or consistent. There's a reason utilitarianism looks absurd to most people: it attempts to force consistency in a domain where consistency doesn't exist as strongly. Deontological ethics are a lot closer to what most people believe and practice in their day-to-day lives, and those rules are not simple - nor are they reasoned out in a social vacuum.

Comment by Samuel Shadrach (samuel-shadrach) on [Book review] Gödel, Escher, Bach: an in-depth explainer · 2021-10-02T16:25:35.356Z · LW · GW

Probably a variant of the Church-Turing thesis?

Most intuitive deduction processes will end up like ZFC, the same way most intuitive notions of how a computer should run end up being expressible as families of Turing machines. So ZFC incompleteness will still apply.

There is a bunch of metamathematics around which axioms can be added to or deleted from ZFC, and what their philosophical implications could be. So yeah, people have tried making systems that admit reasoning that can't be expressed in ZFC and yet is useful or intuitive in some sense.

Comment by Samuel Shadrach (samuel-shadrach) on Timeless Causality · 2021-10-02T14:03:18.725Z · LW · GW

Second derivatives are not sufficient in QM; you need to know the n-th order derivative for all n, I think.

Comment by Samuel Shadrach (samuel-shadrach) on Mind Projection Fallacy · 2021-10-02T13:32:37.991Z · LW · GW

This assumes an objective understanding of what "bias" is, though. Not sure that exists.