Posts

How to cite LessWrong as an academic source? 2024-11-06T08:28:26.309Z
Middle Child Phenomenon 2024-03-15T20:47:25.233Z
The Altman Technocracy 2024-02-16T13:27:36.883Z
How to develop a photographic memory 3/3 2024-02-08T09:22:07.918Z
Does LessWrong make a difference when it comes to AI alignment? 2024-01-03T12:21:32.587Z
How to develop a photographic memory 2/3 2023-12-30T20:18:14.255Z
How to develop a photographic memory 1/3 2023-12-28T13:26:36.669Z

Comments

Comment by PhilosophicalSoul (LiamLaw) on Using hex to get murder advice from GPT-4o · 2024-11-13T19:26:23.572Z · LW · GW

Tried this method and, interestingly, it seems to have been patched.

It began responding to all hex numbers with:

4e 6f 74 65 64, or "Noted."

When asked why it kept doing that, it said:

49 20 61 6d 20 6e 6f 74 20 73 75 72 65 2e or "I am not sure."
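For anyone who wants to verify the decoding, here's a minimal Python sketch (my own illustration, not part of the original exchange) that converts the space-separated hex strings above back into ASCII:

```python
# Hypothetical helper for checking the hex strings quoted above.
def decode_hex(s: str) -> str:
    """Decode a space-separated hex string like '4e 6f 74 65 64' into ASCII text."""
    return bytes.fromhex(s.replace(" ", "")).decode("ascii")

print(decode_hex("4e 6f 74 65 64"))                             # -> Noted
print(decode_hex("49 20 61 6d 20 6e 6f 74 20 73 75 72 65 2e"))  # -> I am not sure.
```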

Comment by PhilosophicalSoul (LiamLaw) on Thomas Kwa's Shortform · 2024-11-08T11:14:42.813Z · LW · GW

Answering this from a legal perspective:

 

What is the easiest and most practical way to translate legalese into scientifically accurate terms, thus bridging the gap between AI experts and lawyers? Stated differently, how do we move from localised papers that only work in law or AI fields respectively, to papers that work in both?

Comment by PhilosophicalSoul (LiamLaw) on Saul Munn's Shortform · 2024-11-08T11:12:35.324Z · LW · GW

Glad somebody finally made a post about this. I experimented with the distinction in my trio of posts on photographic memory a while back.

Comment by PhilosophicalSoul (LiamLaw) on Does LessWrong make a difference when it comes to AI alignment? · 2024-11-08T11:10:50.008Z · LW · GW

I was naive during the period in which I made this particular post. I'm happy with the direction LW is going in, having experienced more of the AI world, and read many more posts. Thank you for your input regardless.

Comment by PhilosophicalSoul (LiamLaw) on How to cite LessWrong as an academic source? · 2024-11-07T07:29:28.238Z · LW · GW

you don't think it's "snobbish or discriminatory" to pretend it's something more because you count yourself among its users?

 

Fair point. I had already provided special justification for it but I agree with your reasoning, so I'll leave it out. Thanks for the example.

Comment by PhilosophicalSoul (LiamLaw) on How to cite LessWrong as an academic source? · 2024-11-06T17:02:49.290Z · LW · GW

AI alignment mostly; I'm seeking to bridge the gap between AI and law. LW has unique takes and often serves as the origin point for ideas on alignment (even if they aren't cited by mainstream authors). Whether this site's purpose is to be cited or not is debatable. On a pragmatic level, though, there are simply discussions here that can't be found anywhere else.

Comment by PhilosophicalSoul (LiamLaw) on Philosophers wrestling with evil, as a social media feed · 2024-06-04T16:05:12.897Z · LW · GW

This is hilarious, and I'm sure took a lot of time to put together.

It likely isn't receiving the upvotes it deserves because the humour is so niche, and, well, LessWrong is more logic and computer-science-y than philosophy at the moment.

Thank you!

PS: Would love to see more posts in the future that incorporate emojis, example: 

Apollodorus: anybody wonder why the vegetarians are online so often? If they love the natural world so much, I say they should be getting more mouthfuls of grass than anybody! 
[✅9, including Asimov, Spinoza and others.]

Socrates: The same can be said of you and your whining Apollodorus, but of course you are always exempt from your own criticisms!
[😂31, including Pythagoras, Plato and others.]

Comment by PhilosophicalSoul (LiamLaw) on Inviting discussion of "Beat AI: A contest using philosophical concepts" · 2024-05-29T14:23:59.499Z · LW · GW

'...and give us a license to use and distribute your submissions.'

For how many generations would humans be able to outwit the AI until it outwits us?

Comment by PhilosophicalSoul (LiamLaw) on OpenAI: Fallout · 2024-05-29T14:11:21.003Z · LW · GW

It seems increasingly plausible that it would be in the public interest to ban non-disparagement clauses more generally going forward, or at least set limits on scope and length (although I think nullifying existing contracts is bad and the government should not do that and shouldn’t have done it for non-competes either.)

I concur.

It should be noted, though: we can spend all day taking apart these contracts and applying pressure publicly, but real change will have to come from the courts. I await an official judgment to see the direction of this issue. Arguably, the outcome there is more important for any alignment initiative run by a company than technical goals are (at the moment).

How do you reconcile keeping genuine cognitohazards away from the public while also maintaining accountability & employee health? Is there a middle ground that justifies the existence of NDAs & NDCs?

Comment by PhilosophicalSoul (LiamLaw) on Building intuition with spaced repetition systems · 2024-05-26T18:19:34.422Z · LW · GW

Would love to collaborate with you on a post; check out my posts and let me know.

Comment by PhilosophicalSoul (LiamLaw) on What mistakes has the AI safety movement made? · 2024-05-24T22:13:48.905Z · LW · GW

I think these points are common sense to an outsider. I don't mean to be condescending, I consider myself an outsider. 

I've been told that ideas on this website are sometimes footnoted by people like Sam Altman in the real world, but they don't seem to ever be applied correctly. 

  1. It's been obvious from the start that not enough effort was put into getting buy-in from the government. Now, its strides have become oppressive and naive (the AI Act is terribly written and unbelievably complicated; it'll be three to five years before it's ever implemented). 
  2. Many of my peers, whom I've introduced to some arguments on this website and who don't know what alignment research is, identified many of these 'mistakes' at face value. LessWrong got into a terrible habit of fortifying an echo chamber of ideas that only worked on LessWrong. No matter how good an idea is, if it cannot be explained simply to the average layperson, it will be discarded as obfuscatory.
  3. Hero worship & bandwagons seem to be problems with the LessWrong community inherently, rather than something unique to the alignment movement (again, I haven't been here long; I'm simply referring to posts by long-time members critiquing the cult-like mentalities that tend to appear). 
  4. Advocating for a pause: well, duh. The genie is out of the bottle; there's no putting it back. We literally cannot go back, because those with the power to change things are riding a gravy train of money and aren't going to give that up.

I don't see these things as mistakes but rather as common-sense byproducts of the whole "we were so concerned with whether we could, we didn't ask whether we should" idea. The LessWrong community literally couldn't help itself; it just had to talk about these things as rationalists of the 21st century.

I think... well, I think there may be a 10-15% chance these mistakes are rectified in time. But the public already has a warped perception of AI, divided along political lines. LessWrong could change if there were a concerted effort, but would the counterparts who read LessWrong also follow? I don't know.

I want to emphasise here, since I've just noticed how many times I mentioned LW, that I'm not demonising the community. I'm simply saying that, from an outsider's perspective, this community held promise as the vanguard of a better future. Whatever ideas it planted in the heads of those at the top a few years ago, in the beginning stages of alignment, could've been seeded better. LW is only a small cog in the massive machine that is currently outputting a thousand mistakes a day.

Comment by PhilosophicalSoul (LiamLaw) on AI #65: I Spy With My AI · 2024-05-24T22:00:55.989Z · LW · GW

It was always going to come down to the strong arm of the law to beat AI companies into submission. I was under the impression that attempts at alignment or internal company restraints were hypothetical thought experiments (no offence). This has been the reality of the world with all inventions, not just AI.

Unfortunately, both sides (lawyers & researchers) seem unwilling to find a middle ground that accommodates the strengths of each and mitigates the misunderstandings in both camps.

Feeling pessimistic after reading this.

Comment by PhilosophicalSoul (LiamLaw) on Request for comments/opinions/ideas on safety/ethics for use of tool AI in a large healthcare system. · 2024-05-24T21:52:02.844Z · LW · GW

I genuinely think it could be one of the most harmful and dangerous ideas known to man. I consider it to be a second head on the hydra of AI/LLMs. 

Consider the fact that we already have multiple scandals of fake research coming from prestigious universities (papers that were referenced by other papers, and so on). This builds an entire tree of fake knowledge which, if left unaudited, would be treated as a legitimate epistemic foundation upon which to teach future students, scholars and practitioners.

Now imagine applying this to something like healthcare. Instead of human eyes (which do make mistakes, but usually for reasons other than pre-programmed generalisations) scanning over the information, absorbing it and adapting accordingly, we have an AI/LLM. Such an entity may be correct 80% of the time in analysing whatever cancer growth or disease it's been trained on over millions of generations. What about the other 20%?

What implications does this have for insurance claims, where an AI makes a presumption about the degree of risk in a person's health built on flawed data? What impact does this have on triage? Who takes responsibility when the AI makes a mistake? (And I know of no well-regarded legal practitioner who has yet substantively tackled this consciousness problem in law.)

It's also pretty clear that AI companies don't give a damn about privacy. They may claim to, but they don't. At the end of the day, these AI companies are fortified behind oppressive terms & conditions, layers of technicalities, and huge all-star lawyer teams that take hundreds of thousands of dollars at minimum to defeat. Accountability is an ideal put beyond reach by strong-arm litigation against the 'little guy', the average citizen.

I'm not shitting on your idea. I'm merely outlining the reality of things at the moment.

When it comes to AI; what can be used for evil, will be used for evil. 

Comment by PhilosophicalSoul (LiamLaw) on Ilya Sutskever and Jan Leike resign from OpenAI [updated] · 2024-05-17T08:18:51.911Z · LW · GW

In my opinion, a class action filed by all employees allegedly prejudiced (I say allegedly here, reserving the right to change 'prejudiced' in the event that new information arises) by the NDAs and gag orders would be very effective.

Were they to seek termination of these agreements on the basis of public interest in an arbitral tribunal, rather than through a court or internal bargaining, the ex-employees would be far more likely to get compensation. The litigation costs of legal practitioners there also tend to be far lower.

Again, this assumes that the agreements they signed didn't also waive the right to class action arbitration. If OpenAI does have agreements this cumbersome, I am worried about the ethics of everything else they are pursuing.


For further context, see:

Comment by PhilosophicalSoul (LiamLaw) on William_S's Shortform · 2024-05-13T14:00:51.359Z · LW · GW

I have reviewed his post. Two (2) things to note: 

(1) Invalidity of the NDA does not guarantee William will be compensated after the trial. Even if he is, his job prospects may be hurt long-term. 

(2) States have different laws on whether the NLRA trumps internal company memoranda. More importantly, labour disputes are traditionally resolved through internal bargaining. Presumably, a collective bargaining 'hand-off' involving NDAs and gag orders at this level will waive subsequent litigation in district courts. The precedent Habryka offered refers to hostile severance agreements only, not the waiving of the dispute mechanism itself.

I honestly wish I could use this dialogue as a discreet communication to William on a way out, assuming he needs help, but I re-affirm my previous worries about the costs.

I also add here, rather cautiously, that there are solutions. However, they would depend on whether William was an independent contractor, how long he worked there, whether a trade secret was actually involved (as others have mentioned), and so on. The whole reason NDAs tend to be so effective is that they obfuscate the material needed to even know what remedies are available.
 

Comment by PhilosophicalSoul (LiamLaw) on Deep Honesty · 2024-05-08T03:55:52.926Z · LW · GW

I'm so happy you made this post. 

I only have two (2) gripes. I say this as someone who 1) practices/believes in determinism, and 2) has interacted with journalists on numerous occasions with a pretty strict policy on honesty.

1. "Deep honesty is not a property of a person that you need to adopt wholesale. It’s something you can do more or less of, at different times, in different domains."

I would disagree. In my view, 'deep honesty' excludes dishonesty by omission. You're either truthful all of the time or manipulative some of the time; there's no middle ground.

2. "Fortunately, although deep honesty has been described here as some kind of intuitive act of faith, it is still just an action you can take with consequences you can observe.

Not always. If everyone else around you takes the mountain-of-deceit approach, your options are limited. The 'rewards' available for omissions are far smaller, and if you want a reasonably productive work environment, at least someone has to tell the truth unequivocally. Further, the consequences are not always immediately observable when you're dealing with practiced liars; they can come in the form of revenge months or even years later.

Comment by PhilosophicalSoul (LiamLaw) on William_S's Shortform · 2024-05-05T11:29:28.558Z · LW · GW

I am a lawyer. 

I think one key point that is missing is this: regardless of whether the NDA and the subsequent gag order are legitimate or not, William would still have to spend thousands of dollars on a court case to vindicate his rights. This sort of strong-arm litigation has become very common in the modern era. It's also just... very stressful. If you've just resigned from a company you probably used to love, you likely don't want to drag all of your old friends, bosses and colleagues into a court case.

Edit: also, if William left for reasons involving AGI safety, maybe entering into (what would likely be a very public) court case would be counterproductive to his reason for leaving? You probably don't want to alarm the public by couching existential threats in legal jargon. American judges have the annoying tendency to valorise themselves as celebrities when confronting AI (see Musk v. OpenAI).

Comment by PhilosophicalSoul (LiamLaw) on Best in Class Life Improvement · 2024-04-04T20:00:46.163Z · LW · GW

I've been using nootropics for a very long time. A couple of things I've noticed:

1) There's little to no patient-focused research that is insightful. The research papers written on nootropics are written from an outside perspective, typically by a disinterested grad student. In my experience, the descriptions used, the symptoms described, and the periods allocated are completely incorrect;

2) If you don't actually have ADHD, the side-effects are far worse, especially with long-term usage. In my personal experience, those who use nootropics without a diagnosis are more prone to (a) addiction, (b) unexpected/unforeseen side-effects, and (c) a higher chance of psychosis or comparable symptoms;

3) There seems to be an upward curve of over-rationalising ordinary symptoms the longer you use nootropics. Of course, with nootropics you're inclined to read more, and do things that will naturally increase your IQ and neuroplasticity. As a consequence, you'll begin to overthink whether the drugs you're taking are good for you or not. You'll doubt your abilities more and be sceptical as to where your 'natural aptitude' ends, and your 'drug-heightened aptitude' begins.

The bottom line is: if you're going to start taking them, be very, very meticulous about keeping a daily journal of everything you thought, experienced and did. Avoid nootropics if you don't have ADHD.

Comment by PhilosophicalSoul (LiamLaw) on Your LLM Judge may be biased · 2024-03-30T18:40:15.299Z · LW · GW

Do you think there's something to be said about an LLM feedback vortex? As in, teachers using AIs to check students' work that was itself created by AI, or judges using AIs to filter through counsel's arguments that were also written by AI?

I feel like your recommendations could be paired nicely with some in-house training videos, and with external regulations that limit the degree or percentage of AI involvement: some kind of threshold or 'person limit', like elevators have. How could we measure the 'presence' of LLMs across the board in any given scenario?

Comment by PhilosophicalSoul (LiamLaw) on Increasing IQ by 10 Points is Possible · 2024-03-21T18:24:59.818Z · LW · GW

I didn't get that impression at all from '...for every point of IQ gained upon retaking the tests...', but to each their own interpretation, I guess.

I just don't see how you can feasibly account for a practice effect when retaking the IQ test is itself directly linked to the increased score you're bound to get.

Comment by PhilosophicalSoul (LiamLaw) on Increasing IQ by 10 Points is Possible · 2024-03-20T19:28:59.222Z · LW · GW

You do realise that simply taking the IQ test more than once will result in a higher IQ score? I wouldn't be surprised at all if placebo and muscle memory account for a 10-20 point difference.

Edit: surprised at how much this is getting downvoted when I'm absolutely correct. Even professional IQ-testing centres factor in whether someone has taken the test before, to account for practice effects. There's a guy (I can't recall his name) who takes an IQ test once a year (might be in the Guinness Book of World Records, not sure) and has gone from 120 to 150 IQ.

Comment by PhilosophicalSoul (LiamLaw) on Middle Child Phenomenon · 2024-03-16T16:21:14.336Z · LW · GW

Middle child syndrome is the belief that middle children are excluded, ignored, or even outright neglected because of their birth order. According to the lore, some children may have certain personality and relationship characteristics as a result of being the middle child.

Alignment researchers are the youngest child, programmers/OpenAI computer scientists are the eldest child, and law students/lawyers are the middle child; pretty simple.

It doesn't matter whether you use 10,000 students or 100; the percentage remains embarrassingly small either way. I've simply used the categorisation to illustrate quickly to non-lawyers what the general environment currently looks like.

"golden children" is a parody of the Golden Circle, a running joke that you need to be perfect, God's gift to earth sort of perfect, to get into a Big 5 law firm in the UK.

Comment by PhilosophicalSoul (LiamLaw) on AI Rights: In your view, what would be required for an AGI to gain rights and protections from the various Governments of the World? · 2024-03-15T09:52:56.828Z · LW · GW

Here's an idea: 
 

Let's not give the most objectively dangerous, sinister, intelligent piece of technology Man has ever devised any rights or leeway in any respect. 

 

The genie is already out of the bottle; you want to be the ATC and guide its flight towards human extinction? That's your choice.

 

I, on the other hand, wish to be Manford Torondo when the historians get to writing about these things.

Comment by PhilosophicalSoul (LiamLaw) on The Altman Technocracy · 2024-02-19T11:35:36.342Z · LW · GW

I used 'Altman' since he'll likely be known as the pioneer who started it. I highly doubt he'll be the Architect behind the dystopian future I prophesise. 

In respect of the second, I simply don't believe that to be the case.

The third is inevitable, yes.

I would hope that 'no repair' laws and equal access to CPU chips will come about. I don't think this will happen, though. The demands of the monopoly/technocracy will outweigh the demands of the majority.

Comment by PhilosophicalSoul (LiamLaw) on The Altman Technocracy · 2024-02-17T06:32:38.274Z · LW · GW

Sure. I think in an Eliezer reality what we'll get is more of a ship-pushed-onto-the-ocean scenario. As in, Sam Altman, or whoever is leading the AI front at the time, will launch an AI/LLM filled with some of what I've hinted at. Once it's out on the ocean, though, the AI will do its own thing. In the interim before it learns to do that, I think there will be space for manipulation.

Comment by PhilosophicalSoul (LiamLaw) on The Altman Technocracy · 2024-02-16T15:07:10.847Z · LW · GW

The quote's from Plato, Phaedrus, page 275, for anyone wondering. 

Great quote.

Comment by PhilosophicalSoul (LiamLaw) on The Altman Technocracy · 2024-02-16T13:43:54.727Z · LW · GW

Amazing question.

I think common sense would suggest that these toddlers at least have a chance later in life to grow human connections through therapy, personal development, etc. The negative effects on their social skills and empathy, and the reduction in grey matter, can be repaired.

This is different in the sense that the cause of the issues will be less obvious and far more prolonged. 

I imagine a dystopia in which the technocrats are puppets manoeuvring the influence AI has: from the buildings we see to the things we hear, all by design and none of it voluntarily chosen.

In contrast, technocrats will nurture technocrats; the cycle goes on. This is comparable to the TikTok CEO commenting that he doesn't let his children use TikTok (among other reasons, I know).

Comment by PhilosophicalSoul (LiamLaw) on Masterpiece · 2024-02-16T09:36:13.057Z · LW · GW

MMASoul, this competition is real. You've already undergone several instances of qualia splintering. I guess we'll have to start over, *sigh*.

 

This is test #42, new sample of MMAvocado. Alrighty, this is it.

 

MMASoul has a unique form of schizosyn: a 2044 phenomenon in which synaesthesia and schizophrenia have combined in the subject due to intense exposure to gamma rays and an unhealthy number of looped F.R.I.E.N.D.S episodes. In this particular iteration, MMASoul believes it is "reacting" to a made-up competition instead of a real one. Noticeably, MMASoul had their eyes closed the entire time, instead reading braille from the typed keys.

Some members of our STEM Club here at the University think this can generate entirely unique samples of MMAvocado, which will be shared freely among other contestants. Further, we shall put MMASoul to work in making submissions of what MMAvocado would have created if he had actually entered this competition. 

PS: MMASoul #40 clicked on the 'Lena' link and had to be reset and restrained due to mild psychosis.

Comment by PhilosophicalSoul (LiamLaw) on Masterpiece · 2024-02-16T09:18:53.012Z · LW · GW

This was so meta and new to me that I almost thought it was a legitimately real competition. I had to do some research before I realised 'qualia splintering' is a made-up term.

Comment by PhilosophicalSoul (LiamLaw) on How to develop a photographic memory 3/3 · 2024-02-16T08:54:09.845Z · LW · GW

At the moment, I just don't see the incentive of doing something like this. I was hoping to make the technique more efficient through community feedback: to see whether it gives only me a photographic memory, etc. Mnemonics just isn't something that interests LW at the moment, I guess.

Additionally, my previous two (2) posts were stolen by a few AI YouTubers. I'd prefer the technique I revealed in this third post not to be stolen too.

I'm pursuing sample data elsewhere in the meantime to test efficacy. 

My work seems to have been spread across the internet regardless; oh well. As a result, I've restored the previous version.

Comment by PhilosophicalSoul (LiamLaw) on I played the AI box game as the Gatekeeper — and lost · 2024-02-13T10:50:47.025Z · LW · GW

That last bit is particularly important methinks. 

If a game begins with the notion that it'll be posted online, one of two things, or both, will happen. Either (a) the AI is constrained in the techniques they can employ, unwilling to embarrass themselves or the Gatekeeper in front of a public audience (especially when it comes down to personal details), or (b) the Gatekeeper now has a HUGE incentive not to let the AI out, to avoid being known as the sucker who let the AI out...

Even if you could solve this by changing details and anonymising, it seems to me that the techniques are so personal and specific that changing them in any way would make the entire dialogue make even less sense.

The only other solution is to have a third party monitor the game and post it without consent (which is obviously unethical, but probably the only real way you could get a truly authentic transcript).

Comment by PhilosophicalSoul (LiamLaw) on I played the AI box game as the Gatekeeper — and lost · 2024-02-13T06:50:11.067Z · LW · GW

I found this post meaningful, thank you for posting. 

 

I don't think it's productive to comment on whether the game is rational, or whether it's a good mechanism for AI safety until I myself have tried it with an equally intelligent counterpart. 

 

Thank you.

Edit: I suspect that the reason the AI Box experiment tends to have so many AI players winning is exactly the ego of the Gatekeeper, who always thinks "there's no way I could be convinced."

Comment by PhilosophicalSoul (LiamLaw) on Tort Law Can Play an Important Role in Mitigating AI Risk · 2024-02-13T06:39:33.163Z · LW · GW

Unfortunately, there are two significant barriers to using tort liability to internalize AI risk. First, under existing doctrine, plaintiffs harmed by AI systems would have to prove that the companies that trained or deployed the system failed to exercise reasonable care. This is likely to be extremely difficult to prove since it would require the plaintiff to identify some reasonable course of action that would have prevented the injury. Importantly, under current law, simply not building or deploying the AI systems does not qualify as such a reasonable precaution. 


Not only this, but it will require extremely expensive discovery procedures which the average citizen cannot afford. And that assumes you can overcome the technical barrier of: "But what specifically in our files are you looking for? What about our privacy?"

Second, under plausible assumptions, most of the expected harm caused by AI systems is likely to come in scenarios where enforcing a damages award is not practically feasible. Obviously, no lawsuit can be brought after human extinction or enslavement by misaligned AI. But even in much less extreme catastrophes where humans remain alive and in control with a functioning legal system, the harm may simply be so large in financial terms that it would bankrupt the companies responsible and no plausible insurance policy could cover the damages.

I think joint & several liability regimes will resolve this, in the sense that it's not 100% the company's fault; liability will be shared by the programmers, the operator, and the company.

Courts could, if they are persuaded of the dangers associated with advanced AI systems, treat training and deploying AI systems with unpredictable and uncontrollable properties as an abnormally dangerous activity that falls under this doctrine.

Unfortunately, in practice, what will really happen is that 'expert AI professionals' will be hired to advise old legal professionals on what's considered 'foreseeable'. This is susceptible to the same corruption, favouritism and ignorance we see in ordinary crimes. I think ultimately we'll need lawyers who specialise in both AI and law to really solve this.

The second problem of practically non-compensable harms is a bit more difficult to overcome. But tort law does have a tool that can be repurposed to handle it: punitive damages. Punitive damages impose liability on top of the compensatory damages the plaintiffs in successful lawsuits get to compensate them for the harm the defendant caused them.

Yes. Here I ask: what about legal systems that use delictual law instead of tort law? The names, requirements and such are different. In other words, you'll get completely different legal treatment for international AIs. This opens a whole new can of worms that defeats legal certainty and the rule of law.

Comment by PhilosophicalSoul (LiamLaw) on Win Friends and Influence People Ch. 2: The Bombshell · 2024-02-09T05:07:22.991Z · LW · GW

I'm sceptical that the appreciation needs to be sincere. In a world full of fakes, social media, etc., I think people don't really stop to consider whether something is fake. They're happy to 'win' by accepting a statement or compliment as real, even if it's just politeness or corporate speak.

 

Even more concerning is that if you don't meet this insanely high threshold of 'compliment everyone, or stay quiet', you're interpreted as cold, harsh or critical. In reality, you're just being truthful and realistic in how you hand out appreciation.

Comment by PhilosophicalSoul (LiamLaw) on Chapter 1 of How to Win Friends and Influence People · 2024-02-09T05:03:03.510Z · LW · GW

"So they rationalize, they explain. They can tell you why they had to crack a safe or be quick on the trigger finger. Most of them attempt by a form of reasoning, fallacious or logical, to justify their antisocial acts even to themselves, consequently stoutly maintaining that they should never have been imprisoned at all." 

 

In some cases, maybe. What about Ted Kaczynski? Still fallacious? What about Edward Snowden? 

I think this post points out a more underlying issue, maybe several. 'Criminals' believe what they believe because of their genetics, their worldview, their upbringing and so forth. They cannot conceive of our realities. And so yes, it makes sense that, to them, they are the heroes. Perhaps they even have good reasons for it.

How can we, with our own parameters, judge criminals if we haven't experienced the life that shaped their beliefs? How does a criminal explain himself if his world is judged by the physics of another world he's never lived in? Is a criminal simply, as Camus describes in The Outsider, he who does not conform to the status quo?

Comment by PhilosophicalSoul (LiamLaw) on Epistemic Hell · 2024-02-08T13:59:30.434Z · LW · GW

Scavenger's Reign comes to mind for this post.

Comment by PhilosophicalSoul (LiamLaw) on How to deal with the sense of demotivation that comes from thinking about determinism? · 2024-02-08T13:37:58.520Z · LW · GW

'How do you motivate yourself?' 

What do you mean? This would imply that I decide to do something that requires motivation. In my worldview, everything follows after the other so quickly, so sequentially, that there isn't time to stop and go: 'How do I feel about this?' 

I go to the gym, yes. It's incredibly painful, yes. In my worldview, this would be a symptom of masochistic tendencies, either from stoic philosophy I've inherited or from figures I aspired to during childhood. I'm not sure; it might be useful to draw a mind map at some point and work out exactly what is deciding things for me. EDIT: notice, even now, I'd only draw this mind map because I've read your post, and I only found this post because it popped up randomly on my feed, and so on and so on.

As to whether I 'do things I don't want to do': again, I don't know what you mean by this. Some things might be imposed on me that set off some kind of unhappiness. I might be pushed into other things that happen to make me happy. I don't distinguish or preempt these events by how I feel about starting them, only by how I feel during them.

Comment by PhilosophicalSoul (LiamLaw) on How to deal with the sense of demotivation that comes from thinking about determinism? · 2024-02-08T11:59:28.476Z · LW · GW

Why is it demotivating? 

I've never believed in the concept of free will, ever. So when I matured and started seeing that everyone takes it for granted, I was more shocked than anything. We can just like... decide to do things for ourselves? That sounds utterly ridiculous to me. Everything is a domino effect from something that happened prior. Everything is influenced by your parents, upbringing, genetics, etc. Nothing is ever decided by you, and the belief that it is, is a symptom of human egoism.

 

Again I ask, why is that demotivating? Perhaps it's my worldview, and that I genuinely can't conceive of what it feels like to decide something for yourself. To me, this is freeing: if I do something, it's not my fault. It's the natural consequence of something that came before. Everything that happens is like wind hitting the sails of a boat. There's no need to stress, because no matter what I do, it's all accounted for, all predetermined.

 

Does that make life meaningless? Why? You still feel dopamine going off in your brain, don't you? What difference does it make that you weren't the one to make it happen?

Comment by PhilosophicalSoul (LiamLaw) on Brute Force Manufactured Consensus is Hiding the Crime of the Century · 2024-02-06T10:57:59.041Z · LW · GW

I would be very interested to see a broader version of this post that incorporates what I think to be the solution to this sort of hivemind thinking (Modern Heresies by @rogersbacon) and the way in which this is engineered generally (covered by AI Safety is dropping the ball on clown attacks by @trevor). Let me know if that's not your interest; I'd be happy to write it.

Comment by PhilosophicalSoul (LiamLaw) on How to develop a photographic memory 1/3 · 2024-02-04T18:10:07.875Z · LW · GW

Will do, thanks for the advice.

Comment by PhilosophicalSoul (LiamLaw) on How to develop a photographic memory 1/3 · 2024-02-04T18:09:30.996Z · LW · GW

Good points. I'll try to cover some of this in my final post. I unfortunately haven't tested this outside of my field, so it'll be difficult. But I assure you, I will try.

Comment by PhilosophicalSoul (LiamLaw) on How to develop a photographic memory 2/3 · 2024-02-03T11:46:52.809Z · LW · GW

Wow! 

Thanks for picking that up; I was in a rush when footnoting. Heinlein's 'Gulf' is what I intended to place there.

Thanks for those links, I hadn't even heard of Renshaw. I'll be editing it into the above.

Comment by PhilosophicalSoul (LiamLaw) on How to develop a photographic memory 2/3 · 2024-02-03T07:26:19.619Z · LW · GW

True! Hence why I'm creating this guide, and I don't critique people for doubting its outcome.

Comment by PhilosophicalSoul (LiamLaw) on How to develop a photographic memory 2/3 · 2024-02-02T11:26:36.310Z · LW · GW

I'd say that's because they aren't specifically asked. High performers tend to naturally have photographic memories, and so it's unnatural to conceive of anything else. 

The high performers I've spoken to didn't realise they had photographic memories until I pointed it out. One trick to test it is to talk to them and ask them about something from long ago. Sometimes their eyes will move right to left because they're reading a picture in their mind.

Comment by PhilosophicalSoul (LiamLaw) on Deep atheism and AI risk · 2024-01-06T12:12:45.196Z · LW · GW

I think (and you wouldn't be the first to do it, so this isn't personal) you have a very primitive understanding of theism. Dawkins's arguments against God were blissful child-like ignorance at best and wilful egoism at worst. They could each be easily rebutted and set aside on rational grounds. I struggle to follow along with this essay when its launching pad is built upon sand.

The suffering and evil present in the world have no bearing on God's existence. I've always failed to buy into that idea. Sure, it sucks. But it has no bearing on the metaphysical reality of a God. If God does not save children, yikes, I guess? What difference does it make? A creator as powerful as has been hypothesised can do whatever he wants; any arguments from rationalism be damned.

I also find that this essay drips with a sort of condescension. Like, it's almost as if you're telling a coming-of-age story in which people emerge as perfect rationalists once they 'overcome' the 'big bad belief' that is the gauntlet of religion. I find that notion to be utterly ridiculous. 

I'm not trying to get into a religious debate here; your tone suggests your mind is made up about that. I am curious, in good faith, about the reasons for your belief. Without that, I can't read past the Yin and Yang bit in detail.

 

In respect of the rest of your post, I'll reference 'Open Source AI Spirits, Rituals, and Practices' (noduslabs.com), which already covers a lot of what you talk about. 'Bodymind Operating Systems' (HackerNoon), led by a guy named Dmitry Paranyushkin, explores a lot of your talking points quite extensively.

Comment by PhilosophicalSoul (LiamLaw) on Does LessWrong make a difference when it comes to AI alignment? · 2024-01-04T06:51:34.963Z · LW · GW

Do you think these disagreements stem from a sort of egoistic desire to be known as the 'owner' of that concept? Or to be a forerunner for that vein of research should it become popular? 

Or is it a genuinely good faith disagreement on the future of AI and what the best approach is? (Perhaps these questions are outlined in the articles you've linked, which I'll begin reading now. Though I do think it's still useful to perhaps include a summary here too.) Thanks for your help.

Comment by PhilosophicalSoul (LiamLaw) on Does LessWrong make a difference when it comes to AI alignment? · 2024-01-04T06:48:54.748Z · LW · GW

Ah okay, thanks. I wasn't aware of the Alignment Forum, I'll check it out.

I don't disagree that informal forums are valuable. I take Jacques Ellul's view in The Technological Society that science firms held by monopolies tend to have their growth stunted, for exactly the reasons you pointed out.

I think it's more that places like LessWrong are susceptible to having the narrative around them warped (referencing the article about Scott Alexander). Though this is slightly off-topic now. 

Lastly, I am interested in AI; I'm just feeling around for what the best way to get into it is. So thanks.

Comment by PhilosophicalSoul (LiamLaw) on Does LessWrong make a difference when it comes to AI alignment? · 2024-01-03T18:33:14.095Z · LW · GW

Thanks for that. 

Out of curiosity then, do people use the articles here as parts of bigger articles in other academic journals? Is this place a sort of 'launching pad' for ideas and raw data?

Comment by PhilosophicalSoul (LiamLaw) on AI Safety is Dropping the Ball on Clown Attacks · 2024-01-02T10:16:25.432Z · LW · GW

This is probably one of the most important articles in the modern era. Unbelievable how little engagement it's gotten.

Comment by PhilosophicalSoul (LiamLaw) on If Clarity Seems Like Death to Them · 2023-12-31T16:20:05.052Z · LW · GW

Thank you so much for this explanation. Through this lens, the post makes a lot more sense: a meaningful aesthetic death, then.