Posts

20 minutes of work as an artist in one future 2024-03-27T06:49:53.418Z
How is ChatGPT's behavior changing over time? 2023-08-17T20:54:46.505Z
Desensitizing Deepfakes 2023-03-29T01:20:41.540Z
Solving Mysteries 2023-03-28T17:46:12.795Z
Some of My Current Impressions Entering AI Safety 2023-03-28T17:46:12.718Z
Phib's Shortform 2023-03-28T17:46:12.644Z

Comments

Comment by Phib on Epistemic Hell · 2024-01-28T23:04:10.288Z · LW · GW

“they serendipitously chose guinea pigs, the one animal besides human beings and monkeys that requires vitamin C in its diet.”

This recent post, I think, describes the same phenomenon, though not driven by the same level of ‘necessity’ as, say, cures to big problems. Kinda funny too: https://www.lesswrong.com/posts/oA23zoEjPnzqfHiCt/there-is-way-too-much-serendipity.

Comment by Phib on AI doing philosophy = AI generating hands? · 2024-01-18T00:26:47.997Z · LW · GW

So here was my initial quick test. I haven't spent much time on this either, but I have seen the same images of faces on subreddits etc. and been very impressed. I think asking for emotions was a harder challenge than just making a believable face/hand, oops.

I really appreciate your descriptions of the distinctive features of faces and of pareidolia, and I agree that faces are more often better represented than hands; specifically, hands often have the more significant/notable issues (misshapen/missing/overlapping fingers). With faces, there's nothing as significant as a missing eye, but it can be hard to portray something more specific like an emotion (though the same can be said for, e.g., getting Dalle not to flip me off when I ask for an index finger, haha).

It's also rather difficult to label or prompt for a specific hand orientation you'd like, versus, I suppose, an emotion (there are a lot more descriptive words for the orientation of a face than of a hand).

So yeah, faces do work, and regardless of my thoughts on the uncanny valley of some faces+emotions, I actually do think hands (the OP's subject) are mostly a geometric-complexity thing; maybe we see our own hands so much that we are more sensitive to error? But hands don't carry the same meaning for me that faces do (minute differences convey slightly different emotions, and we perhaps benefit from being able to read them accurately).

Comment by Phib on AI doing philosophy = AI generating hands? · 2024-01-17T21:34:51.582Z · LW · GW

I think if this were true, it would also hold that faces are done rather poorly right now, which... maybe? Doing some quick tests: yeah, both faces and hands, at least on Dalle-3, seem similar levels of off to me.

Comment by Phib on 2023 in AI predictions · 2024-01-04T02:24:46.103Z · LW · GW

Wow, I’m impressed it caught itself; I was just trying to play with that 3 x 3 problem too. Thanks!

Comment by Phib on 2023 in AI predictions · 2024-01-04T02:07:48.773Z · LW · GW

I don’t know [if I understand] the full rules, so I don’t know if this satisfies them, but here:

https://chat.openai.com/share/0089e226-fe86-4442-ba07-96c19ac90bd2

Comment by Phib on Phib's Shortform · 2023-09-15T23:18:40.808Z · LW · GW

Kinda commenting on stuff like “Please don’t throw your mind away” or any advice not to fully defer judgment to others (and not intending to straw-man these! They’re nuanced and valuable; I just mean to take them a step further).

In my circumstance, and I imagine that of many others who are young and trying to learn and get a job, I think you have to defer to your seniors/superiors/program to a great extent, or at least to the extent that you accept or act on things (perform research, support ops) that you’re quite uncertain about.

Idk, there’s a lot more nuance to this conversation, as with any, of course. Maybe nobody is certain of anything and they’re just staking a claim so that they can be proven right or wrong and experiment that way, producing value through their overconfidence. But I do get the sense that young/new people coming into a field that is even slightly established are required, to some extent, to defer to others for their own sake.

Comment by Phib on The AI apocalypse myth. · 2023-09-08T18:01:24.587Z · LW · GW

I don’t mean to present myself as having the “best arguments that could be answered here,” or as at all representative of the alignment community; I just wanted to engage. I appreciate your thoughts!

Well, one argument for potential doom doesn’t require an adversarial AI, but rather people using increasingly powerful tools in dumb and harmful ways (in the same class of consideration for me as nuclear weapons; my dumb imagined version of this is a government using AI to continually scale up surveillance, until maybe we eventually end up in a position like that of 1984).

Another point is that a sufficiently intelligent and agentic AI would not need humans; it would probably eventually be suboptimal to rely on humans for anything. And it kinda feels to me like this is what we are heavily incentivized to design: the next best and most capable system. In terms of efficiency, we want to get rid of the human in the loop; that person’s expensive!

Comment by Phib on AI-Plans.com - a contributable compendium · 2023-06-27T00:31:20.930Z · LW · GW

Idk about the public accessibility of some of these things, like with Nonlinear's recent round, but seeing a lot of applications there, organized by category, reminded me of this post a little bit.

edit - in terms of seeing what people are trying to do in the space, though I imagine this does not capture the biggest players that do have funding.

Comment by Phib on AI-Plans.com - a contributable compendium · 2023-06-26T17:31:46.974Z · LW · GW

Btw, small note: I think accumulations of grant applications are probably pretty good sources of info.

Comment by Phib on Phib's Shortform · 2023-06-08T00:39:56.313Z · LW · GW

BTW - this video is quite fun. Seems relevant re: Paperclip Maximizer and nanobots.

Comment by Phib on grey goo is unlikely · 2023-04-17T18:16:03.096Z · LW · GW

Low commitment here, but I've previously used nanotech as an example (rather than a probable outcome) of a class of somewhat-known unknowns - to portray possible future risks that we can imagine as possible without fully conceiving them. So while grey goo might be unlikely, it seems the precursor to grey goo - a pretty intelligent system trying to mess us up - is the thing to focus on, and grey goo is just one of its many possibilities that we can even imagine.

Comment by Phib on Being at peace with Doom · 2023-04-10T00:18:46.434Z · LW · GW

I rather liked this post (and I’ll put this comment on both the EAF and LW versions):

https://www.lesswrong.com/posts/PQtEqmyqHWDa2vf5H/a-quick-guide-to-confronting-doom

In particular, the comment by Jakob Kraus reminded me that many people have faced imminent doom (not of the human species, but certainly quite terrible experiences).

Comment by Phib on Foom seems unlikely in the current LLM training paradigm · 2023-04-09T23:58:16.174Z · LW · GW

Hi, writing this while on the go, but just throwing it out there: this seems to be Sam Altman’s intent with OpenAI in pursuing fast timelines with slow takeoffs.

Comment by Phib on Run Posts By Orgs · 2023-03-29T06:16:21.879Z · LW · GW

I was unaware of those decisions at the time. I imagine people are making decisions under some degree of uncertainty, even if that uncertainty could be resolved by info somewhere out there. Perhaps there’s some optimization of how much time you spend looking into something versus how right you can expect to be?

Comment by Phib on Run Posts By Orgs · 2023-03-29T05:51:27.371Z · LW · GW

An anecdote about me (not super practiced in rationality, also just at times dumb): I sometimes discover that stuff I briefly took to be true in passing turns out to be false later. It feels like there’s an edge of truths/falsehoods that we investigate pretty loosely but still label with some valence of true/false, maybe a bit too liberally at times.

Comment by Phib on Phib's Shortform · 2023-03-28T16:25:58.521Z · LW · GW

LLMs as a new benchmark for human labor: using ChatGPT as a control group versus my own efforts, to see if my efforts are worth more than the (new) default.

Comment by Phib on Demons from the 5&10verse! · 2023-03-28T05:53:43.155Z · LW · GW

Thanks for writing this; I enjoyed it. I was wondering how best to present this to other people - perhaps with an example of the 5-and-10 problem where you let a participant make the mistake, then question their reasoning, etc., leading them down the path laid out in your post of rationalizing after the decision, before finally showing them their full thought process in retrospect. I could certainly imagine myself doing this, and I hope I’d be able to escape my faulty reasoning…

Comment by Phib on Can GPT-4 play 20 questions against another instance of itself? · 2023-03-28T02:34:13.076Z · LW · GW