Roommate interest and coordination thread 2012-08-02T09:22:17.211Z · score: 9 (12 votes)


Comment by patrickscottshields on Decision Theories: A Semi-Formal Analysis, Part II · 2013-04-28T01:53:56.833Z · score: 0 (0 votes) · LW · GW

I'd like to cite this article (or related published work) in a research project paper I'm writing which includes application of an expected utility-maximizing algorithm to a version of the prisoner's dilemma. Do you have anything more cite-able than this article's URL and your LW username? I didn't see anything in your profile which could point me towards your real name and anything you might have published.

Comment by patrickscottshields on Decision Theory FAQ · 2013-04-11T15:32:43.081Z · score: 0 (0 votes) · LW · GW

I'm not sure which further details you are after.

Thanks for the response! I'm looking for a formal version of the viewpoint you reiterated at the beginning of your most recent comment:

Yes, if the player is allowed access to entropy that Omega cannot have then it would be absurd to also declare that Omega can predict perfectly. [...] The problem specification needs to include a clause for how 'randomization' is handled.

That makes a lot of sense, but I haven't been able to find it stated formally. Wolpert and Benford's papers (using game theory decision trees or alternatively plain probability theory) seem to formally show that the problem formulation is ambiguous, but they are recent papers, and I haven't been able to tell how well they stand up to outside analysis.

If there is a consensus that the sufficient use of randomness prevents Omega from having perfect or nearly perfect predictions, then why is Newcomb's problem still relevant? If there's no randomness, wouldn't an appropriate application of CDT result in one-boxing since the decision-maker's choice and Omega's prediction are both causally determined by the decision-maker's algorithm, which was fixed prior to the making of the decision?

There have been attempts to create derivatives of CDT that work like that. That replace the "C" from conventional CDT with a type of causality that runs about in time as you mention. Such decision theories do seem to handle most of the problems that CDT fails at. Unfortunately I cannot recall the reference.

I'm curious: why can't normal CDT handle it by itself? Consider two variants of Newcomb's problem:

  1. At run-time, you get to choose the actual decision made in Newcomb's problem. Omega made its prediction without any information about your choice or what algorithms you might use to make it. In other words, Omega doesn't have any particular insight into your decision-making process. This means at run-time you are free to choose between one-boxing and two-boxing without backwards causal implications. In this case Omega cannot make perfect or nearly perfect predictions, for reasons of randomness which we already discussed.
  2. You get to write the algorithm, the output of which will determine the choice made in Newcomb's problem. Omega gets access to the algorithm in advance of its prediction. No run-time randomness is allowed. In this case, Omega can be a perfect predictor. But the correct causal network shows that both the decision-maker's "choice" as well as Omega's prediction are causally downstream from the selection of the decision-making algorithm. CDT holds in this case because you aren't free at run-time to make any choice other than what the algorithm outputs. A CDT algorithm would identify two consistent outcomes: (one-box && Omega predicted one-box), and (two-box && Omega predicted two-box). Coded correctly, it would prefer whichever consistent outcome had the highest expected utility, and so it would one-box.
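A minimal toy sketch of variant 2 (the payoff amounts are the standard $1,000,000 / $1,000 Newcomb values; the function and dictionary names are my own illustrative assumptions):

```python
# Toy sketch of variant 2: the decision algorithm is fixed in advance,
# so Omega's prediction always equals the algorithm's actual output.
# Only the two consistent outcomes are therefore possible.

PAYOFFS = {
    ("one-box", "one-box"): 1_000_000,  # big box full; take only it
    ("two-box", "two-box"): 1_000,      # big box empty; take both
}

def best_consistent_choice():
    """Enumerate outcomes where choice == prediction and return the
    choice whose consistent outcome has the highest payoff."""
    consistent = {choice: payoff
                  for (choice, prediction), payoff in PAYOFFS.items()
                  if choice == prediction}
    return max(consistent, key=consistent.get)

print(best_consistent_choice())  # one-box
```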

(Note: I'm out of my depth here, and I haven't given a great deal of thought to precommitment and the possibility of allowing algorithms to rewrite themselves.)

Comment by patrickscottshields on Programming the LW Study Hall · 2013-03-17T19:26:29.875Z · score: 2 (2 votes) · LW · GW

This seems like an opportunity for a startup. It could be a fun project to build startup weekend-style. The concept doesn't seem particularly tied to the Less Wrong community, and (based on a couple minutes searching for "online study halls") there don't seem to be other prominent startups taking on this specific challenge.

Comment by patrickscottshields on Decision Theory FAQ · 2013-03-11T03:10:07.649Z · score: 0 (0 votes) · LW · GW

This response challenges my intuition, and I would love to learn more about how the problem formulation is altered to address the apparent inconsistency in the case that players make choices on the basis of a fair coin flip. See my other post.

Comment by patrickscottshields on Decision Theory FAQ · 2013-03-11T03:02:44.115Z · score: 1 (1 votes) · LW · GW

Thanks for this post; it articulates many of the thoughts I've had on the apparent inconsistency of common decision-theoretic paradoxes such as Newcomb's problem. I'm not an expert in decision theory, but I have a computer science background and significant exposure to these topics, so let me give it a shot.

The strategy I have been considering in my attempt to prove a paradox inconsistent is to prove a contradiction using the problem formulation. In Newcomb's problem, suppose each player uses a fair coin flip to decide whether to one-box or two-box. Then Omega could not have a sustained correct prediction rate above 50%. But the problem formulation says Omega does; therefore the problem must be inconsistent.
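The 50% claim is easy to check with a quick simulation (the always-one-box predictor and the trial count here are illustrative assumptions; any predictor with no access to the coin behaves the same way):

```python
import random

def simulate(predictor, trials=100_000, seed=0):
    """Measure a predictor's accuracy against agents who decide
    by a fair coin flip the predictor cannot observe."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        prediction = predictor()          # made without seeing the coin
        choice = rng.choice(["one-box", "two-box"])
        hits += (prediction == choice)
    return hits / trials

# Independent of strategy, accuracy converges to ~0.5.
print(round(simulate(lambda: "one-box"), 2))  # ≈ 0.5
```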

Alternatively, Omega knew the outcome of the coin flip in advance; let's say Omega has access to all relevant information, including any supposed randomness used by the decision-maker. Then we can consider the decision to already have been made; the idea of a choice occurring after Omega has left is illusory (i.e., the choice is deterministic, and anyone with enough information could have predicted it). Admittedly, as you say quite eloquently:

Choice is not something inherent to a system, but a feature of an outsider's model of a system, in much the same sense as random is not something inherent to a Eeny, meeny, miny, moe however much it might seem that way to children.

In this case of the all-knowing Omega, talking about what someone should choose after Omega has left seems mistaken. The agent is no longer free to make an arbitrary decision at run-time, since that would have backwards causal implications; we can, without restricting which algorithm is chosen, require the decision-making algorithm to be written down and provided to Omega prior to the whole simulation. Since Omega can predict the agent's decision, the agent's decision does determine what's in the box, despite the usual claim of no causality. Taking that into account, CDT doesn't fail after all.

It really does seem to me like most of these supposed paradoxes of decision theory have these inconsistent setups. I see that wedrifid says of coin flips:

If the FAQ left this out then it is indeed faulty. It should either specify that if Omega predicts the human will use that kind of entropy then it gets a "Fuck you" (gets nothing in the big box, or worse) or, at best, that Omega awards that kind of randomization with a proportional payoff (ie. If behavior is determined by a fair coin then the big box contains half the money.)

This is a fairly typical (even "Frequent") question so needs to be included in the problem specification. But it can just be considered a minor technical detail.

I would love to hear from someone in further detail on these issues of consistency. Have they been addressed elsewhere? If so, where?

Comment by patrickscottshields on Idea: Self-Improving Task Management Software · 2013-03-10T23:25:19.098Z · score: 2 (2 votes) · LW · GW

Task management has become a passion of mine; for the last two years or so I've been trying to build something close to what you're describing. I think it's cool that you're giving this a shot. Here are some of my thoughts:

  • Start small. Building good task management software is a hard challenge, potentially several orders of magnitude harder than you're expecting. I continually underestimated how much effort it would take to build my task management software.
  • If you want to work on this full-time, consider joining an existing team. Companies such as Asana are already in the task management space, and they have teams of software engineers and data scientists working on cool things. Joining an existing team allows you to specialize on a part of the software, whereas you might spread yourself too thin if you are responsible for all components of the project. Joining an existing team is basically what I'm trying to do now, after I decided pursuing my startup further was suboptimal. (Potential employers reading this: please contact me!)
  • Focus on the fundamental algorithms and APIs before considering presentation. Target the command line; in the browser, it's easy to get distracted by user experience issues and end up prematurely optimizing for them. Unless your software actually does the awesome things you want it to do on the technical side, it won't matter how nice its interface is. Developing for the command line forces you to focus on the actual algorithms and APIs.
  • Don't reinvent things and don't allow feature creep. If you feel like you're doing something new, do more research. Very little in the way of new algorithms, math, data structures, etc. is necessary in this area; most of the work to be done is in picking which things to use that have already been invented. Keep your code base and features small so you don't get overwhelmed by technical debt.
  • Take free online classes in algorithms, data structures, software development, machine learning, statistics, information theory, logic, AI, and planning. There's so much cool stuff out there that you might not know about, which could be useful for this sort of endeavor. For example, something I learned from Tim Roughgarden's algorithms class on Coursera is that a set of tasks with precedence constraints (e.g. constraints of the form "task X must be completed before task Y") can be represented by a directed graph. If the graph is acyclic, a topological sort can, in linear time, give a sequence of tasks that respects all precedence constraints (if the graph is cyclic, no such sequence exists).
  • One avenue to explore with this kind of software is data entry optimization. What optimal subset of data should be collected from the user? Data entry consumes users' time; it's suboptimal for a user with thousands of tasks to routinely update each task's parameters. I think by looking at tasks' parameters as random variables, we can use information theory and machine learning to decide when users should be asked to update various data. I wrote a paper exploring this.
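The precedence-constraint point above can be sketched with Python's standard-library graphlib (the task names are hypothetical):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical tasks: each key must come AFTER the tasks in its value set.
constraints = {
    "deploy": {"test"},
    "test": {"build"},
    "build": {"write code"},
}

# static_order() raises CycleError if the graph is cyclic; otherwise it
# yields a linear-time sequence respecting every precedence constraint.
order = list(TopologicalSorter(constraints).static_order())
print(order)  # ['write code', 'build', 'test', 'deploy']
```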

One thing you have going for you is that your project is open source. That allows for a lot of little contributions from people who are interested in the work, but already have their own sources of income. That might allow the project to survive where my startup failed. I still care deeply about task management, so it's possible our work will intersect in the future. I'm now following the GitHub repository you made for this project.

Comment by patrickscottshields on Thoughts on the January CFAR workshop · 2013-02-01T01:23:05.369Z · score: 4 (4 votes) · LW · GW

For example, I assumed the median staring salary for computer scientists was a reasonable estimate for what my starting salary would be. It turns out that I can expect to make about twice that much money if I use certain job hunting techniques I learned at the workshop and optimize for money (instead of, say, cool sounding problems).

What changed your expectation of your starting salary?

Comment by patrickscottshields on How to signal curiosity? · 2013-01-16T00:18:50.539Z · score: 2 (2 votes) · LW · GW

The potential benefits from private questioning should be weighed against the cost of the information not being visible to others. I like to see Wei_Dai's questions and the responses they elicit. I think the public exchanges have significant value beyond the immediate participants.

Comment by patrickscottshields on If we live in a simulation, what does that imply? · 2012-10-27T04:22:44.287Z · score: 2 (2 votes) · LW · GW

If our simulators are human, that implies that their universe has laws of physics similar to our own. But if we're living in a simulation, I think it's more plausible that our simulators exist in a world operating under different laws of physics (e.g. they live in a universe which is more amenable to our-universe-scale simulation.) So I think other factors are in play which could lessen the probability that we are being simulated by humans, let alone our future.

Comment by patrickscottshields on If we live in a simulation, what does that imply? · 2012-10-27T04:04:24.777Z · score: 2 (2 votes) · LW · GW

Or maybe just greater means. I imagine many humans would run universe-scale simulations, if they had the means.

Comment by patrickscottshields on 2012 Less Wrong Census Survey: Call For Critiques/Questions · 2012-10-27T03:39:55.133Z · score: 0 (0 votes) · LW · GW

For the probability estimates, I think it would be valuable to also ask for a ballpark estimate of how much time the survey-taker has put into thinking about each probability. Some people might spend (or have already spent) significantly more time thinking about these probabilities than others; gathering this information could provide a useful dimension for analysis.

Comment by patrickscottshields on 2012 Less Wrong Census Survey: Call For Critiques/Questions · 2012-10-27T03:12:57.557Z · score: 0 (0 votes) · LW · GW

It also creates potential time cost for people looking up what XX and XY chromosomes refer to. If you leave this question in the survey, can you at least include a heuristic for the uninformed, such as "heuristic: biologically female => XX; biologically male => XY"?

Comment by patrickscottshields on Who Wants To Start An Important Startup? · 2012-08-14T23:19:02.145Z · score: 12 (12 votes) · LW · GW

is a startup that appears to be doing a lot of this already.

Comment by patrickscottshields on Roommate interest and coordination thread · 2012-08-14T18:24:34.233Z · score: 0 (0 votes) · LW · GW

I feel like the first paragraph of my original explanation of my situation addressed this, so maybe I don't understand what you're asking. Can you either rephrase your question or give an example of the kind of response you're looking for?

Comment by patrickscottshields on Who Wants To Start An Important Startup? · 2012-08-14T17:05:22.322Z · score: 12 (14 votes) · LW · GW

I started MyPersonalDev a year ago to develop a data-driven personal development web application. The minimum viable product I envision is a task manager for people who like to think about utility functions (give your tasks utility functions!) My long-range vision is to use machine learning and collective intelligence to automate things like next-action determinations, value-of-information calculations, and probability estimates. I've written most of the minimum viable product already and use it extensively to manage my own tasks, but I haven't released anything publicly because it's easier to develop the software without having an existing user base.

The downside of having no user base is that there's no revenue, which is a real issue for me as a cash-strapped college student. I'll graduate in May with a degree in computer science, and I've been thinking hard about what to do after that. My impression is that working for my startup post-graduation would likely involve a period of extreme financial difficulty that I'd like to avoid. Consequently, I've been considering shutting it down and trying to get the best existing job I can get, using salary as the base metric and making adjustments for things like quality-of-life and is-the-company-doing-something-worthwhile. While ideologically frustrating (I like the idea of working for my startup full-time post-graduation), that has seemed to be the most instrumentally rational thing for me to do.

Here are some options I'll throw out there:

  • If there's collective interest in MyPersonalDev as a vehicle for some of the positive impact we're talking about in this thread, I'm interested in working with people to make that happen. Anyone interested in sponsoring development of the software or otherwise making it more financially viable during its startup phase should contact me. For the next nine months before I graduate, it could help to have a small, cheap office space near campus, as I'll be living on-campus and can't conduct commercial activity there. I'll plan to put a media kit together with more information on the company for interested parties.
  • If other programmers want to work with me on MyPersonalDev, either now over the internet, or in-person once I graduate in May, that would be exciting! I'm not sure how we would work it out in terms of equity and salary, but I'm open to suggestions. Right now the company is a stock corporation, of which I am the sole director and shareholder. I like that because it's lean (I don't need to get permission from other people to make business decisions.) That said, I'd want collaborators to be fairly compensated. Some sort of funding or revenue seems necessary for this to happen.
  • If I don't end up working for MyPersonalDev full-time post-graduation, I'm available as a programmer and aspiring rationalist who wants to work on something important. Until May, I'm available online; after May, I'm available in-person.
  • If people want to form a startup together, maybe they should live together too! I started a roommate interest coordination thread two weeks ago for purposes like that.
  • Like I said in that thread: If there were several people interested in working for their own startups, maybe they could lease a building together, or utilize collective purchasing to lower the costs of bookkeeping or legal services. (Is anyone interested in doing that?)

Look at my history of posts for more information about me. Like I said in a recent post:

I'm especially interested in collaborating with other programmers, working in Python or Go, working on data visualizations in D3, programming rationality exercises, or working on something that qualifies as "data science".

I want to work on something important. I want to work on a team. And I want to make enough money to live comfortably. When I graduate in May, I'm very interested in moving towards a more optimal living and working situation. Could we be a fit? Get in touch!

Comment by patrickscottshields on SI/CFAR Are Looking for Contract Web Developers · 2012-08-07T02:19:20.125Z · score: 3 (3 votes) · LW · GW

What are you developing? Why are you developing it in PHP?

Comment by patrickscottshields on Roommate interest and coordination thread · 2012-08-07T01:50:54.093Z · score: 1 (1 votes) · LW · GW

Thanks for this detailed post!

I have assumed a certain level of compromise when considering living situations. For example, I have assumed that people would not be willing to move to a specific city for the primary purpose of joining an awesome living environment, but would instead be willing only to optimize within preexisting geographical constraints.

If there were enough people willing to relocate somewhere for the primary purpose of establishing an awesome living environment, that opens up a new class of opportunities more appealing than the ones I've been considering. For example, if there were several people interested in working for their own startups, maybe they could lease a building together, or utilize collective purchasing to lower the costs of bookkeeping or legal services. (Is anyone interested in doing that?)

I think such an intentional living community would be significantly more difficult to create than finding a few compatible roommates in a particular city, but I'm willing to look into it further.

Comment by patrickscottshields on Roommate interest and coordination thread · 2012-08-04T03:34:52.690Z · score: 1 (1 votes) · LW · GW

Taboo "coordinate".

What do you think are the best places to live?

Comment by patrickscottshields on Roommate interest and coordination thread · 2012-08-04T03:07:53.602Z · score: 3 (3 votes) · LW · GW

I enjoyed reading your analysis. If there's anything in particular you want input on, I'd be happy to share my perspective.

Comment by patrickscottshields on Roommate interest and coordination thread · 2012-08-02T23:14:17.487Z · score: 2 (2 votes) · LW · GW

Thanks for sharing. What's your plan? How much of your time do you think it would be optimal to spend assessing your options with regard to where to live?

I love the idea of living with "agent-y" rationalists, but I definitely don't love the idea of slowly discovering that I'm intractably not motivated or smart enough to truly "hang."

My impression is that the majority of aspiring rationalists are willing to work with each other through our flaws, rather than expecting perfection. I suspect the smartest, most popular people in the rationality community take up a disproportionate amount of our attention, which can make inadequacy feel more plausible than it really is. If we try, I don't think we'll have trouble finding awesome living environments.

Comment by patrickscottshields on Looking for a roommate in Mountain View · 2012-08-02T19:14:17.091Z · score: 4 (4 votes) · LW · GW

Thanks for posting this. It inspired me to write a more general roommate coordination thread. I'm interested in the living situation you describe, but my housing situation is set until I finish my computer science degree in May. I also don't have a steady source of income right now.

When considering my prospects about where to live post-graduation, I'm torn between Silicon Valley and places that might have a higher quality/cost ratio. Can you share some of your rationale for choosing Silicon Valley over your other options? How would not having a steady source of income change your thinking about where to live?

Comment by patrickscottshields on Roommate interest and coordination thread · 2012-08-02T18:14:41.811Z · score: 0 (0 votes) · LW · GW

Are you looking to move in there?

Comment by patrickscottshields on Roommate interest and coordination thread · 2012-08-02T09:36:08.790Z · score: 0 (0 votes) · LW · GW

Discuss the concept of this thread here. For example, how could it be more useful? What would you do differently?

Comment by patrickscottshields on Roommate interest and coordination thread · 2012-08-02T09:26:04.301Z · score: 6 (6 votes) · LW · GW

I attended the Center for Applied Rationality's June rationality camp in Berkeley, and would very much like to have a full-time living environment similar to the environment at camp. I'm very interested in joining or working to create a living environment that values open communication and epistemic hygiene, facilitates house-wide life-hacking experimentation, provides a collaborative, fulfilling environment to live and work in, and those sorts of things.

I'll finish my computer science degree in May, and I plan to make changes to my living situation at that time. I plan to apply a portion of my time over the next ten months to identifying and assessing potential living environments, and I am interested in collaborating with others throughout the process. Contact me if you think collaboration could be mutually beneficial (I would rather you err on the side of contacting me.)

I started a software development company last summer under which I have been developing a web application that assesses tasks' utility in order to suggest high-utility tasks to users. I have not publicly released the application, but I use it daily to manage my own tasks. Contingent on my startup remaining a high-utility prospect in my mind, I'd like to work on it full-time after I graduate. I am very interested in live-work arrangements (e.g. working and living on the same premises), or in living close to a coworking space or an affordable office space.

My finances are limited right now. That would change if I got a full-time software engineering job once I graduate, but I'd rather work for my startup and finance things through part-time or contract work if necessary (if you're interested in hiring me, please contact me.) I'm especially interested in collaborating with other programmers, working in Python or Go, working on data visualizations in D3, programming rationality exercises, or working on something that qualifies as "data science".

I live in Kansas, and it's alright here. I preferred the weather in Berkeley when I visited there last month. I think I would enjoy living in the San Francisco bay area, but the cost of living is high there. I'm interested in identifying affordable places to live that are competitive with the amenities of the bay area. I'm also very interested in meeting and networking with potential roommates.

In terms of resources, I have found Sperling's BestPlaces to have a lot of good information about U.S. cities.

Comment by patrickscottshields on Open Thread, July 1-15, 2012 · 2012-07-04T02:18:50.658Z · score: 2 (2 votes) · LW · GW

I'm interested in idea 2. If you write about it, I'm especially interested in what you think we should do about it.

Comment by patrickscottshields on Personality analysis in terms of parameters · 2012-06-21T01:54:35.764Z · score: 0 (0 votes) · LW · GW

There are many different ways we could represent a personality (to varying degrees of accuracy.) I have not found a widely-accepted format, but I think we can each make our own for now. Whenever you wonder why someone acted a certain way, think about what the relevant parameters might have been and write them down. If several people work on this and share their results, perhaps one or more standardized personality representation formats will emerge.

The parameters collected by online user profiles such as those maintained by Facebook, Google Plus, or OkCupid might provide some inspiration.

If we had a good dataset of people and their personality attributes along with some performance measures, we could use machine learning to do neat things like predict relationship compatibility between two people. Imagine a rationalist dating service that used personality data to suggest matches! defines a "Person" model but it focuses primarily on circumstantial attributes rather than mental state.

Comment by patrickscottshields on Suggest alternate names for the "Singularity Institute" · 2012-06-19T16:38:16.800Z · score: 1 (1 votes) · LW · GW

I like "AI Risk Reduction Institute". It's direct, informative, and gives an accurate intuition about the organization's activities. I think "AI Risk Reduction" is the most intuitive phrase I've heard so far with respect to the organization.

  • "AI Safety" is too vague. If I heard it mentioned, I don't think I'd have a good intuition about what it meant. Also, it gives me a bad impression because I visualize things like parents ordering their children to fasten their seatbelts.
  • "Beneficial Architectures" is too vague. It's not clear it's AI-related.
  • "AI Impacts Research" is too vague and non-prescriptive. Unlike "AI Risk Reduction", it's ambiguous in its intentions.

Comment by patrickscottshields on What are you working on? April 2012 · 2012-04-17T02:20:54.210Z · score: 0 (0 votes) · LW · GW

I'm writing a forward planner to help me figure out whether to attend university for another year to finish my computer science degree, or do something else such as working for my startup full-time. I have a working prototype of the planner but still need to input most of the possible actions and their effects.

I chose this project because I think my software will do a better job assessing the utility of alternatives than my intuition, and because I implemented a forward planner for an artificial intelligence class I'm taking and wanted to apply something similar to my own life to help me plan my future.

Comment by patrickscottshields on Common mistakes people make when thinking about decision theory · 2012-04-06T19:26:21.832Z · score: 0 (0 votes) · LW · GW

Thank you. Your comment resolved some of my confusion. While I didn't understand it entirely, I am happy to have accrued a long list of relevant background reading.

Comment by patrickscottshields on Common mistakes people make when thinking about decision theory · 2012-03-28T20:13:01.750Z · score: 2 (2 votes) · LW · GW

I have several questions. I hadn't asked them because I thought I should do more research before taking up your time. Here are some examples:

  • What does it mean to solve the limited predictor problem? In what form should a solution be—an agent program?
  • What is a decision, more formally? I'm familiar with the precondition/effect paradigm of classical AI planning but I've had trouble conceptualizing Newcomb's problem in that paradigm.
  • What, formally, is an agent? What parameters/inputs do your agent programs take?
  • What does it mean for an agent to prove a theorem in some abstract formal system S?

I will plan to do more research and then ask more detailed questions in the relevant discussion threads if I still don't understand.

I think my failure to comprehend parts of your posts is more due to my lack of familiarity with the subject matter than your communication style. Adding links to works that establish the assumptions or formal systems you're using could help less advanced readers start learning that background material without you having to significantly lengthen your posts.

Thanks for the help!

Comment by patrickscottshields on Common mistakes people make when thinking about decision theory · 2012-03-28T12:44:55.478Z · score: 3 (3 votes) · LW · GW

My education in decision theory has been fairly informal so far, and I've had trouble understanding some of your recent technical posts because I've been uncertain about what assumptions you've made. I think more explicitly stating your assumptions could lessen the frequency of arguments about assumptions by decreasing the frequency of readers mistakenly believing you've made different assumptions. It could also decrease inquiries about your assumptions, like the one I made on your post on the limited predictor problem.

One way to do this could be to, in your posts, link to other works that define your assumptions. Such links could also function to connect less-experienced readers with relevant background reading.

Comment by patrickscottshields on The limited predictor problem · 2012-03-21T02:59:30.226Z · score: 0 (0 votes) · LW · GW

In section 2, you say:

Unfortunately you can't solve most LPPs this way [...]

By solving most LPPs, do you mean writing a general-purpose agent program that correctly maximizes its utility function under most LPPs? I tried to write a program to see if I could show a counterexample, but got stuck when it came to defining what exactly a solution would consist of.

Does the agent get to know N? Can we place a lower bound on N to allow the agent time to parse the problem and become aware of its actions? Otherwise, wouldn't low N values force failure for any non-trivial agent?

Comment by patrickscottshields on Attention Lurkers: Please say hi · 2010-04-20T01:14:05.125Z · score: 7 (7 votes) · LW · GW

Hi! I'm Patrick Shields, an 18-year-old computer science student who loves AI, rationality and musical theater. I'm happy I finally signed up--thanks for the reminder!