Posts

Against Compromise, or, Deciding as a Team without Succumbing to Entropy 2017-01-08T13:22:45.819Z
Mind uploading from the outside in 2015-11-29T02:05:07.228Z
All discussion post titles, points, and dates as an excel sheet 2014-06-03T14:38:20.341Z
RapGenius + Sequences = ? 2013-08-01T06:04:57.281Z
Rationality & Startups - The Workshop 2012-02-28T11:25:52.384Z
Meetup : London This Sunday 2011-10-14T08:48:31.123Z
[LINK] Robin Hanson on Carl Shulman's recent paper on Whole Brain Emulation 2011-10-05T07:51:22.952Z
Decision Fatigue, Rationality, and Akrasia. 2011-09-19T15:37:26.534Z
Meetup : London Science Museum, Aug. 31 2011-08-26T12:54:14.893Z
London meetup, Sunday 2011-08-21 14:00, near Holborn 2011-08-20T18:00:43.174Z
Meetup : Two-monthly London Meetup 2011-06-29T07:16:32.942Z
London Meetup 05-Jun-2011 - very rough minutes 2011-06-09T13:40:47.148Z
Fine-tuned for Interestingness vs. Ramsey's Theorem 2011-05-16T17:07:49.703Z
[SEQ RERUN] Why truth? And... 2011-04-20T19:20:55.178Z
Link Sharing Thread - April '11 2011-04-11T09:03:08.907Z
96 Bad Links in the Sequences 2011-04-07T10:39:00.843Z
London Hackday, this Friday, April 1st (No, this is not a joke) 2011-03-29T14:57:40.930Z
Project Ideas for the London Hackday 2011-03-20T22:44:30.679Z
Tweetable Rationality 2011-03-12T20:00:09.650Z
Bring Back the Sequences? 2011-03-07T07:21:32.038Z
Rationality Quotes: March 2011 2011-03-02T11:14:22.319Z
[Link] "It'll never work": a collection of failed predictions 2011-02-19T18:02:17.645Z
Is Atheism a failure to distinguish Near and Far? 2011-02-02T04:52:39.226Z
Many of us *are* hit with a baseball once a month. 2010-12-22T17:56:02.982Z
London Meetup on 2011/1/2 2010-12-19T21:01:51.127Z
An Intuitive Explanation of Eliezer Yudkowsky’s Intuitive Explanation of Bayes’ Theorem 2010-12-18T13:26:18.180Z
Reading Level of Less Wrong 2010-12-13T09:54:28.812Z
Calling LW Londoners 2010-12-11T17:53:31.389Z
Fine-Tuned Mind Projection 2010-11-29T00:08:07.800Z
Startups 2010-11-24T21:13:45.409Z
Rationality is Not an Attractive Tribe 2010-11-23T14:08:33.563Z
Common Sense Atheism summarizing the Sequences 2010-11-19T12:55:00.130Z
Zero Bias 2010-11-17T12:16:58.346Z
Beyond Optimization by Proxy 2010-05-27T13:16:45.798Z
Is Google Paperclipping the Web? The Perils of Optimization by Proxy in Social Systems 2010-05-10T13:25:41.567Z
Single Point of Moral Failure 2010-04-06T22:44:51.369Z

Comments

Comment by Alexandros on Ivermectin: Much Less Than You Needed To Know · 2022-10-08T16:58:02.898Z · LW · GW

Much of this sounds very speculative, to be completely honest, and I'm not sure I agree with your diagnosis of what "rationalists like Scott" care about.

I would be interested in hearing what prediction, specifically, would be concrete and interesting enough to put up on Metaculus. Or was it the one about the data not being released? Because I'm actively working on multiple fronts to get it released, so "predicting it won't" just feels wrong.

Comment by Alexandros on Ivermectin: Much Less Than You Needed To Know · 2022-10-07T22:41:31.504Z · LW · GW

Admittedly I've not looked into how Metaculus works. How would I go about registering such a prediction?

Understanding that there was randomization failure, and that that failure was at the expense of ivermectin, takes about 10-15 minutes for anyone who can do addition and subtraction -- I've got all the receipts here:

https://doyourownresearch.substack.com/p/demonstrating-randomization-failure

Maybe a little more time if they want to confirm the receipts and make sure there's no credible counter-argument to be made. It's either that, or the numbers coming out of the trial are false -- not sure which is worse.

Since writing that post, I've seen more internal data from the trial that confirms it.

How would I go about getting people to bet against me on this? And crucially, how would it help get the data released? I already offered to donate $25k to ACX Grants if Scott helps get the data released, which is my main objective. Will this help in that direction?

Comment by Alexandros on Ivermectin: Much Less Than You Needed To Know · 2022-10-07T10:21:05.115Z · LW · GW

Given that I have access to insider sources of information and a lot of inside data that I can't yet release publicly (you will have to take my word on this, sadly), it would be pretty bad form for me to make predictions other than the ones I have already made (many of which were made before I had that inside data):

The TOGETHER trial suffered randomization failure: the placebo group is not concurrent, and that triggered a chain of events that led to the trial allocating disproportionately sick patients to ivermectin and disproportionately healthy patients to fluvoxamine, with placebo in the middle. This was amplified by several puzzling decisions by the TOGETHER team. All this against a backdrop of indefensible dosing for ivermectin and widespread community use in Brazil, where it was available OTC.

I've summarized many of my concerns here: https://doyourownresearch.substack.com/p/10-questions-for-the-together-trial

And I've shared my model of what I think happened here: https://doyourownresearch.substack.com/p/together-trial-solving-the-3-day

There's a lot more to go over, but long story short: what I do doesn't involve a lot of probabilistic arguments. It's mostly logical inference, inference that anyone can replicate since I try to post receipts as much as possible. As a result, whenever I've had the chance to see internal data, it's matched my models pretty well.

Comment by Alexandros on On the importance of Less Wrong, or another single conversational locus · 2016-12-02T08:55:54.476Z · LW · GW

LW has a BDFL already. He's just not very interested and (many) people don't believe he's able to restore the website. We didn't "come to believe" anything.

Comment by Alexandros on On the importance of Less Wrong, or another single conversational locus · 2016-12-02T08:54:23.063Z · LW · GW

An additional point is that the dictator can indeed quit, and is not forced to kill themselves to get out of it. So it's actually not FL. And in fact it's arguably not even a dictatorship, as it depends on the consent of the governed. Yes, BDFL is intentionally outrageous, to make a point. What's yours?

Comment by Alexandros on On the importance of Less Wrong, or another single conversational locus · 2016-11-30T04:24:31.717Z · LW · GW

I've done my fair bit of product management, mostly on resin.io and related projects (etcher.io and resinos.io), and can offer some help in re-imagining the vision behind LW.

Comment by Alexandros on On the importance of Less Wrong, or another single conversational locus · 2016-11-30T04:22:57.964Z · LW · GW

That's awesome. I'm starting to hope something may come of this effort.

Comment by Alexandros on On the importance of Less Wrong, or another single conversational locus · 2016-11-29T10:55:56.959Z · LW · GW

Who is empowered to set Vaniver or anyone else as the BDFL of the site? It would be great to get into a discussion of "who", but I wonder how much weight there will be behind this person. Where would the BDFL's authority emanate from? Would he be granted, for instance, ownership of the lesswrong.com domain? That would be a sufficient gesture.

Comment by Alexandros on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T10:40:52.900Z · LW · GW

Hi Anna,

Please consider a few gremlins that are weighing down LW currently:

  1. Eliezer's ghost -- He set the culture of the place, his posts are central material, he punctuated its existence with his explosions (and refusals to apologise), and then upped and left the community without actually acknowledging that his experiment (well-kept gardens etc.) has failed. As far as I know he is still the "owner" of this website, retains ultimate veto on a bunch of stuff, etc. If that has changed, there is no clarity on who the owner is (I see three logos on the top banner; is it them?), who the moderators are, or who is working on it in general. I know Tricycle are helping with development, but a part-time team is only marginally better than no team, and at least no team is an invitation for a team to step up.

  2. The no-politics rule (related to #1) -- We claim to have some of the sharpest thinkers in the world, but for some reason shun discussing politics. Too difficult, we're told. A mindkiller! This cost us Yvain/Scott, who cited it as one of his reasons for starting Slate Star Codex, which now dwarfs LW. Oddly enough, I recently saw SSC linked from the front page of realclearpolitics.com, which means that not only has discussing politics not harmed it, it may actually be drawing in people who want genuine insight into an extremely complex, high-interest space.

  3. the "original content"/central hub approach (related to #1) -- This should have been an aggregator since day 1. Instead it was built as a "community blog". In other words, people had to host their stuff here or not have it discussed here at all. This cost us Robin Hanson on day 1, which should have been a pretty big warning sign.

  4. The codebase -- This website carries tons of complexity inherited from the Reddit codebase. Weird rules about responding to downvoted comments have been implemented in there, and nobody can make heads or tails of it. Use something modern, and make it easy to contribute to (Telescope seems decent these days).

  5. Brand rust -- LessWrong is now kind of like MySpace or Yahoo: it used to be cool, but once a brand takes a turn for the worse, it's really hard to turn around. People have painful associations with it (basilisk!). It needs a burning of ships, a clear focus on the future, and as much support as possible from as many interested parties as possible, but only to the extent that they don't dilute the focus.

In the spirit of the above, I consider Alexei's hints that Arbital is "working on something" a really bad idea, though I recognise the good intention. Efforts like this need critical mass and clarity, and dissipating yet another wave of people who want to do something about LW with vague promises of something nice in the future (which still suffers from problem #1, AFAICT) is exactly what I would do if I wanted to maintain the status quo for a few more years.

Any serious attempt at revitalising lesswrong.com should focus on defining ownership and a clear plan. A post by EY himself recognising that his vision for LW 1.0 failed and passing the baton to a generally accepted BDFL would be nice, but I'm not holding my breath. Further, I am fairly certain that LW as a community blog is bound to fail: strong writers enjoy their independence. LW as an aggregator first (with perhaps the ability to host content if people wish, like HN) is fine. HN may have degraded over time, but much less so than LW, and we should be able to improve on their pattern.

I think that if you want to unify the community, what needs to be done is the creation of an HN-style aggregator with a clear, accepted, willing, opinionated, involved BDFL, input from the prominent writers in the community (Scott, Robin, Eliezer, Nick Bostrom, others), and the archiving of the current lesswrong.com in favour of that new aggregator. But even if it's something else, it will not succeed without three basic ingredients: clear ownership, dedicated leadership, and the broadest possible support for a simple, well-articulated vision. LessWrong tried to be too many things, with too little in the way of backing.

Comment by Alexandros on The correct response to uncertainty is *not* half-speed · 2016-01-17T01:48:28.686Z · LW · GW

Reminds me of the motto "Strong Opinions, Weakly Held". There's no point having a blurry opinion, or not expressing what you believe to be the most likely candidate for a good way forward, even if it's more likely by only a small margin. By expressing (and/or acting on) a clearly stated, falsifiable opinion, you expose it to criticism, refutation, improvement, etc. And if you hold it weakly, you will be open to reconsidering. Refusing to make up your mind, oscillating between a few options, perhaps waiting to see where the wind blows, has its advantages, but especially when it comes to getting things done it is most often a clear loser. Despite this, our brains seem to prefer it instinctively, maybe due to some ancestral-environment echoes about being proven wrong in the eyes of the tribe.

Comment by Alexandros on Mind uploading from the outside in · 2015-11-30T23:46:12.259Z · LW · GW

You appear to be arguing about definitions. I'm not interested in going down that rabbit hole.

Comment by Alexandros on Mind uploading from the outside in · 2015-11-30T05:55:54.791Z · LW · GW

Which in turn depends on what you mean by "artificial".

Comment by Alexandros on Mind uploading from the outside in · 2015-11-30T05:35:45.492Z · LW · GW

I don't use the word "consciousness", as it's a complex concept that isn't really necessary in this context. I approach a mind as an information-processing system, and information-processing systems can most certainly be distributed. What that means for consciousness depends on what you mean by consciousness, I suppose, but I would not like to start that conversation.

Comment by Alexandros on Mind uploading from the outside in · 2015-11-30T03:40:23.953Z · LW · GW

There are still many intermediate steps. What does it mean "to be conscious of a sensory input"? Are we talking system 1 or system 2? If the brain is composed of modules, which it likely is, what if some of them are digital and able to move to where the information is, and others are not? What if the biological part's responses can be modelled well enough to be predicted digitally 99.9% of the time, such that a remote near-copy can be almost autonomous by means of optimistic concurrency, correcting course only when the verdict comes back different from the prediction? The notion of the brain as a single indivisible unit that "is aware of an input" quickly fades away when the possibilities of software are taken into account, even when only part of you is digital.
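
To make the optimistic-concurrency idea concrete, here is a minimal toy sketch in Python; every name and the 99.9% figure are illustrative assumptions, not a claim about how such a system would actually be built:

    import random

    def predicted_response(stimulus):
        # Fast local model of the slow biological module (assumed ~99.9% accurate).
        return stimulus * 2

    def actual_response(stimulus):
        # Ground truth from the biological part, arriving after a round-trip delay.
        return stimulus * 2 if random.random() < 0.999 else stimulus * 3

    def optimistic_step(stimulus, act, correct_course):
        guess = predicted_response(stimulus)
        act(guess)                           # proceed without waiting for the verdict
        verdict = actual_response(stimulus)  # the real answer arrives later
        if verdict != guess:
            correct_course(guess, verdict)   # the rare mismatch: roll back and fix up

    optimistic_step(21, act=print, correct_course=lambda g, v: print("rollback", g, v))

The near-copy stays responsive precisely because the mismatch branch is the exception rather than the rule.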

Comment by Alexandros on Mind uploading from the outside in · 2015-11-30T03:20:04.443Z · LW · GW

Surely by the point at which your entire sensory input comes from the digital world, you are somewhat uploaded, even if part of the processing happens in biological components. What does it mean to "travel" when you can receive sensory inputs from any point in the network? There are several Rubicons to be crossed, and transitioning from "has tiny biological part" to "has no biological part" is another, but it's definitely smaller than "one day an ape, the next day software". What's more, I'm not arguing that there aren't disruptive steps, but that each step is small enough to make sense for a non-adventurous person as a step increase in convenience. It's the Ship of Theseus of mind uploading.

Comment by Alexandros on SSC discussion: growth mindset · 2015-04-12T09:34:09.544Z · LW · GW

This whole conversation sounds to me like people arguing whether width or height is a more important factor in the area of a rectangle. Or perhaps what percentage of the total each is responsible for.

It seems we humans are desperate to attribute everything to a single cause, or, if something has multiple causes, to allocate x% of the causality to each factor. However, success quite often has multiple contributing factors and exhibits "the chain is as strong as its weakest link" behaviour. When rephrased in terms of the contribution width and height make to the area of a rectangle, a lot of the conversation sounds like a category error. Many of the metaphors we try to apply simply do not make sense.
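
A toy illustration in Python of why the percentage framing fails for multiplicative factors (the numbers are arbitrary):

    def area(width, height):
        return width * height

    base = area(2, 10)         # 20
    print(area(4, 10) / base)  # 2.0 -- doubling width doubles the area
    print(area(2, 20) / base)  # 2.0 -- doubling height does exactly the same
    # At the margin, each factor is "100% responsible" for the area, so
    # allocating x% of the outcome to width and (100 - x)% to height is
    # a category error rather than a measurable quantity.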

Comment by Alexandros on 'Dumb' AI observes and manipulates controllers · 2015-01-13T18:22:01.860Z · LW · GW

The truly insidious effects come when the content of the stories changes the reward without going through the standard quality-evaluation function.

For instance, maybe the AI figures out that the order of the stories affects the rewards. Or perhaps it finds that stories which create a climate of joy/fear on campus lead to overall higher/lower evaluations for that period. Then the AI may be motivated to "take a hit" and push through some fear-mongering so as to raise its evaluations for the following period. Perhaps it finds that causing strife in the student union, or racial conflict, or trouble with the university faculty affects its rewards one way or another. Perhaps, if it's unhappy with a certain editor, it can slip through errors bad enough to get the editor fired, hopefully to be replaced by a more rewarding one.

And so on.

Comment by Alexandros on October 2014 Bragging thread. · 2014-10-08T03:32:48.360Z · LW · GW

Got a project we worked on for my startup covered on hackaday.com!

Comment by Alexandros on Ways to improve LessWrong · 2014-09-16T21:25:55.464Z · LW · GW
  1. I don't know, I haven't done the effort estimation. It just looks like more than I'd be willing to put in.
  2. One hypothesis is that LessWrong.com is a low priority item to them, but they like having it around, so they are averse to putting in the required amount of thought to evaluate a change, and inclined to leave things as they are.
  3. I think it is unlikely it will have as much benefit as you expect, and that the pain will be bigger than you expect. However, if you add the fact that your drive may help you learn to program, then the ROI tips the other way massively.

By the way, an alternative explanation for the fact that so many developers are here but so few (or none) actually contribute to LW code, is that they're busy making lots of money or working on other things they find exciting. This is good news for you, because making the changes may be easier than I originally estimated. As long as you are determined enough.

Comment by Alexandros on Ways to improve LessWrong · 2014-09-16T21:17:48.379Z · LW · GW

The issue is that the content does get written. It just doesn't find its way here.

Comment by Alexandros on Ways to improve LessWrong · 2014-09-15T05:02:13.292Z · LW · GW

I admire your optimism and determination. It's not my intention to convince you not to try. Even if you don't succeed (and it's not impossible that you could), you will certainly get a lot out of it. So take my negativity as a challenge, and prove me wrong :).

Comment by Alexandros on Ways to improve LessWrong · 2014-09-15T03:34:45.210Z · LW · GW

Consider the fact that many, many programmers frequent LW. It's quite likely the majority of members know how to program a computer, and most of them have a very high level of skill. Despite this, contributions to LW's codebase have been minimal over the life of this website. I take this as extremely strong evidence that the friction to getting any change through is very, very high.

Comment by Alexandros on Ways to improve LessWrong · 2014-09-15T03:01:46.008Z · LW · GW

The problem is that these suggestions have orders-of-magnitude higher implementation cost. This is further compounded by the facts that (1) LW uses a fork of the Reddit codebase, which was not built with modification in mind, and (2) the owners of LW are (a) hard to engage in a conversation about changes and (b) even harder to get to actually apply one.

The suggestion I made above suffers from none of these, and is technically implementable in a weekend (tops) by a single developer -- me. Whether it will be successful or not is a different story.

All in all, I share your sense that this community is not nearly as well organised as it could be, given the subject matter. Unfortunately we seem stuck in a local maximum of organisation.

Comment by Alexandros on Ways to improve LessWrong · 2014-09-15T02:10:07.170Z · LW · GW

I have spent a fair amount of time thinking about this. Fundamentally, in order to discuss improvements, it's necessary to identify the sources of pain. The largest problem (and/or existential threat) I can see with LW is its stagnation/decline, both in content and in new insights generated here.

Charitably, I suspect LW was built on the assumption that it would always have great content coming in, so the target of most design decisions, policies, implied norms, and ad hoc decisions (let's call all these "constraints") was to restrict bad content. Even its name can be read as pointing to this principle, and the infamous "Well-Kept Gardens" post is another good pointer. Unfortunately, the side effect of these constraints, plus the community they shaped, has been to push out most of the best authors, including the earliest active members, who have scattered in many different directions while remaining nominally affiliated with LW and/or its community. As a result, LW itself is a shadow of its former self. Currently the community is concentrating in other venues, with Slate Star Codex probably getting more comments per day than LW itself, and SSC is not the only alternative venue.

With the above problem statement in mind, the best ROI I can find for a developer wanting to improve the experience of the broader LW community is to set up a Hacker News clone (e.g. an instance of telesc.pe) aimed at the issues the LW community cares about.

Having a central location that aggregates worthy content from LW, SSC, OB, the MIRI blog, and most other rationalist-sphere blogs, plus an equal amount of content of rationalist interest from the rest of the web, collectively filtered by the community, would make my experience of the LW-sphere much, much better, and I suspect I am pretty typical in this regard.
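
As a rough sketch of the collection side, assuming the real feedparser library and purely illustrative feed URLs (the voting/ranking layer is the part a platform like Telescope would supply out of the box):

    import feedparser  # pip install feedparser

    # Illustrative feed list -- the actual URLs would need checking.
    FEEDS = [
        "https://slatestarcodex.com/feed/",
        "https://www.overcomingbias.com/feed",
        # ...plus LW, the MIRI blog, and other rationalist-sphere sources.
    ]

    def collect_items(feeds):
        """Pull every entry from every feed into one flat list."""
        items = []
        for url in feeds:
            for entry in feedparser.parse(url).entries:
                items.append({"title": entry.title, "link": entry.link})
        return items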

The aggregator not being under MIRI/LW control would probably be a net positive, given the history of LW's management. The point would not be to replace the things LW does well (giving people a venue to post relevant material), but the things it does not do well (aggregating the wider rationality community, filtering quality in a quasi-democratic way).

The major problem for such an aggregator would of course be lack of adoption, so I would like to hear from other LW members whether such a move would interest them. I am committing to set this up if convinced that there is indeed enough interest. I have provisionally bought distributedconspiracy.com for this purpose.

Comment by Alexandros on "Follow your dreams" as a case study in incorrect thinking · 2014-08-21T11:13:10.944Z · LW · GW

I wonder to what degree "follow your dreams" is a counterbalance to Dunning-Kruger. That is, the people who should follow their dreams are likely to underestimate themselves, so a general "go for it against the odds" climate might be just enough to push them to actually follow through. This would still leave the less skilled suffering as they chase dreams they can't succeed at, but some thought should go into whether the end result is positive for humanity-in-general or not.

There is also something to be said for the fact that sometimes the people who should follow their dreams are not apparent, and you only figure out they "had it in them" if they actually go through the process of pushing through and improving themselves for it. This is why investment (and hiring) is so hard: all of a person's history isn't enough to tell you whether they will succeed in a new environment. You can select for an unbroken string of successes, but that still leaves a huge number of false negatives. Again, this bears on whether it is better for humanity-in-general to carry the "follow your dreams" meme.

And of course there is the related thought that the success cases of following your dreams might be wider than actually achieving them. In that case, following your dreams pushes you to strive for excellence, which will push you to develop conscientiousness, a positive attitude towards learning, and potentially a greater degree of agency. These characteristics are extremely valuable in many roles. Following something more conventional might not have motivated you enough to actually mould yourself into a fiercer agent. If this last thought is true, following your dreams, even in zero-sum games, might be positive-sum when looked at through a wide enough lens.

Comment by Alexandros on Quantified Risks of Gay Male Sex · 2014-08-19T14:28:13.707Z · LW · GW

And of course this is another case of "just because you hired the top 1% of the CVs you received doesn't mean those you hired are in the top 1% of programmers". Less-good programmers are more often looking for a job.

Is there a name for this pattern?

Comment by Alexandros on Identification of Force Multipliers for Success · 2014-06-25T18:32:07.100Z · LW · GW

Funnily enough, this list translates pretty well to the context of a whole business or organisation. Great work!

Comment by Alexandros on Bragging Thread, June 2014 · 2014-06-14T05:53:48.201Z · LW · GW

The Bay is where it's at for the kind of thing I want to do. In 2 months in SF I spoke face-to-face with more people, and more senior ones, than I did in 3 years in London. San Francisco is a city so dense with developers and startup folk that New Relic feels comfortable paying for poster ads on the street. Being where the density of talent is, is a no-brainer. Besides that, the money is there, the partners are there, and the developer thought leaders are mostly there. It's kind of hard to make a case for being anywhere else, really. Plus, on balance, it's a pretty awesome area to live in.

Comment by Alexandros on Bragging Thread, June 2014 · 2014-06-11T08:38:17.046Z · LW · GW

I spent the last two months in the Valley, away from my team and close ones. I pitched my startup to several investors, big and small. I had to learn the game and the local culture on the fly. I went through insane ups and downs while keeping it together (mostly).

In the end I returned with a signed term sheet from one of the biggest funds in the Valley, for about 2.5x the amount I was looking for. This quadruples the value of our shares from our last round in September. Assuming the term sheet converts to money in the bank, my team and I will be moving to the Bay in the next 6 months with enough backing to take a proper shot at building a huge company. And now, to actually get some work done :)

Comment by Alexandros on [Meta] The Decline of Discussion: Now With Charts! · 2014-06-05T19:08:47.368Z · LW · GW

Complaint isn't actually a high enough bar. If I had a waiter serve me breakfast in bed every morning and suddenly I had to go to the kitchen for it, you bet I'd complain. The question is: would people not visit links based on the title alone?

In any case, I've explained this enough times that I think I've done as much as I could have. I'll just leave it at this.

Comment by Alexandros on [Meta] The Decline of Discussion: Now With Charts! · 2014-06-05T18:22:26.670Z · LW · GW

All I'm saying is that we have a supply problem, and you're raising a demand issue. Also, the issue you're raising is based on an anecdote that seems sufficiently niche as to not be worth the tradeoff (i.e. not solving the supply issue). If you have evidence that the demand for summaries is general, I'd like to see it.

Comment by Alexandros on [Meta] The Decline of Discussion: Now With Charts! · 2014-06-05T14:42:00.777Z · LW · GW

But what does it matter, if only 1% of the links that should end up here actually do? Hacker News is a proven model; people not clicking without summaries isn't an issue.

Comment by Alexandros on [Meta] The Decline of Discussion: Now With Charts! · 2014-06-05T13:55:54.989Z · LW · GW

And growth in status of the survey.

Comment by Alexandros on [Meta] The Decline of Discussion: Now With Charts! · 2014-06-05T13:53:03.559Z · LW · GW

I was, for a period, a major submitter of links to Hacker News. The process for doing that with the bookmarklet they provide is literally two clicks and 10 seconds. How many of each does it take on LW today?

Comment by Alexandros on [Meta] The Decline of Discussion: Now With Charts! · 2014-06-05T04:43:15.990Z · LW · GW

That's the problem. Posting a summary is a trivial (or not so trivial) inconvenience.

Comment by Alexandros on All discussion post titles, points, and dates as an excel sheet · 2014-06-03T20:41:34.341Z · LW · GW

Posts per month, upvotes per month. (I understand score is positive minus negative, but that cancels out.) Potentially comments per month too, but I didn't fetch that data. Substitute your preferred granularity for "month", of course.
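
For concreteness, a minimal sketch of that analysis in Python/pandas; the CSV filename and column names are assumptions about how the sheet was exported, not its actual headers:

    import pandas as pd

    # Hypothetical export of the spreadsheet.
    df = pd.read_csv("lw_discussion_posts.csv", parse_dates=["date"])

    monthly = df.groupby(df["date"].dt.to_period("M")).agg(
        posts=("title", "count"),      # posts per month
        total_score=("score", "sum"),  # net upvotes per month
    )
    print(monthly)  # use to_period("W") or ("Q") for a different granularity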

Comment by Alexandros on Open thread, 3-8 June 2014 · 2014-06-03T19:57:09.463Z · LW · GW

But if we're talking startups, I'd probably look at where the money is and go there. Could this be applied to groups of traders? C-level executives? Medical teams? Maybe some other target group is both flush with cash and an early adopter of new tech?

Comment by Alexandros on Open thread, 3-8 June 2014 · 2014-06-03T19:55:15.051Z · LW · GW

Whatever team state matters: maybe online/offline, maybe emotional states, maybe biofeedback (hormones? alpha waves?), but cross-team. Maybe just "how many production bugs we've had this week".

Comment by Alexandros on Open thread, 3-8 June 2014 · 2014-06-03T18:38:50.181Z · LW · GW

I've thought about taking this idea further.

Think of applying the anklet idea to groups of people. What if soccer players could know where their teammates are at any time, even when they can't see them? Now apply this to firemen, or infantry. This is the startup I'd be doing if I weren't doing what I'm doing. Plugging data feeds right into the brain, and in particular doing this for groups of people, sounds like the next big frontier.

Comment by Alexandros on All discussion post titles, points, and dates as an excel sheet · 2014-06-03T18:35:07.952Z · LW · GW

True. I guess I was being a bit cheeky. LW is no longer being "kept" at all AFAICT (or is just on maintenance); I just wanted to see whether it's on an upward or downward trajectory. I obviously think there is a problem, and I have a solution to suggest, but I wanted to double-check my intuition against the numbers.

Comment by Alexandros on All discussion post titles, points, and dates as an excel sheet · 2014-06-03T18:33:32.945Z · LW · GW

Post updated with code, go crazy! Number of comments is another column I'd add if I ran it again.

Comment by Alexandros on All discussion post titles, points, and dates as an excel sheet · 2014-06-03T18:32:59.625Z · LW · GW

done

Comment by Alexandros on All discussion post titles, points, and dates as an excel sheet · 2014-06-03T18:24:54.838Z · LW · GW

Well, it's not being "kept" anymore, for one, but I didn't need analysis for that. I guess the question is whether it is flourishing or dying out.

Comment by Alexandros on Open Thread February 25 - March 3 · 2014-03-11T01:46:39.954Z · LW · GW

I am not convinced it is the optimal route to startup success. If it were, I would be doing it in preference to my current startup. It is highly uncertain and requires what looks like basic research, hence the altruism angle. If it succeeds, yes, it should make a lot of money, and nobody should deprive its creators of the fruits of their labour.

Comment by Alexandros on Open Thread February 25 - March 3 · 2014-02-28T07:10:51.578Z · LW · GW

Obviously you'd take a different angle with the marketing.

Off the cuff, I'd pitch it as a hands-off dating site. You just install a persistent app on your phone that pushes a notification when it finds a good match. No website to navigate, no profile to fill in, no message queue to manage.

Perhaps market it to busy professionals. Finance professionals may be a good group to start with (busy, high-status, analytical).

There would need to be some way to deal with the privacy issues though.

Comment by Alexandros on Open Thread February 25 - March 3 · 2014-02-28T07:05:58.358Z · LW · GW

Fantastic, thanks!

Comment by Alexandros on Open Thread February 25 - March 3 · 2014-02-28T07:03:29.713Z · LW · GW

Well, at this point we're weighing anecdotes, but..

  1. Yes! They do tend to push their rationality to the limit. Hypothesis: knowing more about rationality can help push up the limit of how rational one can be.

  2. Yes! It's not about rationality alone. Persistent determination is quite possibly more important than rationality and intelligence put together. But I posit that rationality is a multiplier, and also tends to filter out the most destructive outcomes.

In general, I'd love to see some data on this, but I'm not holding my breath.

Comment by Alexandros on Open Thread February 25 - March 3 · 2014-02-27T11:00:18.275Z · LW · GW

I question the stat that says startups have a 1% success rate. I would need to see the reference, but one I had access to basically said "1% match or exceed the projections shown to investors", or some such. Funnily enough, by that metric Facebook is a failure (they missed the goal they set in the convertible note signed with Peter Thiel). For a more reasonable measure of success, I would expect double-digit success rates from decently run companies. If a driven, creative rationalist is running a company, I would expect a very high degree of success.

Another thing much more common in rationalists than in the general population is the ability to actively solicit feedback, reflect, and self-modify. This is surprisingly rare, and incredibly vital in a startup.

Success at startups is not about never doing stupid things. I've made many, MANY mistakes. It's about not doing things stupid enough to kill your company. Surprisingly, the business world has a lot of tolerance for error, as long as you avoid the truly bad ones.

Comment by Alexandros on Open Thread February 25 - March 3 · 2014-02-27T08:54:49.871Z · LW · GW

Well, it's more than a hypothesis; it's a goal. If it doesn't work, then it doesn't, but if it does, it's pretty high impact (though not existential-risk-avoidance high in and of itself).

Finding a good match has made a big subjective difference for me, and there's a case that it's made a big objective difference (but then again, I'd say that), and I had to move countries to find that person.

Yeah, maybe the original phrasing is too strong (blame the entrepreneur in pitch mode), but the 6th paragraph does say that it's an off-chance it can be made to work. A high potential for improvement and a high difficulty in realising it are not mutually exclusive.

Comment by Alexandros on Open Thread February 25 - March 3 · 2014-02-27T07:36:29.199Z · LW · GW

This isn't "I'm smart and rules don't apply". Smartness alone doesn't help.

But, to put it this way, if rationality training doesn't help improve your startup's odds of success, then there's something wrong with the rationality training.

To be more precise, in my experience, a lot of startup failure is due to downright stupidity, or just ignoring the obvious.

Also, anecdotally, running a startup has been the absolute best on-the-job rationality training I've ever had.

Shockingly, successful entrepreneurs I've worked with score high on my rationality test, which consists of how rarely they say things that are outright red flags, and how well-reasoned their suggested courses of action are. In particular, one of our investors is the closest approximation to a Bayesian superintelligence I've ever met. I can feed him data and news from the past week and almost hear the weighting of various outcomes shift in his predictions and recommendations.
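
A toy sketch of that kind of reweighting, with outcomes and numbers invented purely for illustration:

    def bayes_update(priors, likelihoods):
        """Reweight outcome hypotheses by how well each explains the new evidence."""
        unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
        total = sum(unnormalised.values())
        return {h: p / total for h, p in unnormalised.items()}

    beliefs = {"deal closes": 0.5, "deal stalls": 0.3, "deal dies": 0.2}
    # Hypothetical news: "the partner asked for customer references this week",
    # judged far more likely if the deal is on track to close.
    evidence_fit = {"deal closes": 0.8, "deal stalls": 0.4, "deal dies": 0.1}
    print(bayes_update(beliefs, evidence_fit))
    # -> roughly {'deal closes': 0.74, 'deal stalls': 0.22, 'deal dies': 0.04}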

In short,

  1. Rationalists are more likely to think better and avoid obvious errors.
  2. Thinking better improves the chances of startup success.
  3. Therefore, rationalists have better chances of startup success.

I do understand this sounds self-serving, but I also try to avoid the sin of underconfidence. In my experience, the gap in quality of thinking between rationalists and the average person tends to be like the gap in quality of conversation between here and YouTube. The problem is when rationalists bite off more than they can chew in terms of goals, but that's a separate problem.