AI Impacts research bounties

Published 06 August, 2015; last updated 28 September, 2017

We are offering rewards for several inputs to our research, described below. These offers have no specific deadline except where noted. We may modify them or take them down, but will give at least one week’s notice here unless there is strong reason not to. To submit an entry, email katja@intelligence.org. There is currently a large backlog of entries to check, so new entries will not receive a rapid response.

1. An example of discontinuous technological progress ($50-$500)

This bounty offer is no longer available after 3 November 2016.

We are interested in finding more examples of large discontinuous technological progress to add to our collection. We’re offering a bounty of around $50-$500 per good example.

We currently know of two good examples (and one moderate example):

  1. Nuclear weapons discontinuously increased the relative effectiveness of explosives.
  2. High temperature superconductors led to a dramatic increase in the highest temperature at which superconducting was possible.

To assess discontinuity, we’ve been using “number of years’ worth of progress at past rates”, as measured by any relevant metric of technological progress. For example, the discovery of nuclear weapons was equal to about 6,000 years’ worth of previous progress in the relative effectiveness of explosives. However, we are also interested in examples that seem intuitively discontinuous, even if they don’t exactly fit the criterion of being a large number of years’ worth of progress in one go.
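For concreteness, here is a minimal sketch (not part of the original bounty text) of how one might compute “years’ worth of progress at past rates” from a time series of a metric. It assumes a roughly linear past trend; the function name and the numbers in the usage example are purely illustrative.

```python
def years_of_progress(history, new_value):
    """Estimate how many years of past-rate progress a jump to `new_value` represents.

    history: list of (year, metric_value) pairs covering the period before the jump,
             assumed to follow a roughly linear trend over time.
    """
    years = [y for y, _ in history]
    values = [v for _, v in history]
    # Average past rate of progress (metric units per year) over the historical window.
    past_rate = (values[-1] - values[0]) / (years[-1] - years[0])
    jump = new_value - values[-1]
    return jump / past_rate

# Example with made-up numbers: if a metric improved by 2 units per year historically
# and a new technology jumps it by 120 units, that is about 60 years of progress at past rates.
history = [(1900, 0), (1940, 80)]        # 2 units/year on average
print(years_of_progress(history, 200))   # jump of 120 units -> 60.0
```

For metrics that improve roughly exponentially rather than linearly, the same calculation would more naturally be run on the logarithm of the metric values rather than the raw values.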

Things that make examples better:

  1. Size: Better examples represent larger changes. More than 20 times normal annual progress is ideal.
  2. Sharpness: Better examples happened over shorter periods. Over less than a year is ideal.
  3. Breadth: Metrics that measure larger categories of things are better. For example, fast adoption curves for highly specific categories (say a particular version of some software) are much less interesting than fast adoption curves for much broader categories (say a whole category of software).
  4. Rarity: As we receive more examples, the interestingness of each one will tend to decline.

AI Impacts is willing to pay more for better examples: we will judge how interesting your example is and reward you accordingly. We will accept examples that violate our stated preferences but satisfy the spirit of the bounty. Our guess is that we would pay about $500 for another example as good as nuclear weapons.

How to enter: all that is necessary to submit an example is to email us a paragraph describing the example, along with sources to verify your claims (such sources are likely to involve at least one time series of success on a particular metric). Note that an example should be of the form ‘A caused abrupt progress in metric B’. For instance, ‘The boliolicopter caused abrupt progress in the maximum rate of fermblangling at sub-freezing temperatures’.

2. An example of early action on a risk ($20-$100)

This bounty offer is no longer available after 3 November 2016.

We want: a one-sentence description of a case where at least one person acted to avert a risk that was at least fifteen years away, along with a link or citation supporting the claim that the action preceded the risk by at least fifteen years.

We will give: up to $100, with higher sums for examples that are better according to our judgment (see criteria for betterness below), and which we don’t already know about. We might go over $100 for exceptionally good examples.

Further details

Examples are better if:

  1. The risk is more novel: relatively similar problems have not arisen before, and would probably not arise sooner than fifteen years in the future. e.g. Poverty in retirement is a risk people often prepare for more than fifteen years before it befalls them; however, it is not very novel, because other people already face an essentially identical risk, and have done so many times before.
  2. The solution is more specific: the action taken would not be nearly as useful if the risk disappeared. e.g. Saving money to escape is a reasonable response to expecting your country to face civil war soon. However, saving money is fairly useful in any case, so this solution is not very specific.
  3. We haven’t received a lot of examples: as we collect more examples, the value of each one will tend to decline.

Some examples:

  1. Leo Szilard’s secret nuclear patent: the threat of nuclear weapons was quite novel. It’s unclear when Szilard expected such weapons, but quite plausibly at least fifteen years after 1934, when he filed the patent. The secret patent does not seem broadly useful, though it was useful for encouraging more local nuclear research, which is somewhat more broadly useful than secrecy per se. More details in this report. This is a reasonably good example.
  2. The Asilomar Conference on recombinant DNA: the risk (genetically engineered pandemics) was arguably quite novel, and the solution was reasonably specific (safety rules for dealing with recombinant DNA). However, the risks people were concerned about were immediate, rather than decades hence. More details here. This is not a good example.

Evidence that the example is better in the above ways is also welcome, though we reserve the right not to explore it fully.
