Promising research projects

Published 05 April, 2018; last updated 26 March, 2021

This is an incomplete list of concrete projects that we think are tractable and important. We may do any of them ourselves, but many also seem feasible to work on independently. Those we consider especially well suited to independent work are marked Ψ. More potential projects are listed here.


Review the literature on forecasting (in progress) Ψ

Summarize what is known about procedures that produce good forecasts, and measures that are relatively easier to forecast. This may involve reading secondary sources, or collecting past forecasts and investigating what made some of them successful.

This would be an input to improving our own forecasting practices, and to knowing which other forecasting efforts to trust.

We have reviewed some literature associated with the Good Judgment Project in particular.

Review considerations regarding the chance of local, fast takeoff Ψ

We have a list of considerations here. If you find local, fast take-off likely, check whether the considerations that lead you to this view are represented. Alternatively, interview someone else with a strong position about the considerations they find important. If there are any arguments or counterarguments that you think are missing, write a short page explaining the case.

Collecting arguments on this topic is helpful because opinion among well-informed thinkers seems to diverge from what would be expected given the arguments that we know about. This suggests that we are missing important considerations, which we would need in order to assess the chance of local, fast takeoff well.

Quantitatively model an intelligence explosion Ψ

An intelligence explosion (or ‘recursive self-improvement’) consists of a feedback loop in which research effort produces scientific progress, which produces improved AI performance, which in turn produces more effective research effort, because the researchers involved are themselves artificial.

Though this loop does not yet exist, relatively close analogues to all of the parts of it already occur: for instance, researcher efforts do lead to scientific progress; scientific progress does lead to better AI; better AI does lead to more capacity at the kinds of tasks that AI can do.

Collect empirical measurements of proxies like these, for different parts of the hypothesized loop (each part of this could be a stand-alone project). Model the speed of the resulting loop if they were put together, under different background conditions.
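A minimal sketch of how such a model might be structured, assuming a single AI ‘capability’ level that adds to research effort, with diminishing returns to total effort. The functional form and all parameter values are illustrative placeholders rather than empirical estimates:

```python
# Toy simulation of an intelligence-explosion feedback loop.
# The functional form and all parameters are illustrative placeholders;
# the project would replace them with empirically grounded estimates.

def simulate(r=0.4, diminishing=0.5, human_effort=1.0, dt=0.01, max_years=50):
    """Simulate AI capability feeding back into research effort.

    r            -- research productivity (progress per unit of effective effort)
    diminishing  -- exponent < 1 modeling diminishing returns to total effort
    human_effort -- fixed baseline effort contributed by human researchers
    """
    capability, t = 1.0, 0.0          # start near 'human parity' in arbitrary units
    history = [(t, capability)]
    while capability < 1e6 and t < max_years:
        effort = human_effort + capability        # AI systems add to research effort
        progress = r * effort ** diminishing      # diminishing returns to effort
        capability *= 1 + progress * dt           # progress compounds on capability
        t += dt
        history.append((t, capability))
    return history

if __name__ == "__main__":
    history = simulate()
    for threshold in (10, 100, 1_000, 10_000, 100_000, 1_000_000):
        hit = next((t for t, c in history if c >= threshold), None)
        if hit is not None:
            print(f"capability reaches {threshold:>9,}x after {hit:5.2f} years")
```

Each empirical sub-project would pin down one of these placeholder quantities; varying the others then shows how sensitive the speed of the loop is to each part.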

This would give us a very rough estimate of the contribution of intelligence explosion dynamics to the speed of intelligence growth in a transition to an AI-based economy. Also, a more detailed model may inform our understanding of available strategies to improve outcomes.

Interview AI researchers on topics of interest Ψ

Find an AI researcher with views on matters of interest (e.g. AI risk, timelines, the relevance of neuroscience to AI progress) and interview them. Write a summary or, with their permission, a transcript. Some examples are here, here, and here. (If you do not expect to run an interview well enough to make a good impression on the interviewee, consider practicing elsewhere first, so as not to discourage future interactions with similar researchers.)

Talking to AI researchers about their views can be informative about the nature of AI research (e.g. What problems are people trying to solve? How much does it seem like hardware matters?), and can provide an empirically informed take on questions and considerations of interest to us (e.g. whether current techniques seem far from producing general intelligence). Such interviews also tell us about the state of opinion within the AI research community, which may be relevant in itself.

Review what is known about the relative intelligence of humans, chimps, and other animals (in progress)

Review efforts to measure animal and human intelligence on a single scale, and efforts to quantify narrower cognitive skills across a range of animals.

Humans are radically more successful than other animals, in some sense. This is taken as reason to expect that small modifications to brain design (for instance whatever evolution did between the similar brains of chimps and humans) can produce outsized gains in some form of mental performance, and thus that AI researchers may see similar astonishing progress near human-level AI.

However, without defining or quantifying the mental skills of any relevant animals, it is unclear a) whether individual intelligence in particular accounts for humans’ success (rather than e.g. the ability to accrue culture and technology), b) whether the gap in capabilities between chimps and humans is larger than expected (maybe chimps are also astonishingly smarter than smaller mammals), or c) whether the success stems from something that evolution was ‘intentionally’ making progress on. These things are all relevant to the strength of an argument for AI ‘fast take-off’ based on human success over chimps (see here).

Review explanations for humans’ radical success over apes (in progress)

Investigate what is known about the likely causes of human success, relative to that of other similar animals. In particular, we are interested in how likely improvement in individual cognitive ability is to account for this (as opposed to say communication and group memory abilities).

This would help resolve the same issues described in the last section (‘Review what is known about the relative intelligence of humans, chimps, and other animals’).

Collect data on time to cross the human range on intellectual skills where machines have surpassed us (in progress) Ψ

For intellectual skills where machines have surpassed humans, find out how long it took to go from the worst performance to average human skill, and from average human skill to superhuman skill.
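As a sketch of the kind of summary the collected data could feed into (the skills and milestone years below are hypothetical placeholders, not collected data):

```python
# Hypothetical milestone years, for illustration only; the project would
# replace these with dates collected from the historical record.
milestones = {
    "example skill A": {"first_system": 1980, "average_human": 1995, "superhuman": 1997},
    "example skill B": {"first_system": 1960, "average_human": 1990, "superhuman": 2015},
}

for skill, year in milestones.items():
    below_to_average = year["average_human"] - year["first_system"]
    average_to_super = year["superhuman"] - year["average_human"]
    print(f"{skill}: {below_to_average} years from first system to average human skill, "
          f"then {average_to_super} years from average human to superhuman skill")
```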

This would contribute to this project.

Measure the importance of hardware progress in a specific narrow AI trajectory Ψ

Take an area of AI progress, and assess how much of annual improvement can be attributed to hardware improvements vs. software improvements, or what the more detailed relationship between the two is.
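One simple way such an attribution could be framed, assuming (purely for illustration) that performance in the chosen area scales as compute raised to some exponent times an algorithmic-efficiency factor, so that in log terms the observed annual improvement splits into a hardware part and a software residual:

```python
import math

def split_progress(overall_growth, compute_growth, alpha=0.5):
    """Split an observed annual improvement into hardware and software parts.

    Assumes performance ~ compute**alpha * algorithmic_efficiency, so that
    log(overall_growth) = alpha * log(compute_growth) + log(software_growth).
    alpha is an assumed returns-to-compute exponent for the chosen domain.
    """
    log_total = math.log(overall_growth)
    log_hardware = alpha * math.log(compute_growth)
    log_software = log_total - log_hardware
    return {
        "hardware_fraction": log_hardware / log_total,
        "software_fraction": log_software / log_total,
        "implied_software_growth": math.exp(log_software),
    }

# Hypothetical numbers: the performance metric improved 2x over a year while
# the compute used grew 1.5x. These are placeholders, not measurements.
print(split_progress(overall_growth=2.0, compute_growth=1.5, alpha=0.5))
```

In practice the relationship is unlikely to be this clean, and estimating the exponent (or a more detailed functional form) would itself be part of the project.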

Understanding the overall importance of hardware progress and software progress (and other factors) in overall AI progress lets us know to what extent our future expectations should be a function of expected hardware developments, versus software developments. This both alters what our timelines look like (e.g. see here), and tells us what we should be researching to better understand AI timelines.
