Influencing the direction of research and the choice of research questions

In this post, first published on the EA Forum, I explore potential strategies for improving the choice of research questions in scientific research.

My main takeaways after working on this post are:

  • Influencing the direction of research seems, to me, more pressing and more neglected than working on improving other aspects of the research system.
  • Influencing research policy seems like a potentially valuable and rather unexplored area.
  • I also think it could be worth looking into influencing academic culture and grantmaking criteria to improve priorities between fields and projects.
  • Improved roadmapping resources, through online databases or an expanded type of review article, could support better decision-making, particularly for actors that are already value-aligned.
  • I think people who consider starting initiatives in this area ought to think very carefully about whether their project, if it succeeded, would be impactful enough to be worth the effort and the high risk of failure (Ozzie Gooen’s post on Flimsy Pet Theories, Enormous Initiatives comes to mind).

Note that when I write about “scientific research” or “science”, I mean this in a broad sense that is not meant to exclude the social sciences.

Throughout the post I’ll use the thinking emoji 🤔 to mark out sections where I describe what I think could be an opportunity for a new project, to make these easy to find when scrolling through.

Selecting the “right” research questions

Why this is relevant

Selection of research questions occurs at many levels, from the most general when high-level political decisions are made about prioritizing e.g. climate research, to the most specific when an individual researcher formulates a research question for an upcoming project. In between these extremes are the decisions made regarding funding, hiring and publishing, where the peer review process often plays a central part.

Selection and framing of research questions are key to setting the direction of scientific research, and thereby have a huge influence on how much value is produced by the resources spent on it. While I hope this post can have some value for explicitly EA-aligned research, my main focus is on influencing the general scientific research system. For reference, a recent estimate is that $46 billion is now committed to EA (including everything, not just research), while an estimated $1700 billion is spent annually on general scientific research and development. While we can hopefully expect EA-funded research to generate more impact than average per dollar spent, research that happens outside EA could easily be as important, or more important, to the direction that history will take.

As a definition of what a good or “better” research question would be, I will stick to my previous position that better science is science that more effectively improves the lives of sentient beings (keeping improved “understanding of the universe” as an important instrumental goal). I have come across some interesting criticism of that position by Philip Kitcher in his book Science in a Democratic Society, which is well worth a read for anyone who wants to explore more complex proposals for how to set priorities in scientific research. For the purposes of this post, however, I’ll use the term “value-aligned” to mean aligned with the aim of improving the lives of sentient beings (which I understand as aligned with the typical values of the EA community).

To be a bit more specific, I think science would produce more value if more priority were given to research that meets the needs and preferences of the global poor, as well as those of future people.

Why I think it’s more pressing to improve what we do research on than to improve quality and efficiency in research

Despite the importance of research questions, this area seems to be getting much less attention than other aspects of “improving research”. There is a growing reform movement in science, generally going by the label metascience, with conferences such as Metascience and Aimos and organizations such as the Center for Open Science or METRICS, but the focus of these initiatives is not on setting priorities for what we should try to learn through scientific research, but rather on improving its quality, reliability and efficiency. The problems in focus are, for example, how to make sure that published results can be reproduced by a different research group, that statistical methods are used in appropriate ways, or that results and data are shared effectively. These are important and interesting challenges, but if the research questions themselves are not relevant and valuable to answer, it does not really matter how reliable or accessible the results are.

I would argue that it is more pressing to work on improving the direction of research by improving what fields and research questions are prioritized, than to add further resources to improving the quality and efficiency of research overall. 

This is mainly based on two assumptions: 

1) I think there is a lot of room for improvement in how we prioritize different research areas and research questions. This means that additional work on quality and efficiency might be wasted on research areas that are not very valuable anyway.

2) I think the existing metascience reform movement has good momentum and will be able to make progress on quality and efficiency. This means that the marginal utility of additional work on these issues is lower, even when applied to a comparatively valuable research field.

I have noticed that within EA, too, a lot of people who are interested in improving science are drawn to classical metascience issues such as open access or reproducibility. What I would like to see in these cases is really explicit theories of change, stating the logical steps all the way to the ultimate objective of an initiative. I think there is a risk that people who (like myself) have had disappointing and frustrating experiences of academic research sometimes fail to take the necessary step back and consider that what has been most frustrating in our personal experiences might not be what is most in need of fixing.

My guess regarding why the (non-EA) metascience reform movement focuses on quality and efficiency rather than on priorities is that this has to do with the tradition of “value-free” science and ideals of scientific freedom. It is hard to discuss improvements to research questions without getting into a discussion of values – how do we determine what research is most valuable, and who is to make that decision? I think many researchers find it more comfortable to address quality issues or open access than to ask fundamental questions about what research is valuable to pursue at all.

There is a previous, very interesting, forum post called Disentangling “Improving Institutional Decision-Making” which differentiates between technical improvement of decision quality on the one hand and improvement in terms of greater value-alignment on the other. When I approach the improvement of research questions, both of these perspectives apply, though I will focus mainly on value-alignment. An example of improved value-alignment would be a grantmaker in health research moving from a focus on national disease burden to global disease burden. An example of technical improvement of decision quality would be researchers getting access to better information about what research projects are already underway, so that they could make more informed decisions when selecting research questions.

Another related post is Improving the future by influencing actors’ benevolence, intelligence, and power. Improvement of research questions would correspond to improving the benevolence and/or intelligence of the actors that influence the scientific research agenda.

What could we do to improve the choice of research questions?

In the upcoming sections, I suggest some possible strategies for improving the choice of research questions. The first sections deal with research policy, grantmaking, and academic culture, where I believe there could be opportunities to improve the value-alignment or benevolence of important decisions and incentive structures. The final section covers “roadmapping”: improving the understanding of the current knowledge landscape and knowledge gaps, which could offer some opportunities for value-alignment but probably has more potential for improving the decision quality or intelligence of decision makers that are already reasonably value-aligned.

Research policy

Influencing which areas get priority

Research policy is decided both at national levels and in bodies such as the EU, the UN and the OECD. Research policy documents generally highlight specific priority areas to which a lot of funding is then directed – see for example the current research and innovation strategy by the European Commission, which guides the spending of €95.5 billion in the EU funding programme Horizon. Priority areas can be defined at a very general level, such as promoting a circular economy, or at a more specific level, as when the World Health Organization makes a list of “priority pathogens” to focus the development of new antibiotics.

Influencing which areas of research get priority in this way seems like an important opportunity to improve the value-alignment of research questions, given that we could identify changes that would be desirable. From my personal experience working with research funding in a limited field (antibiotic resistance), I have the impression that it can be rather straightforward to identify desirable changes at least within a field. This is not to say that it would be easy to figure out the optimal resource allocation within the field, but if the current situation is far from optimal it might not be so difficult to identify at least some changes that would be improvements (such as prioritizing funding for projects that address the disease burden of low-income countries).

I am uncertain about the tractability of identifying desirable changes in resource allocation between fields, but it might be a similar situation where one can figure out some plausible improvements even though the optimal allocation is uncertain. For example, one might push for changes that would benefit AI alignment research or biosafety.

Something interesting about research policy is that it doesn’t seem to get much media attention and is rarely the topic of political debate (with some exceptions, see the next section), while at the same time these policies seem like they could have a large impact on society. This might be an indication that influencing research policy could be a rather tractable way to influence both the near-term and the long-term future – if other stakeholders don’t have strong positions on what research policy should look like, it should be possible to influence the outcomes.

One established type of lobbying on research policy worth mentioning is the representation of patient interests in health-related research. An example is the James Lind Alliance, a charity that combines the perspectives of patients, clinicians and carers to identify what they think are the most important unanswered research questions, but there are also many patient advocacy organizations that focus on a specific type of condition (e.g. the American Cancer Society or the American Heart Association).

🤔 Identify some desirable changes in broad science policy and assess how impactful they would be in expectation.

🤔 Investigate what allocation seems desirable between research with unpredictable ends and research aiming to solve specific problems or challenges (roughly the explore-exploit dilemma of research agenda setting). This should include a review of previous work on this question.

🤔 Identify the most significant institutions for shaping global research policy and investigate how their decisions are made, the size of their budgets and their priority areas. This should include a survey of previous research on science policy development (e.g. by SPRU).

🤔 Investigate the implementation of previous research policies to understand how the wording of policy documents translates into specific funding allocations, research project proposals, and research results and publications.

🤔 Develop cost-effectiveness estimates for science policy work – how much in resources seems to be needed to achieve relevant changes? How does this compare to other policy work, or to other research improvement work? Study the track record of existing organizations that have attempted to influence science policy (e.g. patient advocacy organizations). A rough back-of-envelope sketch of what such an estimate involves follows after this list.

🤔 Identify the most relevant career paths for influencing research policy.
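As a concrete illustration of the cost-effectiveness idea above, here is a minimal back-of-envelope sketch in Python. All parameter names and numbers are hypothetical placeholders of my own, not estimates; the point is only to show which quantities such an estimate would need to pin down.

```python
def policy_advocacy_cost_effectiveness(
    advocacy_cost,          # total cost of the advocacy effort ($)
    p_success,              # probability the targeted policy change is adopted
    budget_redirected,      # research funding redirected per year if adopted ($)
    years_effect_lasts,     # how long the change persists (years)
    value_gain_per_dollar,  # extra value per redirected dollar vs. the counterfactual
):
    """Hypothetical back-of-envelope model: expected extra value created
    per dollar spent on science-policy advocacy."""
    expected_value = (p_success * budget_redirected
                      * years_effect_lasts * value_gain_per_dollar)
    return expected_value / advocacy_cost

# Purely illustrative placeholder numbers, not estimates:
print(policy_advocacy_cost_effectiveness(
    advocacy_cost=2e6, p_success=0.05, budget_redirected=5e8,
    years_effect_lasts=5, value_gain_per_dollar=0.2))
```

The arithmetic is trivial; the hard part of the project idea is estimating the inputs and then comparing the resulting ratio across different policy targets and against other research improvement work.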

Influence political attitudes and legislation

While some areas of research become politically popular and prioritized for public funding, the other end of the spectrum would be areas that become controversial, unpopular, regulated or banned. This can make certain research questions unattractive or impossible to pursue even though they might be important and potentially impactful to work on.

One example is research on psychedelic substances as potential drugs for conditions such as treatment-resistant depression or addiction. This seems like a field that could achieve huge gains in welfare if such treatments turned out to be effective, but research in this field has been extremely limited by narcotics legislation. NGOs such as the UK-based Beckley Foundation and the US-based MAPS appear to have played important roles in the process of making such research possible once again.

Other areas, while not restricted by law, might just be perceived as unattractive to fund and support. Geoengineering research seems like one such area: a recent example is how local protests stopped a test in northern Sweden. Such popular opposition to a field of research could potentially also lead to legislation against it.

🤔 Study what strategies have been successful for affecting policy on psychedelic research. Are there lessons that would be useful for making controversial but important geoengineering research more feasible?

Influencing funding at grantmaker level

At the grantmaker level, I believe there is often opportunity to improve both value-alignment and decision-making quality. Grant decisions are made differently at different funding agencies, but a common process is to use some kind of peer review board consisting partly or completely of other researchers (“peers”). The reviewers are expected to have gone over the application materials in advance of the meeting, and to make the decision together during the meeting. There are often explicit criteria for project selection that still leave lots of room for interpretation (e.g. “intellectual merit”, which is a criterion for NSF grants). The discussion can be more or less structured depending on the funding agency and the participants themselves, but it seems fair to say that the decisions are often influenced as much by the social dynamics of the group and the personal preferences of the individuals as by rational reasoning and explicit criteria (the book How Professors Think gives detailed insight into the process of social science review boards in the US, for anyone who wants a deep dive).

Peer review has been criticized on many counts, such as racial bias, inconsistency, inefficiency and risk-aversion. There are several existing initiatives to try to innovate and improve the grant-making process, for example by using lotteries, but although these would affect the incentives that researchers are exposed to, the focus is rarely on influencing the direction of research or the selection of research questions. In other words: these initiatives are about improving intelligence or “technical decision quality”, rather than improving the benevolence or value-alignment of grant decisions.
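To make one of these innovations concrete: a variant that is sometimes discussed (and has reportedly been trialled by a few funders) is a partial lottery, where clearly outstanding proposals are funded outright and the remaining slots are drawn at random among proposals above a quality floor. The sketch below is only my illustration of that general idea; the function, thresholds and scores are hypothetical and not any particular funder’s procedure.

```python
import random

def partial_lottery(proposals, fund_count, top_quality, floor_quality, seed=None):
    """Illustrative partial lottery: fund clear top proposals outright,
    then fill remaining slots by random draw among proposals that pass
    a minimum quality bar. `proposals` is a list of (id, review_score)
    pairs; all thresholds here are hypothetical."""
    rng = random.Random(seed)
    ranked = sorted(proposals, key=lambda p: p[1], reverse=True)
    funded = [p for p in ranked if p[1] >= top_quality][:fund_count]
    pool = [p for p in ranked if floor_quality <= p[1] < top_quality]
    remaining = fund_count - len(funded)
    if remaining > 0 and pool:
        funded += rng.sample(pool, min(remaining, len(pool)))
    return funded

# Example: award 6 grants among 10 scored proposals (made-up scores)
proposals = [(f"P{i}", s) for i, s in enumerate(
    [9.1, 8.7, 8.6, 7.9, 7.8, 7.5, 7.4, 6.9, 5.2, 4.0])]
print(partial_lottery(proposals, fund_count=6, top_quality=8.5,
                      floor_quality=7.0, seed=1))
```

Note that this kind of change mainly targets the technical quality and fairness of the decision; on its own it says nothing about whether the funded questions are value-aligned, which is the point made above.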

Improving research questions in a way that promotes welfarist value creation would probably involve work on selection criteria, both in terms of setting criteria that are value-aligned and in terms of making sure the selection process implements the criteria in a reliable and constructive way. My personal experience of assisting in work on research applications is that there is also a very real risk that well-intended and fundamentally constructive criteria turn into counter-productive micromanagement, so this type of work should be done with care.

🤔 Investigate what decision criteria and decision processes are used by the most important research grantmakers.

🤔 Assess the expected impact of different existing (or novel) grant-making processes on research question selection and on the direction of research.

🤔 Assess the expected outcomes of changing the composition of review boards, for example by involving more non-experts in the grant review process to represent a wider or different range of values and priorities.

🤔 Develop and advocate for improved decision criteria and processes to be used by research grantmakers.

Influence academic culture: Promoting discussion on values and prioritization

Currently, the reform movements in scientific research focus on value-free concepts such as quality, reproducibility and transparency. Could we imagine similar movements that promote a selection of research questions that corresponds better to global and future needs, and if so, would that be an improvement on the current situation?

The previously mentioned philosopher of science Philip Kitcher proposes that the setting of the research agenda, and prioritization between different fields and research questions, should be done in discussions involving a representative group of the (global) population who have been tutored to understand the complexity of the issues. Though this seems practically very difficult to achieve, his writing might be taken as a general argument for a more explicit discussion of values and priorities, one that takes the interests of all stakeholders (including the global poor and future people) into account.

Kitcher does not propose welfarism as the ultimate goal, arguing that people also value things other than welfare and that such values (e.g. curiosity) should be accounted for. However, it seems to me that a shift in the direction of involving all stakeholders would still result in much higher alignment with “EA values” than the current situation. I think there are (at least) two significant challenges with Kitcher’s approach: firstly, to establish a discussion where the interests and values of all stakeholders are represented in a reasonable way, and secondly, to make sure that everyone in the discussion understands the science well enough for the discussion to be meaningful. Still, even if it cannot be done perfectly, some movement in this direction might be valuable.

A more limited but practical approach is taken in a recent conference paper co-authored by Alexander Herwix (which also references Effective Altruism) that proposes a framework for discussing research question selection. The authors argue for a more systematic approach, both at the level of specific research projects and at the broader level of setting a direction for a larger research programme. The paper is about information systems research, but the framework seems applicable to other fields as well. The framework is similar to a business model canvas and provides a basis for discussion where the outcome depends on the values and priorities of the participants, but the framing nudges toward value-alignment by introducing criteria such as scale and tractability.

My best guess is that a more explicit discussion of values and prioritization in science would lead to higher value-alignment. This is based on my belief that most researchers would not intentionally and explicitly disregard the needs of the poor, the underprivileged or future people. I don’t think the consequences of such a discussion are obvious, though, and there is a possibility that it could backfire if complex questions become politicized and reduced to Twitter debates, which in turn could make science policy more political and less tractable to work with.

🤔 Do a review of what contexts already exist where academics and/or non-experts are involved in discussions about the research agenda and the selection of research questions.

🤔 Try out setting up discussions on research priorities with selected groups of academics, for example with organizations such as the Global Young Academy, to get a better understanding of how such discussions could develop.

🤔 Investigate research using animal experiments as a test case for tutoring non-researchers to participate in discussions about the value of answering specific research questions. The knowledge gap between laypeople and researchers on ethics boards is a known obstacle to meaningful discussion in this context, and this could be a test case for developing training material to improve such discussions. The fact that there is already an established forum in the form of ethics committees, with laypeople who are willing to dedicate time to this work, is an advantage, though the political (emotional?) sensitivity of animal experiments could be a disadvantage.

Roadmapping: Increasing the understanding of current knowledge, ongoing research and relevant knowledge gaps

For decision makers that are (reasonably) value-aligned, it could make sense to improve their decision-making with regard to research questions simply by improving their access to clear information about current knowledge, ongoing research and relevant knowledge gaps that could be explored.

I expect initiatives in this section to be most impactful if they target a particularly value-aligned field of research or decision maker. It could also be possible, though, to implement them in a way that nudges decision makers towards value-aligned conclusions, for example by promoting the use of key indicators or measures that make such values salient.
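As one hypothetical example of such an indicator, a roadmapping resource could attach a rough scale × tractability × neglectedness style score to each knowledge gap (echoing the scale and tractability criteria mentioned earlier), so that the relevant values stay visible when decision makers browse it. The sketch below uses made-up inputs and an arbitrary scoring rule; it is meant as an illustration, not a validated metric.

```python
from dataclasses import dataclass

@dataclass
class ResearchGap:
    topic: str
    scale: float          # size of the problem the research addresses (0-10)
    tractability: float   # likelihood of useful progress per unit of effort (0-10)
    neglectedness: float  # how little is already being done on it (0-10)

    def priority_score(self) -> float:
        # Simple multiplicative heuristic; ranges and weighting are arbitrary choices.
        return self.scale * self.tractability * self.neglectedness

# Entirely made-up example entries:
gaps = [
    ResearchGap("New antibiotics targeting low-income-country disease burden", 9, 5, 7),
    ResearchGap("Incremental variant of an already well-funded therapy", 4, 8, 2),
]
for gap in sorted(gaps, key=ResearchGap.priority_score, reverse=True):
    print(f"{gap.priority_score():6.1f}  {gap.topic}")
```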

🤔 Analyze the level of value-alignment of one or several major funders in a research field. Does it seem like value-alignment or decision-quality is a bottleneck for impact? How much could the impact of their resources be improved by improving value-alignment and/or decision-quality?

Review articles

The standard way of mapping current knowledge in a field is through a review article that surveys and summarises previously published studies. Review articles play an important role in getting newcomers up to speed with a topic and are generally more accessible (that is, easier to understand for a non-expert) than the surveyed research articles themselves. Additionally, review articles often comment on the quality and robustness of the underlying studies, which is helpful for non-experts who want to use the results to inform, for example, policy or grantmaking.

Many review articles go further and identify “knowledge gaps” or suggest areas for further investigation, using implicit or explicit value judgements about what additional research would be most valuable. It is unclear to me to what extent such recommendations actually influence the direction of subsequent research, but it seems plausible that they have some influence simply by pointing out tractable directions. Possibly they also influence funding decisions, since the review article itself can be cited in support of grant applications on these topics.

🤔 Investigate to what extent identified knowledge gaps or recommendations for further research influence subsequent research projects.

In the current format, I’m not convinced that simply prioritising more review articles in general would be an especially good use of resources, but it might be valuable for researchers who co-author review articles to improve the value-alignment of the knowledge-gap analysis or the recommendations for further research. An EA organization could potentially provide support and feedback for such work.

Establishing “R&D Hubs” for up-to-date information on ongoing research

A different way to get an overview of a research field is to establish platforms, “R&D Hubs”, that offer up-to-date information on ongoing research projects. An example from global health is the Global AMR R&D Hub, which maps most of the ongoing research projects related to antimicrobial resistance worldwide. In this case, the information is collected from funding organizations. Long before the conclusion of a project, it is possible to access the project title, the name of the PI, funding amounts and, in many cases, a brief project description. Dimensions is a project in the same direction that aims to include all fields of research, and Clinicaltrials.gov provides information on clinical trials.
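As a rough sketch of the kind of record such a hub aggregates (based on the fields mentioned above: project title, PI, funder, funding amount and a brief description), something like the following could serve as the core data model. The field names are my own illustrative choices, not the schema of the Global AMR R&D Hub or any other existing platform.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class OngoingProject:
    """Illustrative record for an R&D-hub style database of ongoing research.
    Field names are hypothetical, not the schema of any real platform."""
    project_id: str
    title: str
    principal_investigator: str
    funder: str
    research_area: str                    # e.g. "antimicrobial resistance"
    funding_amount_usd: Optional[float]   # not always disclosed
    start_date: Optional[date]
    abstract: Optional[str]               # brief project description, if available

# Entirely made-up example entry:
example = OngoingProject(
    project_id="AMR-2024-0173",
    title="Phage therapy against carbapenem-resistant Klebsiella",
    principal_investigator="Example PI",
    funder="Example national research council",
    research_area="antimicrobial resistance",
    funding_amount_usd=1.2e6,
    start_date=date(2024, 3, 1),
    abstract="Preclinical evaluation of a phage cocktail ...",
)
```

Even a flat, searchable table of such records would support the coordination uses described in the next paragraph: finding collaborators, avoiding duplication, and seeing where funding is already concentrated.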

This type of platform seems most valuable in fields that have a high level of value-alignment but where coordination is a bottleneck. For a researcher it could make it easier to identify potential collaborators and to make sure they don’t unintentionally duplicate other projects, and for a grant-maker it makes it easier to evaluate where there seems to be more or less funding available from other sources. 

🤔 Evaluate existing platforms that provide information on ongoing research: Who uses them? Do they have any impact on decision-making, and if so, how?

🤔 It might be valuable to create knowledge-gap platforms listing promising research project ideas (for EA research, or for other fields).

🤔 Develop a new type of research field survey/review concept, combining the accessible, easy-to-grasp properties of a review article with the up-to-date and more comprehensive scope of an online database or R&D Hub.

In addition to published research, this could include information about ongoing research projects that have not yet led to publications, sourced from preregistered research questions, from research funders’ records of which projects they have funded, and from preprints of unpublished research. Another possibility worth looking into could be to somehow include previously unpublished knowledge about failed experiments, as an alternative way of managing positive publication bias.
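A minimal sketch of how such a combined survey could be structured internally: each item, whatever its source, is normalized into a common entry with a source type, and entries are then grouped by subtopic so that thinly covered areas stand out. All names, categories and sample data below are hypothetical.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical source categories for a combined field survey
SOURCE_TYPES = {"publication", "preprint", "preregistration",
                "funded_project", "failed_experiment"}

@dataclass
class SurveyEntry:
    subtopic: str     # a research question cluster within the field
    source_type: str  # one of SOURCE_TYPES
    reference: str    # DOI, registry ID or grant ID

def coverage_by_subtopic(entries):
    """Count entries per subtopic and source type, making thin coverage visible."""
    table = defaultdict(lambda: defaultdict(int))
    for entry in entries:
        assert entry.source_type in SOURCE_TYPES
        table[entry.subtopic][entry.source_type] += 1
    return table

# Made-up example: one subtopic is covered only by ongoing, unpublished work
entries = [
    SurveyEntry("diagnostics", "publication", "doi:10.0000/example1"),
    SurveyEntry("diagnostics", "preprint", "doi:10.0000/example2"),
    SurveyEntry("treatment in low-income settings", "funded_project", "GRANT-0001"),
]
for subtopic, counts in coverage_by_subtopic(entries).items():
    print(subtopic, dict(counts))
```

A value-aligned knowledge-gap analysis, as suggested below, could then be written against this coverage overview rather than against the published literature alone.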

Note that this format would not automatically promote the selection of more value-aligned research questions, but adding a value-aligned knowledge-gap analysis of the field could potentially influence the decisions of the readers in this direction. 

My thoughts about publishing

Scientific publishing comes up very frequently in discussions about what is wrong with scientific research: journals exploit researchers who supply peer review services free of charge, charge too much for access, and pay no royalties on the material they publish (see for example this article if you are new to the topic). This doesn’t seem great, but I’m not convinced so far that publishing reform should be a priority for those who want to improve scientific research.

If the issue is that publishers make too much money without providing value for it, that might be unfair but not enough reason to believe that reforming publishing would fundamentally improve the value production of scientific research.

One might argue that the most prestigious journals in a field are setting the research agenda by selecting what to publish, and that they do this in a way that selects for less valuable research. I think, however, that this would not really be a problem if grantmakers and university hiring committees had sound criteria for decision-making.

I know that there are people who disagree strongly with me on this point, and I would love to see a post making the case for how reform of scientific publishing could improve the research system in more fundamental ways than simply reducing waste or improving efficiency.

Also: I do think that open access is valuable, especially for enabling research outside of rich universities, but the bottleneck there again seems to be acceptance by the research community (those who recruit for prestigious academic positions need to be known to value high-quality open-access publications) rather than a lack of open-access journals.

Conclusion

To sum up, I think there are a number of interesting ideas (marked by the thinking emojis throughout the post) worth exploring in this area, especially related to increasing value-alignment of the research agenda.

I think it is generally difficult to change the dynamics of the research system, and it is therefore very important to think through whether a certain initiative is promising enough to be worth the effort. If an initiative succeeds in changing the system, that change might not be reversible; we should therefore also think in advance about possible unintended consequences.

I would love to get in touch with more people interested in these issues – do reach out if you consider working on any of these ideas or just would like to have a chat!