
How Good Intentions Can Backfire (And How to Avoid It)
Written October 29, 2024
Est. 20-25 min read
This article draws heavily from Probably Good (see their up-to-date entry here).
Introduction
We encourage you to work on problems that are both important and neglected, aiming to increase the positive impact you can have for God’s Kingdom.
Taking these steps can indeed lead to great impact, but it also raises the potential to unintentionally do harm. The more important the issue, the more damaging it can be to hinder progress; the more neglected an area, the greater effect we may have on its direction; and the more influence we have, the more it matters if we’re wrong.
This is true even when we’re careful and prayerful—it’s possible to accidentally make things worse, sometimes far worse, despite good intentions.
In some areas of life, the downsides of failure are limited. If you write a book that no one reads, for example, the worst outcome is wasted time. But when it comes to making a difference, especially in fragile or high-stakes areas, there are numerous ways to unintentionally set back the field, and the downsides are not as limited. Here, the potential for negative impact can be just as great or greater than the potential for good.
So, if you’re trying to make a difference—especially if you’re taking on a challenging or ambitious role—it’s important to prayerfully consider how you might unintentionally make things worse. This doesn’t mean only taking "safe" actions that everyone agrees with. Meaningful impact often requires stepping out in faith and challenging conventional wisdom, which may involve some controversy. For instance, David stepped forward to face Goliath, despite the skepticism of others who doubted his ability and questioned his unconventional choice of weapon: a sling and five stones. The question, therefore, is how to take these steps wisely, minimising harm as much as possible, just as David relied on God for guidance and strength.
Though we don’t enjoy focusing on this cautionary topic, we believe it’s important to address if we want to serve God effectively in tackling pressing problems. Ironically, cautious people—the ones least likely to need this reminder—are probably the most likely to take it to heart. But we think this conversation is essential for those who want to make a meaningful, positive impact.
In this article, we’ll highlight six common ways people can unintentionally hinder their cause. While you may be aware of many of these risks, people often overlook one or two when starting in a high-stakes area.
There’s no way to entirely eliminate these risks, and balancing them against the potential upside of new projects involves difficult judgement calls. To that end, we’ll also share seven strategies for reducing the risk of doing harm, even in high-stakes areas. The short version is to avoid options with major potential downsides. But if that’s not feasible: don’t be a naive optimiser; practice humility in your views; develop expertise, get trained, and draw on your field’s accumulated wisdom; follow cooperative norms; match your capabilities to your project and influence; and steer clear of hard-to-reverse decisions, like rapid expansion.
Why it can be hard to recognise negative impact
What we’re concerned about in this article is the chance of leaving the world worse than it would have been had you not acted.
One way to do this is to take a big, ill-conceived gamble that’s more likely to do harm than good.
But unfortunately, this can happen even if the most direct effects of your work are clearly positive. For instance, you might do something helpful, but in the process get in the way of somebody who’s even better qualified.
Imagine a (misguided) first-year medical student who comes across a seriously injured pedestrian on a busy street, announces that they know first aid, and administers it to the injured person. They have good intentions and look as though they’re helping. But imagine that a passerby who was about to call an ambulance refrained because the student showed up and took charge. In that case, the counterfactual may actually have been better medical care at the hands of an experienced doctor, making the student’s apparent help an illusion. The medical student should have called an ambulance instead.
Few people are likely to persist in actions that are obviously harmful in this way, but making things counterfactually worse like this is probably quite common and hard to detect.
Of course, in this situation we would also need to think about the impact of freeing up the ambulance to attend to even more serious cases, which is to say that measuring true impact can get complicated fast.
Fragile problem areas and the biggest risks
One of our biggest concerns about this article is the possibility that we’ll accidentally discourage people from starting high-value projects. So we want to emphasise that we don’t think the risks we’re going to discuss are equally pressing in every field or problem area.
For example, we believe that global health is a relatively safe problem to work on. It is an enormous area: it’s generally acknowledged as legitimate, it has an established reputation, and success is often measurable. (Although it is of course still possible to fail and accidentally cause harm.)
Reducing extinction risk is generally a riskier problem to work on and transformative AI policy and strategy might be the very riskiest that we recommend. This problem’s legitimacy is disputed, its reputation is not yet established, success is hard to measure, and a key goal is coordinating groups with conflicting interests.
More generally, we expect that unintended consequences are concentrated in the following situations:
Unestablished fields that lack an existing reputation or direction
Fields with bad feedback loops, where you can’t tell if you’re succeeding
Fields without an expert consensus, where progress is hard to measure and people judge the field based on the people involved
Fields that involve conflict with another group who could use your mistakes to paint your whole field in a bad light
Fields in which uncovering or spreading sensitive information can cause harm
Fields where things are already going unexpectedly well, since random shocks are more likely to make things worse rather than better
We’ll discuss many of these situations in more detail later in this article.
The rest of this article will focus on the specific risks you should watch out for when working in these fragile fields.
Ways to cause an unintended negative impact: from most to least obvious
We’ll now discuss six ways people unintentionally cause harm. Note that this post doesn’t deal with cases where someone made the best decision available but had a negative impact through sheer bad luck – there’s nothing to be done about that. It’s also possible to successfully solve an issue that turns out to have been net negative to solve, but we’re going to bracket questions of problem selection here (read more).
1. Lack of expertise or poor judgement
The world is extremely complicated, and most projects have significant unforeseen effects, which can easily be negative. The worse your judgement, the more likely this is.
This is the most obvious and visible category of negative impact: you do something that makes the problem worse in a way that someone with greater competence would have foreseen ahead of time.
This category includes doctors who don’t wash their hands, people who deliver harmful social programs (an example is Scared Straight), and academics who publish incorrect findings because they use bad statistical methods.
A common form of mistake among novices is to lack strategic judgement. For instance, you might call attention to information that’s more sensitive than you realised.
This is an especially dangerous trap for people new to working on reducing extinction risk. Imagine learning that a new technology could cause a catastrophe if it’s misused as a weapon. Your first instinct might be to raise public awareness so policymakers are pressured to develop countermeasures or a nonproliferation strategy. While this may be useful in certain circumstances, increasing the profile of a threat can backfire by making it rise to the attention of bad actors (an example of an information hazard). Being careful with sensitive information sounds obvious, but if you’re new to an area it’s often not obvious exactly what information is most sensitive.
Another common oversight is failing to appreciate how damaging interpersonal conflict can be and how hard it is to avoid. Interpersonal conflicts can harm a whole field by reducing trust and solidarity, which impedes coordination and makes recruitment much more difficult. Nobody wants to join a field where everybody is fighting with each other.
An example from history: Dr. Ignaz Semmelweis realised in 1847 that cleaning doctors’ hands could save patients’ lives. His colleagues were at first willing to indulge his whim, and infection rates plummeted on his unit. But after a series of miscommunications and political conflicts within the hospital system, Semmelweis came to be regarded as a crank and was demoted. The practice of handwashing was abandoned, and thousands of patients died from infection over the next decades until later researchers proved him right. If he’d prioritised clear communication of his ideas and better relationships with his colleagues, the establishment might not have been so tragically late in realising that his ideas were correct.
The risk of making a misjudgement is a good reason (when possible) not to rush into solving a complex problem without getting the necessary training, mentoring, supervision or advice, and instead to embed yourself in a community of colleagues who may spot a major mistake before you make it. Semmelweis’s story also highlights the importance of being good at communicating your ideas, not just developing them.
The unilateralist’s curse
One particularly easy way to make a mistake that causes a substantial negative impact is to act unilaterally in contexts where even one person mistakenly taking a particular action could pose widespread costs to your problem area, or the world as a whole. Nick Bostrom has explained that if people act based only on their personal judgement in this context, risky actions will be taken too often. He’s named this phenomenon ‘the unilateralist’s curse,’ illustrated with the following example:
A group of scientists working on the development of an HIV vaccine have accidentally created an airborne transmissible variant of HIV. They must decide whether to publish their discovery, knowing that it might be used to create a devastating biological weapon, but also that it could help those who hope to develop defenses against such weapons. Most members of the group think publication is too risky, but one disagrees. He mentions the discovery at a conference, and soon the details are widely known.
More generally, let’s say that there’s a field of ten people each trying to estimate the expected value of a potentially risky initiative that would turn out to have a negative impact if it’s taken. Even if, on average, the group’s estimate is correct, there will be some people whose estimate of the value is too high and others whose estimate is too low. If everybody acts based on their own judgement alone, then whether the initiative is started will be determined entirely by whether the most optimistic member of the whole group, i.e. the one who most overestimated the initiative’s value, thinks it will be positive. This is a recipe for going ahead with a lot of bad projects.
Fortunately, the curse can be lifted if you take the judgement of the rest of your field into account and refrain from taking unilateral action when most of them would disagree.
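To make the statistics behind the curse concrete, here is a minimal simulation sketch (our illustration, not from Bostrom’s paper or the article this draws on). It assumes a mildly harmful initiative and ten people who each form a noisy estimate of its value, then compares a unilateral decision rule with one that defers to the group’s typical view; every number in it is hypothetical.

```python
# Illustrative sketch of the unilateralist's curse (hypothetical numbers).
# Ten people each form a noisy estimate of an initiative whose true value is
# slightly negative. Under the "unilateral" rule the initiative goes ahead if
# ANY single estimate is positive; under the "group" rule it goes ahead only
# if the median estimate is positive.
import random
import statistics

TRUE_VALUE = -1.0   # the initiative is actually mildly harmful (assumed)
NOISE_SD = 2.0      # how far off each person's estimate tends to be (assumed)
GROUP_SIZE = 10
TRIALS = 100_000

unilateral_go = 0
group_go = 0
for _ in range(TRIALS):
    estimates = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(GROUP_SIZE)]
    if max(estimates) > 0:                # one optimist acting alone is enough
        unilateral_go += 1
    if statistics.median(estimates) > 0:  # the group defers to its typical view
        group_go += 1

print(f"Harmful initiative launched (unilateral rule): {unilateral_go / TRIALS:.0%}")
print(f"Harmful initiative launched (group rule):      {group_go / TRIALS:.0%}")
```

With these made-up parameters, the harmful initiative goes ahead almost every time under the unilateral rule, because a single overestimate is enough, but far less often when the group’s overall judgement is what counts.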
2. Reputational harm
Everyone understands that one risk of failure is that it can tarnish your reputation. However, it’s important to recognize that others may perceive your mistakes as a reflection of your community or the broader work of your faith. This means that a misstep can also hinder fellow believers or those engaged in your problem area.
A striking example is the collapse of FTX, whose CEO, Sam Bankman-Fried, claimed he was practicing effective altruism by earning to give. When he was charged with fraud in December 2022, the fallout damaged not only his own reputation but also the credibility of all the areas he was associated with and claimed to support.
In general, the more significant and public the failure, the greater the potential damage.
This reality justifies the need for larger projects to undergo thorough vetting by stakeholders and to be mindful of avoiding controversy. It also illustrates how seeking media attention can be a double-edged sword; while visibility can be beneficial, it can also lead to heightened scrutiny. A project with a lower profile is less likely to attract attention if it fails, while a “normal” failure won’t garner nearly as much interest as a controversial or sensational one.
Moreover, taking a countercultural or unusual approach can sometimes lead to unintended consequences that affect your cause. For instance, if you were to publicly embrace an unconventional interest, such as a fictional fandom, it could overshadow your project, making it a more compelling story and potentially harming the reputation of the cause you were trying to uplift.
There are also subtler ways to negatively impact the reputation of your field. For example, if you're excited about a new idea for improving the world and approach wealthy donors without adequately preparing for their objections, you may come across as naive. This could lead them to reject your proposal and discourage them from supporting similar initiatives in the future.
Even less prominent contributions can harm a field's reputation. If you consistently produce mediocre work that only makes a small impact, it can dilute the overall perception of your field. When people evaluate a field, they often think about the quality of its typical examples. For instance, having several publications in top journals can be more impressive than having a mix of good and mediocre papers.
In newly emerging fields, any published work can serve as an introduction for others. Researchers may benefit from focusing on their strongest ideas—those that will attract promising students and showcase the best of the field.
These reputational risks highlight the importance of putting your best foot forward and considering how your work influences others’ perceptions. However, we mustn’t let concerns about reputation hinder our willingness to share new ideas or promote essential messages. Striving for perfection should not prevent us from making valuable contributions to the Kingdom.
Ultimately, determining when reputational risks outweigh the benefits of sharing new ideas can be challenging. Some points to consider are: (i) it’s less of a concern in well-established fields, (ii) if your work can undergo peer review by respected individuals, it’s likely acceptable, and (iii) if unsure, consider sharing your work privately with trusted individuals rather than posting it publicly.
3. Resource diversion
Almost every project eventually looks for funding and people to hire. Many also try to grab people’s attention.
Unfortunately, there’s only so much money, people and attention in the world.
If you hire someone, they won’t be working elsewhere. If you accept a donation, that money isn’t going to someone else. And if someone is reading this article, they aren’t reading a different one.
This means a project that directly does good can still be counterproductive if those resources naturally would have gone somewhere even better.
This risk becomes larger inasmuch as i) you are an unusually good salesperson, ii) you exaggerate your impact, iii) donors and employees can’t tell what projects are the best, and iv) you draw on resources that are likely to be used well in your absence (rather than bringing in ‘new’ resources that wouldn’t have been focused on doing good otherwise).
When doing an impact evaluation of your project, it’s important to roughly compare your impact with the impact those resources would plausibly have had if they had gone to another impactful project. For instance, readers who work on global poverty often use GiveDirectly as a ‘baseline’ that could absorb a lot of funds.
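As a rough illustration of that comparison, here is a small sketch (ours, with entirely hypothetical numbers) of how you might net a project’s direct impact against the counterfactual impact of the resources it absorbed, using a GiveDirectly-style baseline.

```python
# Illustrative counterfactual impact comparison. All figures are hypothetical
# and only exist to show the structure of the calculation.

funds_used = 100_000                  # donations the project absorbed (USD, assumed)
direct_impact = 250                   # units of good the project produced (assumed)
baseline_impact_per_dollar = 0.002    # units of good per dollar at the baseline (assumed)

# What those same funds would likely have achieved at the baseline charity.
counterfactual_impact = funds_used * baseline_impact_per_dollar

# The project's impact relative to that counterfactual, not relative to doing nothing.
net_counterfactual_impact = direct_impact - counterfactual_impact

print(f"Likely impact of the funds elsewhere: {counterfactual_impact:.0f} units")
print(f"Net counterfactual impact:            {net_counterfactual_impact:+.0f} units")
```

A positive result suggests the project beat the baseline use of those resources; a negative one suggests the money would have done more good elsewhere, even though the project’s direct effects were positive.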
4. Locking in suboptimal choices
Another way you can cause harm is to set your field or problem area on a worse trajectory than it would otherwise have taken, one that it then finds hard to escape. This is most likely in the earliest stages, when your strategy isn’t yet set, people don’t have a view about you, and the field is small, so single actions can meaningfully change the direction of the field as a whole. It’s not a very big concern in larger, more well-established fields.
The decisions you make when you’re just starting out and know the least can stick around or even snowball out of control. This is for a number of reasons:
Most people who know of you will only ever offer a tiny amount of attention, so it’s hard to change their first impression;
Once the media has written about you, people will keep finding those articles, shaping future perceptions (journalists, in particular, often draw on the work of earlier journalists);
Terms are hard to change;
Once you define what you believe, you will tend to attract people who agree with that view, further entrenching it;
People find it very hard to fire colleagues, change management structures, or abandon their strategy, so bad choices often carry on even once they’re known to be problematic.
These effects can be hard to notice because we never get to see how alternative choices would have turned out.
5. Crowding out
If you announce that you’re going to work on a particular problem, or experiment with a particular approach, you can discourage other people from doing the same thing, because they will feel like you’ve got it handled.
For example, once we said we were going to do research into how to do the most good with your career, we may have delayed anyone else in the community from starting a similar project.
We’re not telling you that you should only start a new project if you’re certain you’ll succeed. The other side of this problem is that it’s bad when qualified people successfully identify a gap but don’t take initiative because they think it’s already being handled by others, or they wait for somebody better to come along.
We’re just pointing out that announcing the start of your project isn’t costless for the rest of your field, so it’s worth doing at least a bit of due diligence before moving ahead and potentially discouraging others. Get advice from people you trust to be honest about whether you’re a reasonable fit for the project you’re considering. Ask around to see if anybody else in your field has similar plans; maybe you should merge projects, collaborate, or coordinate on which project should move forward.
Lastly, try to be honest with yourself about the likelihood that you’ll actually follow through with your plans. One of the most avoidable (and costly) forms of crowding out is when people announce a project but never really get it off the ground. For example, we’ve heard several people say they don’t want to start a local effective altruist group because one already exists, but then the existing group soon becomes neglected or entirely inactive.
6. Creating other coordination problems
Here’s an article on the substantial benefits that can come from large groups cooperating effectively. But it’s also true that people who fail to coordinate well with other groups can do significant damage.
Larger groups are harder to coordinate than smaller ones. Whether you’re doing research, advocacy or dealing with outsiders, joining a field obligates your peers to invest time making sure you and they are in sync. Furthermore, a lot of coordination relies on high trust, and it’s hard to maintain trust in a larger or shifting group where you don’t have established relationships. Adding people to an area has some direct positive impact, but it also creates an extra cost in the form of more difficult coordination. This makes the bar for growing a cause (especially a small one) higher than it first seems.
How can you mitigate these risks?
If you’re comparing several options, and one of them might have a big negative impact, and the others won’t, then the simplest and safest course of action is to eliminate the one with big downside risks, and then choose the one with the most potential of what remains.
By ‘big negative’ we don’t just mean that you might fail. Rather, we mean that you might set back your field, or make things much worse than you found them.
Why does it normally make sense to just eliminate these options? The next couple of sections provide some justification. In brief: if you think a course of action might have both big costs and big benefits, but that the benefits outweigh the costs, that conclusion relies on getting the details of your estimate right, and your estimate is probably not right. It’s better to take actions that seem good from a wider variety of perspectives. In addition, options with large potential downsides are often controversial (and so pursuing them is neither epistemically humble nor free of unilateralism), carry reputational risks for the whole field, and often contribute to a breakdown in coordination.
Of course you might find that most of your options have the potential for large negative effects, or as is often the case, the courses of action with the most upsides also have the biggest downsides.
In that situation, you have to reason things through more carefully — the following sections aim to help you do that.
1. Don’t be a naive optimizer
Your understanding of the situation is certain to be incomplete. You’re probably missing crucial information and considerations.
And this runs deep. Your ‘model’ of the situation probably spits out a very uncertain answer about what’s best (“known unknowns”). But there’s also the chance your model itself is wrong and you’re thinking about the situation entirely wrong, possibly in ways you haven’t even considered (“unknown unknowns”). And beyond that, it’s uncertain how to even reason about these kinds of situations: there isn’t a single clearly accepted theory of how to make decisions under uncertainty.
So if you simply pick a goal that seems good to you, and aggressively pursue it, there are bound to be other important outcomes you’re ignoring.
The more aggressively you pursue your goal, the more likely you are to unintentionally screw over those other outcomes and do a bunch of harm.
This is especially the case if the other outcomes are harder to measure than your main target, which is usually the case. As the earlier sections show, there are lots of indirect and hard-to-track ways to make things worse, like influencing reputation or coordination; it’s seductive to trade these against ‘hard’ outcomes like donating more money, growing your organisation, or landing a promotion.
This is related to “Goodhart’s law“: when a measure becomes a target, it ceases to be a good measure. This is because there are probably edge cases to the measure that are easier to achieve than the real thing we care about. (It’s also related to why the AI alignment problem matters.)
For instance, British hospitals were taking too long to admit patients, so a penalty was instituted for wait times longer than four hours. In response, some hospitals asked ambulance crews to slow down, because time patients spent in the ambulance didn’t count towards the hospital’s measured wait.
Similarly, doing good is a highly complex and nebulous goal. So if you pick one approach to doing it, and pursue it aggressively, it becomes tempting to cut corners, or simply miss other important factors.
Some examples of being a naive optimiser:
Trying to earn as much money as possible, so you can donate as much as possible (but ignoring issues around reputation, character, cooperative norms; as well as other pathways to doing good such as spreading important ideas and building career capital)
Ignoring all considerations except for the one you think is most important (e.g. that there might be a lot of future generations)
Becoming so convinced by the mission of your own organisation that you cut legal and ethical corners to make it succeed (a particular risk for entrepreneurs)
How can you avoid being a naive optimiser?
There is no widely accepted solution to the problem of how to reason in the face of deep model uncertainty, but here are some ideas that make sense to us:
Consider multiple models of the situation — many understandings of what matters, many potential outcomes, and many perspectives. This should include what conventional wisdom and other experts would say. You should actively seek out the best arguments against your approach. This gives you the best possible chances of spotting missing considerations.
Take courses of action that either seem good according to many models, or that are very good on one perspective and roughly neutral on the others. In contrast, avoid courses of action that seem crazy on some reasonable perspectives.
Be ambitious but not aggressive.
2. Have a degree of humility
Not only are you probably wrong about what’s best, others probably also disagree with you.
In trying to have an impact, there’s a very difficult tradeoff to make between doing what seems best to you, and deferring to others.
If you defer too much, you’ll never do anything innovative or novel. Also, sometimes ‘common sense’ can lead you to do harm – for example, most people don’t seem to think factory farming is morally bad, but we think it is.
But if you simply go with your own views, there’s a good chance of being wrong, and that can easily make things worse.
Indeed, if you don’t have any special knowledge that others lack, then you could argue your guess about what’s best isn’t better than anyone else’s, and you should just do what the average person thinks.
Where to fall on this spectrum of contrarianism is one of the most important drivers of why people take different approaches to doing good.
Here are some more detailed tips on how to strike the balance:
Make a distinction between your “impressions” (what seems best to you) and your “all considered view” (what you believe after taking account of other people’s views). It’s important to develop your own impressions, so that you’re adding to the collective wisdom about what to do, but high-stakes action should generally be taken based on your all considered view.
Put more weight on your own impressions the more reasons there are to trust your views (such as expertise or a track record), and when you can clearly point to information or values that you have which others don’t. If a random uninformed person disagrees with your project, that’s usually not worth worrying about.
Gather lots of views. A large number of non-experts can easily be more accurate than a small number of experts.
Weight someone’s views more by track record than superficial indicators of expertise. In many fields (e.g. most social science), experts aren’t much better at making predictions than random chance. The people with the best judgement are often informed generalists with the right mindset. Either way, try to evaluate how trustworthy people are based on their track record of making similar judgement calls.
Weight the views of others by their strength, both in terms of the stakes and their degree of confidence. If even someone trustworthy believes your project is ‘meh,’ that’s not a big deal either way. If they’re convinced it’s very harmful, you should be a lot more cautious.
Consider the stakes. In some contexts, like private intellectual discussion or doing small test projects, it’s OK to run with your wacky views. The greater the potential harms, the more important it is to consider a wide range of views.
Don’t rely only on your own judgement for high-stakes decisions. For instance, if you’re the CEO of a larger project, it’s important to have a board and strong cofounders to check your most important decisions.
One additional reason for humility: a course of action you think is best is almost certainly not as good as you think it is. We are reminded in Proverbs 3:5-6 to trust in the Lord with all our hearts and lean not on our own understanding. Seeking God’s wisdom and guidance can help us recognise that our perspective may be limited.
If a course of action seems perfect, that’s partly because you’ve ranked your options based on your current understanding. But your current understanding is incomplete, so your analysis contains errors. If something looks unusually good, that could be due to correct reasoning, or it could be because you made an unusually large error in your analysis.
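The same point can be made statistically. Here is a minimal simulation sketch (our illustration, not from the article): when you rank options by noisy estimates and act on the one that looks best, its estimated value systematically overstates its true value, because large positive errors are over-represented at the top of the ranking.

```python
# Illustrative sketch of selection on noisy estimates (hypothetical numbers).
# Each option has a true value; you only see a noisy estimate of it. Picking
# the option with the highest estimate tends to pick an overestimated option.
import random

N_OPTIONS = 20
NOISE_SD = 1.0
TRIALS = 50_000

total_estimated = 0.0
total_true = 0.0
for _ in range(TRIALS):
    true_values = [random.gauss(0, 1) for _ in range(N_OPTIONS)]
    estimates = [v + random.gauss(0, NOISE_SD) for v in true_values]
    best = max(range(N_OPTIONS), key=lambda i: estimates[i])  # apparent best option
    total_estimated += estimates[best]
    total_true += true_values[best]

print(f"Average estimated value of the top-ranked option: {total_estimated / TRIALS:.2f}")
print(f"Average true value of the top-ranked option:      {total_true / TRIALS:.2f}")
```

Under these assumed parameters the apparent best option looks roughly twice as good as it really is, purely because of selection from an error-prone ranking; the more options you compare and the noisier your analysis, the larger that gap becomes.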
3. Develop expertise, get trained, build a network, and benefit from your field’s accumulated wisdom
When entering a field, consider starting out by working for an established organisation with the capacity to supervise your work. Learn about the field and make sure you understand the views of established players. The time this takes will vary a lot according to the area you’re working in. We generally think it makes sense to work in an area for at least 1-3 years before doing higher stakes or independently led projects, like Luke Eure did in the early stages of his career. You can read his story here. In particularly complicated areas — like those that require lots of technical knowledge or graduate study — developing understanding and skills can take longer than 1-3 years.
This is not only to give you knowledge of the crucial issues and main perspectives in the field, but also to make sure you gain the advisors you need for the steps we covered above.
There can be exceptions. For instance in an emergency like COVID-19, it wasn’t possible for some of those who could contribute to the response to first spend a year training. While lacking relevant expertise should still be a reason for caution, we don’t think that everyone without pre-existing expertise should have done nothing.
Also, if you think you’ve identified a neglected approach within a fragile field, try to understand why people with more experience haven’t taken it. It’s possible you found a promising gap, but maybe your approach has been tried and failed before, or others are avoiding it because of a flaw you don’t have the context to see.
If you’re struggling to find training opportunities or develop the required network, it may be better to stick to safer problems and methods instead of acting unilaterally in a delicate area.
One especially important way to increase your expertise is to improve your judgement. In the world of doing good, we normally lack measurable outcomes, and that means we need to instead rely on judgement to estimate the costs and benefits of different paths.
4. Follow cooperative norms
As your career progresses and your influence grows, you may face temptations to act in ways that are harmful, dishonest, or deemed unethical "for the greater good." However, Scripture teaches us that this is never a wise choice. It’s crucial to uphold integrity, as outlined in Proverbs 10:9, which reminds us that those who walk in integrity walk securely.
This principle reflects the broader importance of cooperative norms in our work for God’s Kingdom. When tackling significant social issues alongside others, you'll likely achieve much more if you commit to being:
Honest and trustworthy — Building trust among your peers requires integrity and adherence to agreed-upon rules.
Helpful — Be willing to support others in their efforts, even if it doesn’t offer immediate benefits to you.
Flexible and willing to compromise — Show a readiness to collaborate and prioritize the group's objectives over your personal preferences.
Polite and respectful — Cultivate a spirit of harmony and avoid unnecessary conflicts.
Judicious — Be discerning enough to withdraw support from those who undermine these shared values.
If you disregard these principles, it not only hinders your own efforts but also disrupts the overall cooperation needed to address important problems, making things worse for everyone involved.
Moreover, untrustworthy actions can hinder the ability of those working on the issue to engage with the wider community. For example, if someone resorts to dishonesty in the name of advocating for a cause, it can tarnish the reputation of the entire field, complicating future efforts to make a positive difference.
While it’s true that violating certain norms—such as in cases of civil disobedience—can sometimes be justified, it’s essential to approach such decisions with extreme caution. Uncooperative actions can lead to more harm than good, and it’s crucial to have compelling reasons and evidence before pursuing them, including first exploring other avenues for change.
5. Match your capabilities to your project and influence
Try to match your capabilities to your degree of influence and the fragility of the problems you’re working on. The higher the stakes, the more vetting, expertise, and caution you should bring.
If you’re running a few events at university, you don’t need to subject your plans to a huge amount of vetting. If you’re advocating for a major change to government policy, then you do.
Get honest advice from experts about whether you’re a good personal fit for a project and whether you’re prepared to take it on. Continue to get feedback over the course of the project.
As your project becomes more influential and successful, it becomes more important to keep seeking out advisors and colleagues who will stand up to you. Unfortunately the opposite is often the case: as you get more successful, people will be less inclined to doubt you, and more worried about criticising you. To avoid this, you may need to purposely set up structures and processes to limit the role of your judgement.
If you’ve already developed the competence, training, and network to mitigate these risks in a particular field, your comparative advantage is probably taking on the risky projects that others should stay away from. You also might be able to add value by providing mentorship, training, and/or supervision to promising entrants to your field.
6. Avoid hard-to-reverse actions
Earlier we spoke about how it’s possible to accidentally lock in suboptimal choices, so it’s important to look out for and (all else being equal) avoid hard-to-reverse actions.
For instance, as we discussed, a media campaign that’s going to reach lots of people for the first time will create first impressions that can be hard to undo.
Another example of a hard-to-reverse action is to grow too fast, especially if you’re working in a small but fragile field.
Hiring, growing your funding, or increasing your influence normally look like concretely good things from the perspective of your project in the short term (being bigger means more impact), but the picture can be more ambiguous when you consider the field as a whole and the long-term effects.
Rapid growth is hard to unwind, since it would mean firing people, so you get locked into the new, bigger state. More people are harder to coordinate, and require training and management, which is often in short supply. This leads to more of the issues we’ve covered, like unilateralism, breakdown of norms and errors of judgement. Going fast creates more reputation risks, which might have been avoided if you’d gone more slowly. (Ignoring these more diffuse harms is another example of naive optimization.)
Of course, growing more slowly also has big costs, so it’s a question of balance. However, our sense is that the benefits of growth are usually more tangible than the costs, so people are more likely to overestimate the value of growth than to underestimate it.
Summing up
Gain expertise and understand established perspectives in your field.
Practice integrity and respect in all interactions.
Match your capabilities to the impact of your projects.
Avoid irreversible decisions that may harm your cause.
Pursue positive change with humility and collaboration, fixing your eyes on Jesus.
Need help discerning your career? Sign up for our free one-on-one impact mentorship here.
To read more about it click here.
Do you have any career uncertainties? Click here to read our article on three big career uncertainties you can trust God with.