Trust in open source communities

In chapter 3 of Program Management for Open Source Projects, I talk about the importance of trust. “Open source communities run on trust,” I wrote, and I go on to describe building trust by establishing relationships and credibility. This works when you’re coming into a defined role, perhaps because you were hired to fill a sponsored position in a community or because a project leader asked you to apply your skills to the project.

Most people, of course, don’t come directly into a defined role. They start by making a small contribution: filing a bug, answering a question on a mailing list or forum, submitting a patch, and so on. Sometimes, they don’t even plan to stick around. They’re making one contribution and moving on. The kind of trust-building based on relationships doesn’t work as well in that case. But you still need trust to evaluate a contributor (and thus their contribution).

This issue has only grown more relevant as large language models become widespread. If the person who submitted a pull request didn’t write the code, do they understand it? Can they answer maintainers’ questions or address feedback? Is the code even worth a maintainer’s time to review, or is it plausible-looking garbage?

In late January, GitHub product manager Camilla Moraes started a conversation seeking ideas for giving maintainers tools to address low-quality contributions. The conversation produced many good (and also some bad) ideas and highlighted the difficulty of a universal solution. Although the word “trust” only appears six times (as of this writing) in the whole thread, the conversation is basically a discussion of trust. “How can we slow the rate of un-trusted contribution without making life harder for the trusted contributors?” is a fair summary of the underlying issue.

Defining trust

Charles H. Green developed an equation of trustworthiness that includes credibility, reliability, and intimacy. Although it’s a smidge hokey, it’s fundamentally a reasonable representation of trust, so we can roll with it.
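As Green formulates it (in The Trusted Advisor and related work), those three factors sit in the numerator, divided by self-orientation, i.e. how much the person appears to act in their own interest:

```latex
\text{trustworthiness} = \frac{\text{credibility} + \text{reliability} + \text{intimacy}}{\text{self-orientation}}
```

The division is the interesting part: someone can score well on the top three, but if they seem to be in it only for themselves, trust collapses.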

Importantly, trust is not a static characteristic of a person. Instead, it’s a dynamic measure that changes based on context and relationship. My coworkers (hopefully) think that I am competent in my work, deliver what I say I will, and am a fun guy to be around. There’s a high degree of trust because I rate highly in credibility, reliability, and intimacy. When I join a new project, I am the same person, but I am relatively or entirely unknown. The other people in the community need to interact with me for a period of time before they can develop trust in me.

Even if I’ve known someone for a long time, their trust in me may change when the context changes. The intimacy and reliability may be the same, but they don’t necessarily know if I’m credible in the new context. Just because I have experience in other languages, that doesn’t mean my Rust code is good. Someone who thinks I write competent Python (we’re pretending here!) would be well served by reviewing a Rust contribution very closely, as I’ve written essentially none.

Trust in your community

As with many concepts, we often rely on trust in open source communities without explicitly thinking about it. But most projects have some concept of a contributor ladder, where people get increased privileges and responsibilities based on the trust they’ve earned. It’s more important than ever to give deliberate thought to how trust is evaluated in your community.

This affects not only community management but also security. Many projects have automated CI jobs that run on pull requests; these check code style, run unit and integration tests, and so on. In the best case, bad code (intentional or accidental) wastes resources, including maintainer time. In the worst case, bad code can compromise the project and publish malware. For this reason, projects often require maintainers or other trusted users to grant permission for automated tests to run when the submitter is untrusted. Unfortunately, this still places a burden on maintainer time, which is a precious resource.
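As a sketch of what that gating can look like in practice (this is a hypothetical GitHub Actions workflow, not from any particular project; the job and command names are made up), a project can key the decision off the pull request author’s relationship to the repository:

```yaml
# Hypothetical workflow: run the test suite automatically only for
# PR authors who already have a trusted relationship with the repo.
name: tests
on: pull_request

jobs:
  test:
    # author_association is set by GitHub; OWNER, MEMBER, and
    # COLLABORATOR are treated as trusted here. Anyone else's run
    # waits for a maintainer to approve it.
    if: contains(fromJSON('["OWNER", "MEMBER", "COLLABORATOR"]'), github.event.pull_request.author_association)
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test  # placeholder for the project's real test command
```

GitHub also offers repository settings that require approval for first-time contributors’ workflow runs; an explicit condition like this just makes the trust decision visible in the workflow itself.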

I suggest that projects explicitly consider what levels of trust are required to access certain resources (CI jobs, project emails, etc.) and how that trust will be measured. The Discourse trust levels are an excellent starting point for building your project’s trust model. The specifics are designed for forum interaction, but you can extrapolate to your project’s activities.
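To make that concrete, here is a minimal sketch (in Python, with made-up resource names and thresholds) of what an explicit trust model could look like, loosely patterned on Discourse’s numbered trust levels:

```python
from enum import IntEnum

class TrustLevel(IntEnum):
    """Discourse-style trust levels, adapted to a generic project."""
    NEW = 0      # just arrived; heavily restricted
    BASIC = 1    # completed a few basic interactions
    MEMBER = 2   # regular participant over a period of time
    REGULAR = 3  # sustained, reliable contributor
    LEADER = 4   # granted explicitly by maintainers

# Hypothetical mapping of project resources to the minimum trust
# level required to use them. The names and thresholds are examples,
# not a recommendation; each project should decide its own.
REQUIRED_LEVEL = {
    "open_issue": TrustLevel.NEW,
    "run_ci": TrustLevel.BASIC,
    "merge_pr": TrustLevel.REGULAR,
    "publish_release": TrustLevel.LEADER,
}

def may_access(level: TrustLevel, resource: str) -> bool:
    """Return True if a contributor at `level` may use `resource`."""
    return level >= REQUIRED_LEVEL[resource]
```

The point isn’t the code; it’s that writing the table down forces you to decide, resource by resource, how much earned trust each action requires.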

The path to building trust has to be easy, or you’ll drive away new contributors and your community will wither over time. Trust levels are a safety measure, not a gatekeeping measure.

Tools to help

I am aware of a few tools to help with trust evaluation. I share them here as a reference, but I have not used them and neither endorse nor denounce them. contributor-report is a GitHub Action that gives maintainers a report on a new contributor’s activity levels, helping them evaluate newcomers on the metrics that make sense for their specific project. vouch is a tool for marking users as vouched (or denounced) and taking action based on that; it can be used to provide a web of trust across projects and communities.

This post’s featured photo by Andrew Petrov on Unsplash.

Ben is the Open Source Community Lead at Kusari. He formerly led open source messaging at Docker and was the Fedora Program Manager for five years. Ben is the author of Program Management for Open Source Projects. Ben is an Open Organization Ambassador and frequent conference speaker. His personal website is Funnel Fiasco.
