OK, bear with me.
What if we helped each project produce a report of the top “problems” such as:
- Barriers to adoption
- Contributor ramp-up time
- Contributor churn
- Top FAQs
- Release cycle
We could help them look at the available metrics and identify some that might serve as indicators or proxies for the conditions they want to track.
The exercise might encourage aspiring data scientists. The examples could provide guidance to projects uncertain about how to measure their progress. The transparency would raise the standard of professionalism without requiring additional work.
I would propose one constraint: the measurement itself should require no change of behavior. In other words, a contributor would not need to fill out a form or check a box. The metrics should be derived from the toolchain’s existing activity (downloads/week, 7dayActiveUsers, mean-time-to-respond, questions per topic, etc.).
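To make the constraint concrete, here is a minimal sketch of deriving two such metrics purely from an exported activity log. The event names, log shape, and sample data are all hypothetical; in practice the entries would come from the toolchain itself (package registry, issue tracker, forum), never from anything a contributor fills out.

```python
from datetime import datetime, timedelta

# Hypothetical event log: (timestamp, event_type, user-or-None).
# These records already exist as a side effect of normal activity.
events = [
    (datetime(2024, 1, 1), "download", None),
    (datetime(2024, 1, 2), "download", None),
    (datetime(2024, 1, 3), "question_opened", "alice"),
    (datetime(2024, 1, 5), "question_answered", "bob"),
    (datetime(2024, 1, 6), "download", None),
]

def downloads_per_week(events, start, end):
    """Average weekly downloads over the half-open interval [start, end)."""
    n = sum(1 for t, kind, _ in events if kind == "download" and start <= t < end)
    weeks = (end - start).days / 7
    return n / weeks

def active_users(events, window_end, days=7):
    """Distinct users with any attributed activity in the trailing window."""
    window_start = window_end - timedelta(days=days)
    return {u for t, _, u in events if u and window_start <= t < window_end}

print(downloads_per_week(events, datetime(2024, 1, 1), datetime(2024, 1, 8)))  # 3.0
print(len(active_users(events, datetime(2024, 1, 8))))  # 2
```

The same pattern extends to the other proxies: mean-time-to-respond falls out of pairing `question_opened` with `question_answered` timestamps, and questions per topic is a simple group-by on a topic field.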