Sometimes it is hard to communicate your worldview to other people, especially when they do not share your background or “don’t speak the same language”.
For instance, from what I have heard, the AI Safety community (or communities?) has been experiencing this issue for a while.
Thus, I wonder whether tools such as ModelCollab or CatColab could be used to manage disagreement better. In this case, the goal would not be “collaborate to define a common formalized model {of the things you and I care about}”, but rather “give a more precise description of your underlying assumptions”, so that we both know what we are disagreeing about.
Also, for people who care about Bayesian updates, having ideas and hypotheses written down this way might help keep track of changes and show how you arrived at your current beliefs.
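For concreteness, here is a minimal sketch of what that record-keeping could look like. This is plain Python I made up for illustration, not a feature of ModelCollab or CatColab; all names and numbers are hypothetical. Each hypothesis carries a prior and a log of evidence-driven updates, so the path from initial to current belief stays inspectable:

```python
# Hypothetical sketch: a belief with an auditable update history.
from dataclasses import dataclass, field

@dataclass
class Belief:
    hypothesis: str
    prior: float                          # initial P(H)
    updates: list = field(default_factory=list)

    def probability(self) -> float:
        # Current belief: last recorded posterior, or the prior if none.
        return self.updates[-1][2] if self.updates else self.prior

    def update(self, evidence: str, likelihood_ratio: float) -> None:
        # Bayesian update in odds form: posterior odds = prior odds * LR.
        p = self.probability()
        odds = p / (1 - p)
        posterior = (odds * likelihood_ratio) / (1 + odds * likelihood_ratio)
        self.updates.append((evidence, likelihood_ratio, posterior))

b = Belief("Formalizing assumptions reduces talking past each other", prior=0.3)
b.update("positive anecdote from a pilot discussion", likelihood_ratio=4.0)
print(b.probability())                    # ~0.63
for evidence, lr, p in b.updates:         # the trail of how we got here
    print(f"{evidence}: LR={lr} -> P={p:.2f}")
```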
This idea keeps nagging me, but I’m not sure how much sense it makes.
Any thoughts?