A safety case is a hierarchical argument supported by evidence, whose scope is defined by contextual information. The goal is to show that the conclusion of such an argument, typically “the system is acceptably safe”, is true. However, because knowledge about systems is always imperfect, the value true cannot be assigned with absolute certainty. Instead, researchers have proposed assessing the belief that a conclusion is true, which should be high for a safe system. Existing methods for calculating belief have been shown to suffer from limitations that lead to unrealistic belief values. This paper presents a novel method, underpinned by formal definitions of concepts such as a conclusion being true or a context defining the scope. From these definitions, a general probabilistic model for calculating the belief in the conclusion of an arbitrary argument is derived. Because the derived probabilistic model is independent of any safety-case notation, the elements of a commonly used notation are mapped to the formal definitions, and the corresponding probabilistic model is represented as a Bayesian network to enable large-scale calculations. Finally, the method is applied to scenarios where previous methods produce unrealistic values, and it is shown that the presented method produces the expected belief values.
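To give a sense of the kind of calculation involved, the following is a minimal, purely illustrative sketch — not the paper's model. It computes the posterior belief in a safety conclusion C from a tiny Bayesian network with a naive structure (C as parent of each evidence node); the prior and likelihood numbers are invented for illustration.

```python
# Illustrative only: belief in a safety-case conclusion C updated by
# evidence in a minimal Bayesian network (C -> E1, C -> E2).
# All probabilities are hypothetical, not taken from the paper.

def posterior_belief(prior_c, likelihoods):
    """Return P(C=true | observed evidence).

    prior_c:     P(C=true), the prior belief that the conclusion holds.
    likelihoods: list of (P(e_i | C=true), P(e_i | C=false)) pairs,
                 one per observed piece of evidence.
    """
    joint_true = prior_c          # P(C=true, e_1, ..., e_n)
    joint_false = 1.0 - prior_c   # P(C=false, e_1, ..., e_n)
    for p_given_true, p_given_false in likelihoods:
        joint_true *= p_given_true
        joint_false *= p_given_false
    # Normalise: Bayes' rule over the two states of C.
    return joint_true / (joint_true + joint_false)

# Hypothetical scenario: neutral prior (0.5) and two supporting
# pieces of evidence; belief in the conclusion rises accordingly.
belief = posterior_belief(0.5, [(0.9, 0.2), (0.8, 0.3)])
print(round(belief, 3))  # prints 0.923
```

With no evidence observed, the function simply returns the prior; each supporting observation then raises the belief in proportion to how much more likely it is under a true conclusion than under a false one.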