In some domains, standards such as ISO 26262 or the UK Ministry of Defence's Defence Standard 00-56 require developers to produce a safety case. As the safety case for a complex system can be rather large, automated verification of all or part of it would be valuable. We have approached the issue by designing a method, supported by a framework, that includes analysers for safety cases defined in the Goal Structuring Notation (GSN) and for systems modelled in the Architecture Analysis and Design Language (AADL). In our approach, safety case predicates are defined in a subset of the functional language Meta Language (ML). This facilitates formalising parts of a typical safety argument in an ML-like notation, enabling automatic verification of some reasoning steps in the argument. Automatic verification not only justifies increased confidence; it also eases the burden of re-checking the safety argument as it (and the system) change.
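To give a flavour of what an ML-encoded safety case predicate might look like, the following is a minimal, purely illustrative sketch. The `component` type, the failure-rate field, and the `meets_target` predicate are our own assumptions for exposition; they are not the notation or API defined by the framework described above.

```ocaml
(* Hedged sketch: a safety-case predicate over a hypothetical component
   model, in the spirit of the ML subset mentioned in the text.
   All names and fields here are illustrative assumptions. *)

type component = { name : string; failure_rate : float }

(* Goal-style predicate: every component's failure rate is at or below
   the required threshold, so the quantitative sub-goal is discharged. *)
let meets_target (threshold : float) (cs : component list) : bool =
  List.for_all (fun c -> c.failure_rate <= threshold) cs

let () =
  let system =
    [ { name = "sensor";   failure_rate = 1e-6 };
      { name = "actuator"; failure_rate = 5e-7 } ]
  in
  if meets_target 1e-5 system then print_endline "goal discharged"
  else print_endline "goal NOT discharged"
```

Because such predicates are executable, re-checking them after a change to the system model is mechanical, which is the kind of burden reduction the paragraph above refers to.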