Autonomous, or self-driving, cars are emerging as a solution to several road problems primarily caused by human drivers, such as accidents and traffic congestion. However, these benefits come with significant challenges in verification and validation (V&V) for safety assessment. In fact, due to the possibly unpredictable nature of Artificial Intelligence (AI), its use in autonomous cars raises concerns that must be addressed through appropriate V&V processes capable of ensuring trustworthy AI and safe autonomy. In this study, the relevant research literature of recent years has been systematically reviewed and classified in order to investigate the state of the art in the software V&V of autonomous cars. Using appropriate criteria, a subset of primary studies has been selected for more in-depth analysis. The first part of the review addresses certification issues against reference standards, challenges in assessing machine learning, as well as general V&V methodologies. The second part investigates more specific approaches, including simulation environments and mutation testing, corner cases and adversarial examples, fault injection, software safety cages, techniques for cyber-physical systems, and formal methods. Relevant approaches and related tools have been discussed and compared in order to highlight open issues and opportunities.