Generating and evaluating three-dimensional (3D) models of unmanned aerial vehicles (UAVs) in forest environments often poses challenges due to the low-contrast texture features of trees, especially in the presence of sparse tree cover. Producing an accurate 3D model often involves testing a range of scenarios and processing settings to identify the optimal model. Commonly, external measurements such as Terrestrial Laser Scanning (TLS) or criteria such as residuals on Ground Control Points (GCPs) and Reprojection Error (RE) are used to determine the optimal model. Although TLS can provide insight into the quality of UAV-based 3D forest models, it is expensive and often not accessible. Here, we further demonstrate that residuals on GCPs and RE are inadequate for properly assessing UAV-based 3D forest models. To address this gap, we present CaR3DMIC (the source code and sample dataset can be downloaded from https://doi.org/10.5281/zenodo.10590372), an alternative method to evaluate 3D models of forests based solely on tree features derived directly from the imagery. The method is based on the idea of processing the images twice through the photogrammetric workflow: the first time to create the 3D model, and the second time in the opposite direction to re-create the original images from the derived 3D model. The re-created images are then systematically compared against the original images to calculate an evaluation metric. We demonstrate the effectiveness of the proposed method for evaluating 3D forest models using diverse reference sets of segmented tree crowns, tree crown height, tree crown area, and Diameter at Breast Height (DBH), which are crucial 3D products for forest management. The evaluation of the 3D models optimized using CaR3DMIC yields high correlations with the reference measurements, suggesting that CaR3DMIC is a suitable approach for identifying optimal settings to derive high-quality 3D models from UAV imagery.
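
To illustrate the comparison step described above, the following is a minimal sketch of how re-created images could be scored against the originals. It is not the CaR3DMIC implementation (see the Zenodo repository for the actual code); it assumes zero-mean normalized cross-correlation as a stand-in similarity measure, and the directory layout, file naming, and function names (`score_model`, `normalized_cross_correlation`) are hypothetical.

```python
from pathlib import Path

import numpy as np
import imageio.v3 as iio


def to_grayscale(img: np.ndarray) -> np.ndarray:
    """Collapse an RGB image to a single luminance channel."""
    return img.mean(axis=2) if img.ndim == 3 else img.astype(np.float64)


def normalized_cross_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation between two grayscale images."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0


def score_model(original_dir: Path, recreated_dir: Path) -> float:
    """Average similarity over all original/re-created image pairs.

    Higher scores indicate that the 3D model reproduces the original
    imagery more faithfully; the processing settings whose model scores
    highest would be retained.
    """
    scores = []
    for orig_path in sorted(original_dir.glob("*.png")):
        rec_path = recreated_dir / orig_path.name  # re-created counterpart
        if not rec_path.exists():
            continue
        orig = to_grayscale(iio.imread(orig_path))
        rec = to_grayscale(iio.imread(rec_path))
        scores.append(normalized_cross_correlation(orig, rec))
    return float(np.mean(scores)) if scores else float("nan")


# Usage sketch: rank candidate processing settings by their similarity score,
# assuming each settings label maps to a folder of re-created images.
# candidates = {"dense_matching_high": Path("recreated/high"),
#               "dense_matching_low": Path("recreated/low")}
# best = max(candidates, key=lambda s: score_model(Path("originals"), candidates[s]))
```

In this sketch, the choice of similarity measure is interchangeable; any per-image metric computed between original and re-created views could be aggregated in the same way to rank candidate 3D models.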