To elaborate on Vlad's post: the typical photogrammetry workflow isn't feasible for things like foliage, and a slightly different approach is required. I'm not a pro at foliage yet, but I hope I can help a bit. A leaf is usually too thin for current photogrammetry to reconstruct a coherent model, and chunking won't help you here. You could merge two one-sided models together in post, but the output is unsuitable for real-time rendering (it will cripple your performance) and takes a fair amount of work on your end to produce. On top of that, a leaf is a semi-translucent object that owes much of its appearance to subsurface scattering, so a different technique is needed to render it accurately anyway.
The general idea is to capture surface detail along single planes (e.g. the top of the leaf) and generate maps that combine into "flat" atlases, which you then use in programs like 3ds Max and Blender to build game assets that appear 3D but use as few polys as possible. You can get fantastic results with the 4 or 5 shot method Vlad described, but as he alluded to, producing higher-quality maps that render more realistically requires special techniques involving highly controlled lighting. I'd recommend checking out one of Megascans' free plant atlases to see what an atlas typically contains. This video shows the theory of creating very basic game assets from an atlas. More advanced assets lightly subdivide the plane and bend it to strengthen the 3D effect, which also lets you create artificial variety, and they can include animation to make the asset more dynamic. @artbyrens is a good source to follow as he's a beast with foliage.
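To make the atlas-to-card idea concrete, here's a minimal sketch of the UV math involved: each foliage card is just a quad whose UVs point at one cell of the atlas grid. The function names and the assumption of a uniform grid layout are mine for illustration; real atlases (Megascans included) often pack irregular rectangles instead.

```python
def atlas_cell_uvs(col, row, cols, rows):
    """UV rectangle (u0, v0, u1, v1) for cell (col, row) in a cols x rows atlas grid."""
    u0, v0 = col / cols, row / rows
    u1, v1 = (col + 1) / cols, (row + 1) / rows
    return (u0, v0, u1, v1)

def card_corner_uvs(col, row, cols, rows):
    """Per-corner UVs for a single quad card, counter-clockwise from the bottom-left."""
    u0, v0, u1, v1 = atlas_cell_uvs(col, row, cols, rows)
    return [(u0, v0), (u1, v0), (u1, v1), (u0, v1)]

# Example: cell (1, 2) of a 4x4 atlas covers the UV range [0.25, 0.5] x [0.5, 0.75].
print(card_corner_uvs(1, 2, 4, 4))
```

A more advanced asset would subdivide that quad into a strip of segments and offset the interior vertices to bend the card, but the UV mapping stays exactly the same.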
As for Photoscan chunking, the manual provides the best explanation of the methods that I've seen to date. The general idea is that you can merge models by brute-force matching reference points between them, but that can be a very lengthy process. You can shorten it dramatically by placing markers with known GPS coordinates or by feeding in camera locations. In many professional scenarios the camera location is GPS metadata written by the camera itself and can therefore be very accurate; if you instead use camera positions estimated by Photoscan, you can have trouble merging chunks because those positions are estimates subject to reprojection error. In the end these features simply allow for quicker merging.
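To illustrate what "merging via reference points" boils down to mathematically: given the same markers measured in two chunks' coordinate frames, you solve for the transform that maps one frame onto the other. This is not Photoscan's actual code, just a sketch of the standard least-squares rigid alignment (the Kabsch algorithm) with NumPy; real chunk alignment also solves for scale, which I've left out for brevity.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t such that dst ~= src @ R.T + t.

    src, dst: (N, 3) arrays of the SAME markers in two coordinate frames.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)

    # Cross-covariance of the centered point sets.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)

    # Correct for a possible reflection so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

With three or more well-spread markers this has a closed-form solution, which is why providing known marker coordinates (or GPS camera positions) makes merging so much faster than brute-force point matching.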