Hope-Taylor in 3D
Photogrammetry is a technique now widely used in archaeology: 3D models of finds, objects, trenches and buildings can be constructed by computer from a series of photographs.
We have hundreds of photographs taken during Brian Hope-Taylor’s original excavations at Yeavering, and we wondered whether some of them could be processed photogrammetrically to recreate scenes from the excavations as 3D models.
What is photogrammetry?
In photogrammetry, software scans a series of photographs and identifies points common to several of them. A line of sight can be constructed from each camera position to each identified point on the object, and these lines are then mathematically intersected to produce 3D coordinates for the points of interest. From these the software interpolates a ‘point cloud’ for the entire object, which is then converted into a mesh of triangles. Finally, the mesh is texture-mapped using the original photographic images to give the model a realistic surface.
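For the curious, the intersection step can be sketched in a few lines of code. This is a minimal illustration, not what any particular photogrammetry package does internally: it assumes two hypothetical cameras whose positions are already known, and uses standard linear triangulation to intersect their lines of sight to a single point.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Intersect two lines of sight (linear triangulation).

    P1, P2: 3x4 camera projection matrices (position/orientation known).
    x1, x2: (u, v) image observations of the same point in each photo.
    Returns the estimated 3D point as a length-3 array.
    """
    # Each observation gives two linear constraints on the 3D point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the smallest singular vector of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: one at the origin, one offset 1 unit along x (a baseline).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point into both cameras, then recover it.
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)  # recovers X_true up to rounding
```

Real software repeats this for thousands of matched points across many photographs, which is how the point cloud is built up.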
In an ideal world, photography destined for the technique is a very controlled process: a calibrated lens, usually of fixed focal length, is used; lighting is kept soft where possible; and a systematic series of photographs is taken to cover the entire object, with extra images recording fine or difficult-to-render areas.
The model shown below was recorded in good conditions and is the product of over 70 images. Although no scale bar is shown, the inclusion of ranging markers during recording means the model is to scale, so measurements, including software-generated volumetric and other measurements, can be taken directly from it. You can rotate and zoom the model with your mouse.
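The scaling itself is simple arithmetic. As a sketch, with hypothetical marker coordinates picked from a model, one known real-world marker distance fixes the conversion for every other measurement:

```python
import math

# Hypothetical coordinates of two ranging markers, read off the model
# in the model's own arbitrary units.
marker_a = (0.12, 0.40, 0.05)
marker_b = (1.86, 0.44, 0.07)

# The real distance between those markers, e.g. a 2 m ranging rod.
real_distance_m = 2.0

# One known distance fixes the scale for the whole model.
model_distance = math.dist(marker_a, marker_b)
scale = real_distance_m / model_distance

def to_metres(model_length):
    """Convert any length measured on the model into metres."""
    return model_length * scale
```

This is why the moved ranging rods in the archive photographs matter so much: without a reliably placed marker, the model's scale cannot be anchored to the real world.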
We are, without doubt, privileged to have access to a very comprehensive photographic record of Hope-Taylor’s excavations at Yeavering thanks to the foresight and skill of the man himself.
However, the images are not ideal for photogrammetry.
Monochrome images give the software less information from which to identify common points. To make things considerably more problematic, the images were taken on different cameras, with lenses of differing focal lengths. Photographs of trenches were taken days or more apart, in very different lighting, weather and soil conditions, and the ranging rods used to scale the photographs (and later the 3D models) move from image to image.
Creating a 3D model from the old images therefore takes a great deal of work. Hundreds of hand-mapped control points are added to help the software align the photographs. Conflicting features, such as the ranging rods, have to be masked out of every image in which they appear, and the exposure of the images is adjusted digitally. Some images are not suitable for the full process, yet with manually added spatial markers even these rejected images prove vital in establishing the point cloud and mesh.
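The exposure and masking steps can be pictured with a toy example. This is only a sketch of the idea: real photogrammetry packages take per-image masks as separate inputs rather than blanking pixels, and exposure correction is usually more sophisticated than a linear rescale.

```python
import numpy as np

def normalise_exposure(image, target_mean=0.5):
    """Linearly rescale a greyscale image (values 0..1) toward a common
    mean brightness, so frames shot under different lighting, days apart,
    look more alike to the matcher."""
    return np.clip(image * (target_mean / image.mean()), 0.0, 1.0)

def apply_mask(image, keep):
    """Blank out regions flagged as conflicting (e.g. where a moved
    ranging rod appears); `keep` is True where pixels should survive."""
    return np.where(keep, image, 0.0)

# A toy 4x4 'photograph' and a mask hiding its right-hand column,
# standing in for a ranging rod that moved between shots.
img = np.linspace(0.1, 0.9, 16).reshape(4, 4)
keep = np.ones((4, 4), dtype=bool)
keep[:, -1] = False

balanced = normalise_exposure(img)
cleaned = apply_mask(balanced, keep)
```

Doing this by hand for hundreds of archive photographs is where most of the midnight oil goes.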
Much midnight oil was burned.