In late August my good friend Jack Nicklin-Jewell got in touch. He was in rehearsals programming the upcoming tour of War Horse and had an issue: the 3D model of the projection screen didn’t match the physically built screen. d3 was the media server of choice for the tour, and it was important that the mesh used in d3 matched the physical set as closely as possible.
Jack had been supplied with the CAD drawings the set had been constructed from, but additional set dressing had been added to the original plywood base to make it look like a giant piece of paper torn from a page, hence the name The Rip. The install of the show in various venues around the UK was going to be very tight and handled by LX, so to help with a faster, slicker line-up, fibre optic LEDs were embedded into the surface of the screen. This meant that line-up was a simple task of dragging the “quickcal” points from the 3D mesh onto the physical set, and the software would work out the rest.
When I first got the call, the production had requested a scan with my Faro Focus, but after further discussion and finding out what the actual job was, I decided to reconstruct the set using photogrammetry instead. The main reason was to make sure the fibre optic line-up dots were captured, a detail that could be missed by the scanner; I also have more control over the captured image texture in terms of light levels and quality compared with the image capture on the scanner.
I was already travelling down to London to do my yearly CVC banking conference with Graymatter and Giggle Studio, so I also packed my Sony A7V to do the image capture of “The Rip” at Wimbledon before my main job. Over the last few years I’ve been using photogrammetry quite a bit for constructing models, but only in the last 12 months has it become part of my official workflow, supplementing or replacing a traditional scan depending on the project and the environmental factors.
I arrived in London early and checked into my hotel; I wasn’t on the Graymatter job until the following day, so this fitted in nicely. I packed my camera kit and headed over to the New Wimbledon Theatre where they were rehearsing War Horse. I met Jack and Alex there and we went in to find an opportune time to capture the images. I needed a clear stage, but as soon as the actors had finished onstage, everyone else wanted access to do other stuff, classic theatre. We eventually found a slot where I had 20 minutes to get the images. Ahead of time I had prepared some 4K static noise to project onto the surface; this generates features on the plain set piece to help the software reconstruct a mesh. With few or no features, it’s very difficult for photogrammetry software to generate a model from the images. Projecting a basic static pattern on a plain surface was a little trick I developed when doing some R&D into this problem earlier in the year, and it works a treat.
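If you want to try the projected-noise trick yourself, a pattern like this is easy to generate. A minimal sketch (not my actual production file, just one way to do it with NumPy and Pillow; the block size and seed are arbitrary):

```python
import numpy as np
from PIL import Image

def make_noise_pattern(width=3840, height=2160, cell=4, seed=42):
    """Generate a high-contrast random block pattern for projection.

    `cell` is the size of each noise block in pixels; blocks a few
    pixels wide survive projector focus and camera resolution far
    better than single-pixel noise does.
    """
    rng = np.random.default_rng(seed)
    # Random black/white blocks at reduced resolution.
    small = rng.integers(0, 2, size=(height // cell, width // cell),
                         dtype=np.uint8) * 255
    # Upscale with nearest-neighbour so the blocks stay crisp.
    return Image.fromarray(small, mode="L").resize((width, height),
                                                   Image.NEAREST)

if __name__ == "__main__":
    make_noise_pattern().save("noise_4k.png")
```

Play the resulting image out of the projector at native resolution so the blocks land sharp on the surface.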
Above are a few of the R&D images using projected noise to create a reference base.
Eventually, after rehearsals were done, I had time onstage to capture the images. It was quite a dark stage even with the lighting and projected noise hitting the projection surface, so I had to use a higher ISO and slower shutter speed than I’d have liked, but I was on a tripod, so the shutter speed wasn’t too much of a problem. I took several test shots, checked them on the camera, and adjusted my settings until I was happy. I think I took around 80 images in a linear parade at various distances and angles to the surface. In addition to the set piece, I also made sure I captured the proscenium arch, which I had physically measured; I then used this distance to scale the whole scene when reconstructed so the virtual matched the physical.
After a little bit of tweaking in DxO PhotoLab, I brought all of the images into RealityCapture, and RC did an excellent job of aligning the cameras straight away. I set a reconstruction box to get rid of the rest of the theatre that didn’t need reconstructing, but kept in the proscenium left and right, which I was using to uniformly scale the scene. The trick of projecting noise on the surface worked a treat and gave me a nice clean reconstruction of the surface.
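RealityCapture has its own tools for defining a distance between control points, but the underlying maths of a uniform scale is just a ratio between the measured distance and the same distance in the reconstruction. A toy sketch (all numbers hypothetical):

```python
def uniform_scale_factor(measured_m, p_left, p_right):
    """Scale factor mapping the reconstruction's arbitrary units to
    metres, given the real measured distance between two reference
    points (e.g. proscenium left/right) and their reconstructed
    (x, y, z) positions.
    """
    dx, dy, dz = (a - b for a, b in zip(p_left, p_right))
    virtual = (dx * dx + dy * dy + dz * dz) ** 0.5
    return measured_m / virtual

# Hypothetical example: a proscenium measured at 9.2 m that came
# out as 4.6 units in the reconstruction -> scale everything by 2.
scale = uniform_scale_factor(9.2, (0.0, 0.0, 0.0), (4.6, 0.0, 0.0))
```

Apply that factor uniformly to the whole scene and every other dimension in the model comes out in real-world units for free.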
Once I had scaled the scene, I filtered out all of the polys that weren’t part of the set piece, generated an 8K texture, and exported it as an FBX ready to go into Cinema 4D for the retopology and UVs.
There was an existing model built from the CAD that didn’t match the set, but there was also an associated UV with that model which I needed to closely match in the new retopo I was doing from the capture data. For testing purposes, to send to Jack and make sure the workflow fully worked, I did a bit more clean-up of the mesh in C4D and made a quick re-mesh using QuadRemesher, a great tool for a quick quad-based mesh. I then did some further clean-up on the re-mesh, matched the UV, and sent it over to Jack for testing.
I then went back to the model and started a much cleaner retopo; QuadRemesher is good, but in many cases it doesn’t give a clean edge on an object of this shape. For my manual retopo I started with a cylinder and scaled it up so its radius sat exactly in the “cup” of the screen mesh. I then adjusted the horizontal and vertical subdivisions to get a good density of quads; once I was happy, I made the object editable and roughly deleted all the quads I didn’t need around the edges. Once I had the overall shape, I used the photogrammetry mesh as a guide and, with the knife tool, cut my clean quad mesh down until it was exactly the same shape as the captured mesh. It’s totally worth doing these things manually so you have full control of the process; having a clean mesh like this also made the UVing much easier and cleaner to closely match the existing UV.
The next step was to locate the fibre optics in the photogrammetry texture and transfer them to a UV layer, so this could be overlaid on the mesh for the quick-cal during projection line-up. For this, I basically overlaid the source mesh with its texture on top of the new mesh and then, using the 3D painting mode in the UV window in C4D, painted directly on the new mesh so each dot transferred to a UV layer. Below is a basic example where I painted a face on a cube in 3D space and it transfers to the UV; it’s super useful for things like this.
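I did the dot transfer by hand-painting in C4D, but if you wanted to automate picking the dots out of a capture texture, simple thresholding plus connected-component centroids would get you most of the way. A rough sketch in pure NumPy (the threshold value is hypothetical and would need tuning to your texture):

```python
import numpy as np
from collections import deque

def find_bright_dots(gray, thresh=200):
    """Return (row, col) centroids of connected bright blobs in a
    2D uint8 image -- a crude stand-in for spotting fibre-optic
    dots in a texture map.
    """
    mask = gray >= thresh
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    centroids = []
    for r in range(h):
        for c in range(w):
            if mask[r, c] and not seen[r, c]:
                # Flood-fill one blob, collecting its pixels.
                q = deque([(r, c)])
                seen[r, c] = True
                pix = []
                while q:
                    y, x = q.popleft()
                    pix.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                ys, xs = zip(*pix)
                centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centroids
```

The centroids come back in texture pixel coordinates, which map straight to UV space once divided by the texture dimensions.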
In addition, I made another mesh matching the fibre optic markers as a helper mesh, which could be loaded into d3 alongside the clean mesh, as that’s how Jack had it set up and I like consistency.
That’s it for this blog post; this was a fun project spanning multiple skillsets, and it was great working with good people.
FIN