Exile came out 19 years ago.
And it's been 2 years since I started this project.
And over a year since my last post.
This project isn’t dead, I promise.
The Bad, the Ugly, the Good, and the Lame
The Bad
There have been setbacks and bumps in the road that have kept me from spending anywhere near as much time on this as I would like. So this site has been sitting here like a log.
But I have been quietly working on things in the background whenever I could.
The Ugly
In the intervening time between posts I have vacillated about ditching UE4.
The big roadblock I was hitting with Tomahna was in getting lighting and materials to match the original game. UE4 can create stunning visuals, but I am trying to do a recreation that is as close to a 100% visual reproduction of the original as possible. And I really feel like I’m fighting with Unreal. Lighting is such a critical element, and I don’t feel like I have good control of it with UE4.
I was also having trouble on the backend. Actual coding (not blueprints) in Unreal is painful. Getting a VR setup working correctly can also be painful. Unity is better for both, but Unity has significant drawbacks of its own. Unreal is also such a massive, heavy, kinda bloated-feeling engine. It takes forever just to load the editor, and movement feels laggy and slow to respond even in test scenes.
I’ve been exploring Godot for the last year. It’s come a long way and I really like it at this point, but it’s not yet mature enough to be used for this project. But maybe when version 4.0 is released later this year… Stay tuned.
Frustrated by my lack of meaningful progress in getting the high fidelity visuals I wanted, I spent time focusing more on areas that are not directly related to the original scope of this project (which was just those two little rooms in Tomahna).
The Good
Last time I posted I was still struggling with getting good results with photogrammetry, which is really important for a good recreation of the larger worlds from the game. Most of the software out there is not geared for panoramas, and I was trying to figure out how to roll my own solution.
Fortunately, I finally did find a (relatively) functional solution.
Agisoft's Metashape (previously PhotoScan).
I feel a little silly about it, since it's one of the big names in photogrammetry software, and it supports spherical (equirectangular) panoramas. A few years ago I was trying a whole bunch of different things, and I remember attempting to use PhotoScan. I tested the feature using the panoramas from Tomahna (which needed to be reprojected) as well as the frames from the flyby. I got pretty terrible results, became discouraged with PhotoScan, and moved on.
In hindsight I was clearly testing the feature with too small a dataset. And the compression on the frames of Tomahna's flyby is particularly bad. So it's not surprising (now) that I got such awful results. Garbage in, garbage out, as they say.
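As an aside, the reprojection itself is the easy part these days. Here's a rough sketch of turning a node's cube-map faces into an equirectangular image using the py360convert package; the filenames, face order, and output size are just my own illustration:

```python
# Rough sketch: reproject a node's six cube faces into one
# equirectangular panorama for photogrammetry. Assumes numpy,
# imageio, and py360convert are installed; filenames are made up.
import numpy as np
import imageio.v2 as imageio
import py360convert

# py360convert's "list" cube format expects faces in F, R, B, L, U, D order.
face_names = ["front", "right", "back", "left", "up", "down"]
faces = [np.asarray(imageio.imread(f"tomahna_node01_{name}.png"))
         for name in face_names]

# A 2:1 equirectangular image; twice the face size keeps most of the detail.
face_h = faces[0].shape[0]
equi = py360convert.c2e(faces, h=2 * face_h, w=4 * face_h,
                        mode="bilinear", cube_format="list")

imageio.imwrite("tomahna_node01_equirect.png",
                np.clip(equi, 0, 255).astype(np.uint8))
```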
A little while after my last post, in a fit of frustration, I did another survey of the photogrammetry software out there. Somehow, after all this time, 360 panorama support is still pretty dire (and forget directly using cubemaps). Reality Capture still doesn't offer support for them, after years of rumors that it was an upcoming feature. PhotoScan (now Metashape) still seems to be the only game in town that offers some kind of support for spherical cameras.
So I decided to give it another try. But this time I had a much larger dataset. And this time I got much better results.
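For the curious, the core of the workflow in Metashape's Python API is pretty short. This is a minimal sketch based on my understanding of the 1.6-era API (parameter names have shifted between versions, so treat it as illustrative); the key bit is flagging the sensors as spherical:

```python
# Minimal sketch: align a set of equirectangular panoramas in Metashape.
# Based on the Metashape 1.6-era Python API; paths are placeholders.
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["node01_equirect.png",
                 "node02_equirect.png",
                 "node03_equirect.png"])

# Tell Metashape these are 360 panoramas, not pinhole images.
for sensor in chunk.sensors:
    sensor.type = Metashape.Sensor.Type.Spherical

chunk.matchPhotos(downscale=1, generic_preselection=True)
chunk.alignCameras()
doc.save("tomahna.psx")
```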
Enough talk, I think some pictures are in order.
The Lame
Unfortunately, this isn’t a silver bullet.
The whole process depends on a lot of redundancy between different views of the scene. It works quite well when the panoramas are situated so that they are all surrounded by the same environment, like inside a room or a chasm. But it simply doesn't work if there are too few panoramas in a place (like Tomahna). It also breaks down when the panoramas surround something large, or are strung out in a daisy chain (a cave or passage, or a transition from interior to exterior). The result is a collection of separate areas that need to be manually adjusted to relate properly to each other.
Additionally, Metashape seems unable to align still images with the nodes. I'm not sure if there is something I could be doing differently that might get it working. But as it stands, it means FMVs and flybys require an enormous amount of manual work to get the cameras where they are supposed to be, and they don't benefit the reconstructed mesh as much as I think they otherwise could. As an example, I can get all the panoramas in an area like the exterior of Jnanin automatically aligned, and I can get the frames of the flybys from Jnanin aligned with themselves, but getting them aligned together requires manually marking reference points on each frame, which is tedious and takes a lot of time.
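The one saving grace is that the marker drudgery can at least be partially scripted. Something along these lines lets you pin the same physical point onto a flyby frame and a panorama; the camera labels and pixel coordinates here are obviously made up:

```python
# Sketch: pin one shared reference point onto two cameras so that
# separately aligned image sets can be tied together. Labels and
# pixel coordinates are hypothetical.
import Metashape

chunk = Metashape.app.document.chunk
cameras = {cam.label: cam for cam in chunk.cameras}

marker = chunk.addMarker()
marker.label = "jnanin_ref_01"

# (camera label, x, y) observations of the same physical point.
observations = [
    ("flyby_frame_0120", 512.0, 300.0),
    ("pano_node_05", 1840.0, 760.0),
]
for label, x, y in observations:
    marker.projections[cameras[label]] = Metashape.Marker.Projection(
        Metashape.Vector([x, y]), True)  # True marks it as pinned
```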
On top of that, Metashape has the concept of a chunk (really just a collection of images that produce a model). Each section of a scene can have its own chunk, and chunks can be aligned to one another. Which is great; that's exactly what this project needs. But Metashape's interface feels incredibly limited in how you can interact with chunks. In fact, I found myself needing to delve into writing Python scripts to do things that seem pretty damn basic, like the example below.
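For instance, here's roughly what manually nudging one chunk's coordinate frame into agreement with another looks like in script form. The matrix values are placeholders for a fit you'd have to eyeball or derive yourself, and the chunk order depends on the project:

```python
# Sketch: relate two chunks by editing a chunk transform directly.
# Matrix values are placeholders; chunk order depends on the project.
import Metashape

doc = Metashape.app.document
exterior, cave = doc.chunks[0], doc.chunks[1]

# ChunkTransform.matrix is a 4x4 similarity transform. Offset the cave
# chunk into the exterior chunk's coordinate frame.
offset = Metashape.Matrix([[1, 0, 0, 12.5],
                           [0, 1, 0, 0.0],
                           [0, 0, 1, -3.2],
                           [0, 0, 0, 1.0]])
cave.transform.matrix = exterior.transform.matrix * offset
doc.save()
```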
All of this makes the process of aligning and orchestrating these different sections of a scene into a single whole rather painful. And that work needs to be completed before the reconstructions can be finalized and used as a basis for the creation of new assets. For this project to move to the next level, a large amount of manual work in Metashape will need to be done.
I’m pretty sure it’s going to have to be dragged across the finish line.