Translations database for the VL Server.
To a first approximation, the translation database acts like an alist mapping translation names (see vl-tname-p) to their vls-data-p contents. However, the true story is somewhat more complex, owing to our desire to keep translations loaded in a more persistent way.
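The alist view of the database can be sketched as follows. This is a toy model in Python, not the actual implementation; the translation names and data are hypothetical.

```python
# To a first approximation, the database is an assoc-list from
# translation names to their contents (all names here are made up).
db = [("chip-jan-01", {"mods": ["top"]}),
      ("chip-jan-02", {"mods": ["top", "alu"]})]

def lookup(tname, alist):
    # assoc-style lookup: return the value of the first matching pair,
    # or None when the translation is not present
    for key, val in alist:
        if key == tname:
            return val
    return None

print(lookup("chip-jan-02", db)["mods"])  # ['top', 'alu']
```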
VLS must be fast enough to respond to the web server in real time. Even using serialize-read with ordinary conses, a single translation can take over a minute to load, so it is completely infeasible to load translations on a per-connection basis. Instead, we need some way to keep translations pre-loaded.
This is a challenge due to the number and size of our translations. At the time of this writing, we are translating 12 versions of the chip every day, so if we save translations for two weeks and don't translate on the weekend, that's 120 models that VLS needs to provide access to. It's easy to imagine these numbers increasing.
We think it would take too long (and perhaps too much memory) to load everything at once. Instead, we will try to load translations only as the need arises. This might sometimes impose a 1-2 minute wait on a user who wants to get started with a translation that isn't already loaded. But, if we simply make sure that all the translations in stable are pre-loaded, then this penalty will probably only be encountered by users working with bleeding edge or older translations.
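The load-on-demand scheme above can be sketched as a lazily populated cache that eagerly pre-loads the stable translations. This is an illustrative sketch only; the class and names are hypothetical, and the stand-in loader elides the real (minute-plus) deserialization work.

```python
# Sketch of on-demand loading with eager pre-loading of "stable"
# translations (all names hypothetical).
class LazyTranslationDb:
    def __init__(self, preload=()):
        self._cache = {}
        for tname in preload:           # pre-load the popular models up front
            self._cache[tname] = self._load(tname)

    def _load(self, tname):
        # Stand-in for the slow (1-2 minute) deserialization step.
        return f"<translation {tname}>"

    def get(self, tname):
        # First request for a model pays the load cost; later requests
        # hit the cache and return immediately.
        if tname not in self._cache:
            self._cache[tname] = self._load(tname)
        return self._cache[tname]

db = LazyTranslationDb(preload=["stable"])
print("stable" in db._cache)       # True: stable users never wait
db.get("older-model")              # first access pays the load penalty
print("older-model" in db._cache)  # True: now cached for everyone
```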
It might be a good idea to come up with a way for translations to be unloaded after they are no longer being used. We haven't developed such a mechanism yet, partially because our use of honsing means that the incremental cost of having another module loaded is relatively low.
We hons translations as they are read. An unfortunate consequence of this is that all module loading is done by a single loader thread, so a client might in principle have to wait a long time for its translation if a long list of other translations needs to be loaded first. Another disadvantage, compared with ordinary conses, is that loading a model takes perhaps two or three times as long, increasing the wait for the unfortunate user who wants to look at a model that is not yet loaded.
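The single-loader-thread arrangement can be sketched with a work queue: clients enqueue load requests, and one dedicated thread services them in order, which is why a request can stall behind earlier ones. This is a minimal sketch with hypothetical names, not the VLS implementation.

```python
import queue
import threading

# All loading happens in one loader thread, so requests are served
# strictly in FIFO order (names hypothetical).
load_requests = queue.Queue()
loaded = {}

def loader():
    while True:
        tname = load_requests.get()
        if tname is None:          # sentinel: shut the loader down
            break
        # (The real loader would deserialize and hons the translation,
        # which can take minutes per model.)
        loaded[tname] = f"<data for {tname}>"
        load_requests.task_done()

t = threading.Thread(target=loader, daemon=True)
t.start()

# A client whose model sits behind a long queue must wait its turn.
for name in ["stable-1", "stable-2", "bleeding-edge"]:
    load_requests.put(name)
load_requests.join()   # block until every queued load has finished
print(sorted(loaded))  # ['bleeding-edge', 'stable-1', 'stable-2']
```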
But we think these disadvantages are worth the memory savings, since there is so much structure sharing between translations. At the time of this writing, loading ten translations took almost exactly twice as long with hons (700 seconds instead of 350), but then required only 10 GB of memory instead of 16 GB to store (with 7 GB of this having been reserved for the ADDR-HT). As we imagine loading even more translations, this becomes a pretty compelling advantage.
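The memory savings come from hash consing: structurally identical pieces of different translations are interned so they occupy memory only once. A minimal sketch of the idea (this is an illustration of hash consing in general, not ACL2's hons implementation):

```python
# Toy hash-consing table: cons cells are interned, so building the
# same structure twice yields the very same object in memory.
_hons_table = {}

def hons(car, cdr):
    # Atoms are keyed by value; already-interned structures can be
    # keyed by identity, since equal structures are the same object.
    key = (car if isinstance(car, (int, str)) else id(car),
           cdr if isinstance(cdr, (int, str)) else id(cdr))
    if key not in _hons_table:
        _hons_table[key] = (car, cdr)
    return _hons_table[key]

a = hons(1, hons(2, None))   # build the list (1 2) once...
b = hons(1, hons(2, None))   # ...and again
print(a is b)                # True: the two lists share all their memory
```

The same effect across many large translations is what shrinks ten models from 16 GB down to 10 GB, at the cost of the interning work during loading.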
Note that all of this honsing is done with respect to the Hons Space in the loader thread. From the perspective of client threads, the modules being dealt with are not normed.