Back in action, and slowly catching up, so let's start with this one:
As Fuzzel mentioned, this is not so much dependent on displacement mapping, but rather on doing some correct refraction. For this we would have to insert yet another pass into the rendering pipeline to fake refraction (you will never get the same quality as with ray tracing like in Blender, btw).
Correct refraction seems like an expensive (fps-wise) solution, since it needs at least some vector calculations to apply the Fresnel equations. I also rendered my example above in Blender using a refraction node, and it took more than a minute to render. Luckily, the human eye (at least my eyes) cannot easily tell which refractions are true and which are faked somehow. That is how I found people using displacement mapping (even in 2D) with a simple B&W texture (for instance like I did above, with the facing parameter in Blender defining the color: black = facing, white = parallel to view) to fake refraction. Since it is only 2D pixel shifting based on a static displacement texture, I hoped it would be possible to include it in VPX. And by using precalculated displacement maps and prerendered reflection maps (like above), you would get an approximation of the ray-traced view without a heavy performance penalty.
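To make the idea concrete, here is a minimal sketch of both halves of that trade-off: Schlick's cheap approximation of the Fresnel reflectance (the usual real-time stand-in for the full Fresnel equations), and the 2D pixel-shifting trick, done in NumPy rather than a shader. The function names, the `strength` parameter, and the use of the displacement map's gradient as the shift direction are my own assumptions for illustration, not how VPX or Blender actually implement it.

```python
import numpy as np

def schlick_fresnel(cos_theta, f0=0.04):
    """Schlick's approximation of Fresnel reflectance.

    f0 is the reflectance at normal incidence (~0.04 for glass);
    at grazing angles (cos_theta -> 0) the result approaches 1.
    """
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def fake_refract(background, displacement, strength=8.0):
    """Fake refraction by 2D pixel shifting.

    Instead of tracing rays, sample the already-rendered background
    at an offset driven by the gradient of a static B&W displacement
    map (black = facing the viewer, white = parallel to the view).
    A flat map produces no shift at all.
    """
    h, w = displacement.shape
    # Gradient of the displacement map gives the per-pixel shift direction.
    gy, gx = np.gradient(displacement.astype(np.float32))
    ys, xs = np.indices((h, w))
    # Offset the sampling coordinates and clamp them to the image bounds.
    sx = np.clip(xs + (gx * strength).astype(int), 0, w - 1)
    sy = np.clip(ys + (gy * strength).astype(int), 0, h - 1)
    return background[sy, sx]
```

Since the displacement texture is static, `gx`/`gy` (and hence `sx`/`sy`) could be precalculated once per object, leaving only a single dependent texture fetch per pixel at runtime, which is exactly why this is so much cheaper than true refraction.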
Yes. I did some tests on this, and it's not as easy to add as I thought. The biggest problem is when you have multiple transparent elements stacked over each other: if you want just one of them to be refraction mapped, it looks totally wrong, because the depth sorting isn't correct anymore. To solve this we would have to render the dynamic elements twice, and then you really need a fast high-end card.
I'm not giving up hope that easily, so here are some "what ifs" (without rendering all dynamic elements twice, of course, and just to hopefully trigger a way around the issues):
- Refracting not all at once, but per object: What if you had to specify in the editor every object that is to be refracted (default = no, via a checkbox in the object properties)? After rendering that object normally with lighting (I hope it works like this), a refraction pass would be executed for just that object. So not one refraction pass after everything else, but one per dynamic object. I am not sure whether this would work the same for static objects, or whether you would need to do it once for all static objects.
- Splitting the depth bias range: What if all normal objects (those that can be refracted) had to stay below a certain depth bias? Everything above that threshold would then be rendered on top of the refraction.
- Making the refraction object/texture one per table: What if you had to make one big primitive with one big refraction texture that does all the refraction? This could make it impossible to have refraction on refraction, though.
I do not know exactly how the rendering pipeline works, so all of the above is guesswork on my end. I just hope it triggers a brainwave for a possible solution/workaround.
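The depth-bias split could be sketched as a simple partition of the render order: draw everything below the threshold, capture the framebuffer, run the refraction pass once, then composite the rest on top. Everything here (the `RenderObject` class, the `depth_bias` field, the pass names) is hypothetical and not actual VPX code; it only shows the ordering, under the assumption that a single framebuffer capture is enough.

```python
from dataclasses import dataclass

@dataclass
class RenderObject:
    name: str
    depth_bias: float

def render_order(objects, threshold):
    """Build a pass list for the depth-bias-split idea.

    Objects below the threshold are drawn first and may be refracted;
    the refraction pass then runs once on the captured framebuffer;
    objects at or above the threshold are drawn on top, unaffected.
    """
    below = [o for o in objects if o.depth_bias < threshold]
    above = [o for o in objects if o.depth_bias >= threshold]
    passes = [("draw", o.name) for o in below]
    passes.append(("refraction_pass", "captured framebuffer"))
    passes += [("draw", o.name) for o in above]
    return passes
```

One consequence visible even in this toy version: anything above the threshold can never appear *behind* a refracting surface, which is exactly the restriction the split buys its simplicity with.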