Never mind, after many frustrating hours with BAM (I just couldn't get it to load the head tracking plugin) I gave up on BAM and started looking into the VPX source code to see how it communicates with BAM.
After some digging I found that it uses named shared memory (I found the relevant code in the file BAM_Tracker.h inside the "third-party/include/" folder). The relevant bit is this:
/// <summary>
/// The shared memory / memory-mapped file name
/// </summary>
static const char *SharedMemoryFileName = "BAM-Tracker-Shared-Memory";
/// <summary>
/// Single captured data about player position.
/// </summary>
struct TPlayerData {
    double StartPosition[4]; // x, y, z [mm] + timestamp [ms]
    double EndPosition[4];   // x, y, z [mm] + timestamp [ms]
    double EyeVec[3];        // [normalized vector]
    int FrameCounter;
};
So, basically, I rolled my own head tracking solution with ctypes in Python, then used OpenCV to track my head and feed the data. All good.
Here are some videos of the final result. In one of the videos you can see what happens when the tracker stops working; I have since figured out how to eliminate this problem (it was a webcam focus issue):
https://youtube.com/shorts/Sm7krZm-exo
https://youtube.com/shorts/PXvKNnezIT4
However, there are a few considerations I've made:
- For my current setup (just a vertical monitor and some buttons) it is not worth it, since my head is mostly still.
- To achieve a good effect, a BIG monitor is needed (with a 27" the effect is already good, but I think a 32" would just nail it).
- It really needs to be installed in a cabinet, and you need to be looking mostly down.
- A better solution than a regular webcam is really needed (I managed to push the limits of my Logitech C615 by using compressed frames, b&w, etc., but...). The Kinect would probably be better, since it only feeds the tracking data over USB rather than a whole raster image (if I am correct; still waiting for my Kinect to arrive).
- If you do keep using a webcam, install the vendor software for your cam; in my case I used the Logitech software to fine-tune the focus and managed to almost eliminate dead zones and tracking loss.
Ask me anything about this.
If you're interested in the Python (ctypes) code, here's the relevant bit to set up the named shared memory in case you want to roll your own solution:
import ctypes
import mmap

# Define the TPlayerData struct to match the C++ struct layout
class TPlayerData(ctypes.Structure):
    _fields_ = [
        ("StartPosition", ctypes.c_double * 4),  # x, y, z [mm] + timestamp [ms]
        ("EndPosition", ctypes.c_double * 4),    # x, y, z [mm] + timestamp [ms]
        ("EyeVec", ctypes.c_double * 3),         # normalized vector
        ("FrameCounter", ctypes.c_int),
    ]

# Define the shared memory name and size
SharedMemoryFileName = "BAM-Tracker-Shared-Memory"
SharedMemorySize = ctypes.sizeof(TPlayerData)

# Open the named shared memory segment (on Windows, the third argument is the tagname)
shared_memory = mmap.mmap(-1, SharedMemorySize, SharedMemoryFileName)
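Before writing anything, it's worth sanity-checking that the Python layout really matches the C++ struct. Here's a minimal sketch (not from my tracker; it uses an anonymous mapping so it runs anywhere, whereas on Windows you'd pass the tagname as above):

```python
import ctypes
import mmap

# Same layout as the C++ TPlayerData struct
class TPlayerData(ctypes.Structure):
    _fields_ = [
        ("StartPosition", ctypes.c_double * 4),  # x, y, z [mm] + timestamp [ms]
        ("EndPosition", ctypes.c_double * 4),    # x, y, z [mm] + timestamp [ms]
        ("EyeVec", ctypes.c_double * 3),         # normalized vector
        ("FrameCounter", ctypes.c_int),
    ]

# ctypes follows C struct alignment rules, so this size should agree
# with what the C++ side computes with sizeof(TPlayerData)
size = ctypes.sizeof(TPlayerData)

# Anonymous mapping for the demo; on Windows use
# mmap.mmap(-1, size, "BAM-Tracker-Shared-Memory") instead
shm = mmap.mmap(-1, size)

# Write a struct, then read it back and decode it
data = TPlayerData()
data.FrameCounter = 42
shm.seek(0)
shm.write(bytes(data))

shm.seek(0)
readback = TPlayerData.from_buffer_copy(shm.read(size))
print(readback.FrameCounter)  # 42
```

If the round trip gives you back what you wrote, the struct layout on the Python side is consistent.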
Then it is just a matter of writing the position, something like this:
initial_data = TPlayerData()
initial_data.StartPosition = (ctypes.c_double * 4)(0.0, 0.1, 0.4, 1.10)  # <- edit these values for your setup
initial_data.EndPosition = (ctypes.c_double * 4)(0.0, 0.1, 0.4, 1.10)    # <- edit these values for your setup
initial_data.EyeVec = (ctypes.c_double * 3)(1.0, 1.0, 20.0)              # <- edit these values for your setup
initial_data.FrameCounter = 1
shared_memory.seek(0)
shared_memory.write(bytes(initial_data))
Then, on each frame, alter the values however you want and increment FrameCounter before writing back the changes.
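The per-frame update can be sketched like this. The x/y/z values here are placeholders (a real tracker would compute them from the webcam each frame), and I'm using an anonymous mapping so the sketch is self-contained; on Windows you'd open the named segment as shown earlier:

```python
import ctypes
import mmap
import time

# Same layout as the C++ TPlayerData struct
class TPlayerData(ctypes.Structure):
    _fields_ = [
        ("StartPosition", ctypes.c_double * 4),  # x, y, z [mm] + timestamp [ms]
        ("EndPosition", ctypes.c_double * 4),    # x, y, z [mm] + timestamp [ms]
        ("EyeVec", ctypes.c_double * 3),         # normalized vector
        ("FrameCounter", ctypes.c_int),
    ]

size = ctypes.sizeof(TPlayerData)
# Anonymous mapping for the demo; on Windows:
# shm = mmap.mmap(-1, size, "BAM-Tracker-Shared-Memory")
shm = mmap.mmap(-1, size)

data = TPlayerData()
data.EyeVec[:] = [0.0, 0.0, 1.0]  # placeholder gaze direction

for frame in range(3):  # stand-in for the webcam capture loop
    # In the real loop these would come from the head tracker
    x, y, z = 0.0, 100.0, 400.0           # [mm], placeholders
    ts = time.monotonic() * 1000.0        # timestamp in [ms]
    data.StartPosition[:] = [x, y, z, ts]
    data.EndPosition[:] = [x, y, z, ts]
    data.FrameCounter += 1  # bump the counter so the reader sees a new sample
    shm.seek(0)
    shm.write(bytes(data))
```

The counter increment is the important part: it's what signals to the reader on the other side of the shared memory that a fresh sample has landed.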
I'll probably write about this on my blog, clean up the code a bit, and put it on GitHub when I have the time.
Long story short, I'll get back to polishing this when I finally build my vpin.
Edited by pdev, 21 October 2023 - 09:30 PM.