Multi-cam setup in the wild : Elephants #689
Replies: 1 comment
Hello @kommanderpi,

Thank you for reaching out! Your plans for capturing elephant behavior sound fascinating, and I'm happy to offer some thoughts on the challenges you've outlined. I've broken down my recommendations into a few key areas.

**Camera Calibration: A Two-Stage Process**

It's helpful to think about camera calibration in two distinct stages: intrinsic and extrinsic. Intrinsic calibration characterizes each camera's lens and sensor (focal length, principal point, distortion) and can be done for each camera individually before deployment; extrinsic calibration recovers each camera's position and orientation relative to the others and has to be done with the cameras in place.
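To make the two stages concrete, here is a minimal sketch (the intrinsic matrix and pose are invented for illustration, not values from any real camera) of how the intrinsic matrix K and the extrinsic pose [R | t] combine into a single projection from world coordinates to pixels:

```python
import numpy as np

# Hypothetical intrinsic matrix K (focal lengths fx, fy; principal point cx, cy).
# These values are illustrative, not from any real camera.
K = np.array([[1400.0,    0.0, 960.0],
              [   0.0, 1400.0, 540.0],
              [   0.0,    0.0,   1.0]])

# Extrinsic parameters: rotation R and translation t map world -> camera coordinates.
# Here: identity rotation, camera placed 20 m back from the world origin.
R = np.eye(3)
t = np.array([0.0, 0.0, 20.0])

# Full projection matrix P = K [R | t]
P = K @ np.hstack([R, t.reshape(3, 1)])

# Project a world point 15 m in front of the origin (e.g. one of the red blocks).
X_world = np.array([0.0, 0.0, 15.0, 1.0])  # homogeneous coordinates
x = P @ X_world
u, v = x[0] / x[2], x[1] / x[2]
print(u, v)  # -> 960.0 540.0 (the principal point, since the point is on the optical axis)
```

The intrinsic part (K) stays fixed once per camera; only the extrinsic part ([R | t]) depends on where each trail camera ends up in the field.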
**A Critical Prerequisite: Time Synchronization**

I want to highlight a major hurdle that you will need to address before you can achieve accurate 3D triangulation: time synchronization. In my experience, this is a non-trivial problem. Even with cameras started simultaneously, their internal clocks will drift over time, and that drift needs to be corrected frequently. If the video streams are out of sync, the triangulation results can become highly inaccurate very quickly.

I would strongly recommend focusing first on developing a robust method for time-synchronizing your trail cameras. Once you have a reliable way to ensure your footage is synchronized, you will be in a much better position to tackle calibration and 3D reconstruction.

I am currently in the final stages of my dissertation defense, so my availability is quite limited for the next couple of weeks. However, I am very interested in your project and would be glad to keep the discussion going and offer whatever help I can.

Regards,
Mac
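As one illustration of what a synchronization step could look like, here is a sketch (my own example, not part of this repo) that estimates a constant frame offset between two cameras by cross-correlating a shared per-frame signal such as overall scene brightness. It assumes overlapping views and a constant offset, so it would not, by itself, correct ongoing clock drift:

```python
import numpy as np

def estimate_offset(sig_a, sig_b):
    """Estimate a constant lag L (in frames) such that sig_a[n + L] ~ sig_b[n],
    i.e. camera B started L frames after camera A, via cross-correlation."""
    a = (sig_a - sig_a.mean()) / (sig_a.std() + 1e-12)
    b = (sig_b - sig_b.mean()) / (sig_b.std() + 1e-12)
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)

# Synthetic check: two per-frame brightness traces, with camera B 3 frames late.
rng = np.random.default_rng(0)
trace_a = rng.normal(size=300)
trace_b = np.roll(trace_a, -3)  # trace_b[n] == trace_a[n + 3] (circular shift)
print(estimate_offset(trace_a, trace_b))  # -> 3
```

In the field, the shared signal could be anything both cameras observe (brightness changes, an animal crossing a landmark); re-estimating the offset periodically is one way to handle slow clock drift.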
Hi @mprib
Thanks for your awesome repo! I'm hoping it might be a way forward for my ongoing research.
TL;DR: I want to track elephant behavior using multi-camera setups (trail cameras; sadly, I have to manually synchronize the time stamps).
I have a pose-estimation model that performs great (PachySet!, a fine-tuned DeepLabCut superquadruped model).
The problem lies in calibrating multiple cameras for 3D pose estimation. I have attached an image for reference:

The image depicts two potential setups: one with 24 cameras, the other with 28.
Pink = camera coverage
Black lines = camera views
Red blocks = static objects in the scene
For scale: the red blocks are 15m apart.
Do you have any thoughts about how I might go about calibrating the cameras? I am skeptical about the ChArUco/checkerboard method, but I imagine there might be a method using landmarks in the scene: static markers whose GPS coordinates (and therefore relative positions) I know, such as painted steel rods, traffic cones, or rubber duckies. Perhaps the static objects themselves could serve as the markers?
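For concreteness, here is a rough sketch of what I imagine (my own simplification, not necessarily what this repo does): with six or more surveyed markers of known 3D position visible in a view, the camera's projection matrix can be recovered linearly via the Direct Linear Transform (DLT), a stand-in for routines like OpenCV's `solvePnP`:

```python
import numpy as np

def dlt_resection(world_pts, image_pts):
    """Recover the 3x4 projection matrix P from >= 6 known 3D landmarks
    (in general position) and their observed 2D pixel positions via DLT."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # P (up to scale) is the null vector of A: the last right-singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

# Demo: synthesize a known camera, project surveyed landmarks, recover P.
K = np.array([[1400.0, 0.0, 960.0],
              [0.0, 1400.0, 540.0],
              [0.0, 0.0, 1.0]])
P_true = K @ np.hstack([np.eye(3), np.array([[0.0], [0.0], [20.0]])])

# Hypothetical surveyed marker positions in metres (painted rods / cones).
world = np.array([[0, 0, 15], [15, 0, 15], [0, 5, 10], [8, 3, 20],
                  [3, 7, 12], [12, 2, 18], [5, 5, 25]], dtype=float)
img = []
for X in world:
    x = P_true @ np.append(X, 1.0)
    img.append((x[0] / x[2], x[1] / x[2]))

P = dlt_resection(world, img)  # equals P_true up to scale: reprojections match
```

With real GPS-surveyed markers the pixel observations would be noisy, so a refinement step (nonlinear reprojection-error minimization) would normally follow this linear estimate.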
I understand that I could use one (or more) cameras as anchors for longer arrays, for example the cameras above and below the static objects?
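To illustrate the anchoring idea as I picture it: if an anchor camera's pose is known in world coordinates, and a second camera has been calibrated only relative to that anchor, the two 4x4 transforms simply compose (a toy numpy sketch with made-up poses):

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(deg):
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Anchor camera A has a known pose in world coordinates (here: at the origin).
T_world_A = make_T(rot_z(0), [0.0, 0.0, 0.0])
# Camera B was calibrated only relative to A (via shared landmark views).
T_A_B = make_T(rot_z(90), [15.0, 0.0, 0.0])
# Chaining gives B's pose in world coordinates.
T_world_B = T_world_A @ T_A_B
print(T_world_B[:3, 3])  # -> [15.  0.  0.]
```

One caveat I'd expect: each link in such a chain adds calibration error, so anchors near the middle of a long array should keep the chains short.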
Thank you so much!