Add anchor points per tile
oleg-alexandrov committed Oct 9, 2023
1 parent dbc25a7 commit 16357e5
Showing 11 changed files with 199 additions and 71 deletions.
11 changes: 8 additions & 3 deletions NEWS.rst
@@ -7,17 +7,22 @@ New tools:

jitter_solve (:numref:`jitter_solve`):
* Added the option ``--weight-image``, to weigh observations based on
geographic location of triangulated points.
geographic location of triangulated points (:numref:`limit_ip`).
* Can use anchor points with frame cameras.
* Added ``--num-anchor-points-per-tile``. This helps when the input images
have different sizes but the same anchor point density is desired.
* The roll and yaw constraints no longer assume linescan camera positions and
orientations are one-to-one.
* Consider the case of a linescan and frame camera rig
(:numref:`jitter_no_baseline`).

bundle_adjust (:numref:`bundle_adjust`):
* Added the ability to refine the camera intrinsics per sensor
(:numref:`kaguya_tc_refine_intrinsics`).
* When optimizing intrinsics, cameras that do not share distortion can
have different distortion types and sizes.
* Added the option ``--weight-image``, to weigh observations based on
geographic location of triangulated points.
geographic location of triangulated points (:numref:`limit_ip`).
* For ASTER cameras, use the RPC model to find interest points. This does
not affect the final results but is much faster.

58 changes: 58 additions & 0 deletions docs/bundle_adjustment.rst
@@ -868,6 +868,64 @@ KaguyaTC is already reasonably well-aligned.
meters.) It can be seen that the warping of the DEMs due to distortion is much
reduced.

.. _custom_ip:

Custom approaches to interest points
------------------------------------

Uniformly distributed interest points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To create roughly uniformly distributed sparse interest points during bundle
adjustment, use options along the lines of ``--ip-per-tile 1000
--matches-per-tile 500 --max-pairwise-matches 10000``. Note that if the images
are very large, this will result in a very large number of potential matches,
because each tile is 1024 x 1024 pixels. (See :numref:`ba_options` for the
reference documentation for these options.)
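
A sketch of such an invocation (the image names, camera names, and output
prefix below are placeholders)::

bundle_adjust left.tif right.tif left.xml right.xml \
  --ip-per-tile 1000                                 \
  --matches-per-tile 500                             \
  --max-pairwise-matches 10000                       \
  -o ba/run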

For creating dense interest point matches, see :numref:`intrinsics_no_constraints`.

.. _limit_ip:

Limit extent of interest point matches
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To limit the triangulated points produced from interest points to a certain area
during bundle adjustment, two approaches are supported. One is the option
``--proj-win``, coupled with ``--proj-str``.

The other is to use the ``--weight-image`` option (also supported by the jitter
solver, :numref:`jitter_solve`). Triangulated points are ignored at locations
where the georeferenced weight image is non-positive or has nodata values.
Otherwise, each reprojection error is multiplied by the weight whose location
is geographically closest to the triangulated point.
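
Conceptually, if a triangulated point falls at georeferenced location
:math:`(x, y)`, and the weight-image value nearest to it is :math:`w > 0`, each
reprojection error :math:`r` for that point enters the cost function as
:math:`w \cdot r` (this is a sketch of the weighting, not a statement of the
exact implementation).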

Such a weight image can be created from a regular georeferenced image as
follows. Open it in ``stereo_gui``, and draw on top of it one or more polygons,
each being traversed in a counterclockwise direction (:numref:`plot_poly`). Save this
shape as ``poly.shp``, and then run::

cp georeferenced_image.tif aux_image.tif
gdal_rasterize -burn -32768 poly.shp aux_image.tif

The value to burn should be negative and smaller than any valid pixel value in
the image. The ``-i`` option of ``gdal_rasterize`` inverts which area is burned.
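
For example, a sketch of burning the area outside the drawn polygons instead,
with the same burn value as above::

gdal_rasterize -i -burn -32768 poly.shp aux_image.tif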

Then, create a mask of valid values using ``image_calc`` (:numref:`image_calc`),
as follows::

image_calc -c "sign(var_0)" aux_image.tif -o weight.tif

Examine the obtained image in ``stereo_gui`` and click on various pixels. Pixels
in the holes should be either non-positive or nodata, and pixels outside the
holes should have value 1.

If the image does not have positive values to start with, those values
can be first shifted up with ``image_calc``.
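
A sketch of such a shift (the amount 10000 is an assumption; it should exceed
the magnitude of the most negative valid pixel value)::

image_calc -c "var_0 + 10000" georeferenced_image.tif -o aux_image.tif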

Various such weight images can be merged with ``dem_mosaic``
(:numref:`dem_mosaic`).
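
A possible sketch of such a merge (consult :numref:`dem_mosaic` for the
blending options and for the naming of the output file)::

dem_mosaic weight1.tif weight2.tif -o merged_weight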

Bundle adjustment using ISIS
----------------------------

2 changes: 1 addition & 1 deletion docs/tools/bundle_adjust.rst
@@ -997,7 +997,7 @@ Command-line options for bundle_adjust
reprojection errors in the cameras for this point by this weight value. The
solver will focus more on optimizing points with a higher weight. Points
that fall outside the image and weights that are non-positive, NaN, or equal
to nodata will be ignored.
to nodata will be ignored. See :numref:`limit_ip` for details.

--save-vwip
Save .vwip files (intermediate files for creating .match
73 changes: 59 additions & 14 deletions docs/tools/jitter_solve.rst
@@ -89,7 +89,7 @@ Anchor points can be used only with linescan cameras, so without
frame cameras as inputs (:numref:`jitter_linescan_frame_cam`).

The relevant options are ``--num-anchor-points``,
``--anchor-weight``, ``--anchor-dem``, and
``--num-anchor-points-per-tile``, ``--anchor-weight``, ``--anchor-dem``, and
``--num-anchor-points-extra-lines``. An example is given in
:numref:`jitter_dg`.

@@ -1235,31 +1235,70 @@ are not needed as much as for linescan cameras.

.. _jitter_no_baseline:

Solving for jitter with no baseline
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Solving for jitter with a linescan and frame rig
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In this example we consider a single frame (pinhole) camera and a single linescan camera,
both looking straight down and seeing the same view. The linescan camera has jitter
that needs to be corrected.
In this example we consider a rig made of a linescan and a frame camera. They
are positioned in the same location and look in the same direction. The
linescan sensor acquires image lines at a high rate, forming a single very long
image, while the frame camera records rectangular images of much smaller
dimensions, at a lower rate, and with overlap between them. Both cameras
experience the same jitter. The "rigid" frame camera images are used to correct
the jitter in the rig.

A straightforward application of the recipe above will fail, as it is not possible
to triangulate properly the points seen by the two cameras. The following adjustments
are suggested:
A straightforward application of the recipe above will fail, as it is not
possible to triangulate properly the points seen by the two cameras. The
following adjustments are suggested:

- Use ``--forced-triangulation-distance 500000`` for both bundle adjustment and
jitter solving. This will result in a triangulated point even when the rays are
parallel or even a little divergent (during optimization this point will get
refined, so the above value need not be perfectly known).
jitter solving (a good value here is the camera height above the terrain). This
will result in triangulated points even when the rays are parallel or even
slightly divergent (during optimization these points will be refined, so the
above value need not be perfectly known).
- Instead of ``--heights-from-dem`` use the option ``--reference-dem`` in
``jitter_solve``, with associated options ``--reference-dem-weight`` and
``--reference-dem-robust-threshold``. See :numref:`jitter_options` for details.
- Use ``--match-files-prefix`` instead of ``--clean-match-files-prefix`` in
``jitter_solve``, as bundle adjustment may have filtered out too many good
matches with a small convergence angle.
- Use ``--min-triangulation-angle 0.0`` in both bundle adjustment and jitter
- Use ``--min-triangulation-angle 1e-10`` in both bundle adjustment and jitter
solving, to ensure we do not throw away features with a small convergence
angle, as almost all of them will have one.

Here is the command for solving for jitter; the bundle adjustment command that
creates the interest point matches is similar.

::

jitter_solve \
--forced-triangulation-distance 500000 \
--min-matches 1 \
--min-triangulation-angle 1e-10 \
--num-iterations 10 \
--translation-weight 10000 \
--rotation-weight 0.0 \
--max-pairwise-matches 50000 \
--match-files-prefix ba/run \
--roll-weight 10000 \
--yaw-weight 10000 \
--max-initial-reprojection-error 100 \
--tri-weight 0.05 \
--tri-robust-threshold 0.05 \
--num-anchor-points-per-tile 100 \
--num-anchor-points-extra-lines 5000 \
--anchor-dem dem.tif \
--anchor-weight 0.01 \
--reference-dem dem.tif \
--reference-dem-weight 0.05 \
--reference-dem-robust-threshold 0.05 \
data/nadir_frame_images.txt \
data/nadir_linescan.tif \
data/nadir_frame_cameras.txt \
data/nadir_linescan.json \
-o jitter/run

The weights and thresholds above can be increased somewhat if the input DEM
is reliable and it is desired to tie the solution more to it.

.. _jitter_out_files:

Output files
@@ -1436,6 +1475,12 @@ Command-line options for jitter_solve
distributed across each input image. Only applies to linescan cameras. See
also ``--anchor-weight`` and ``--anchor-dem``.

--num-anchor-points-per-tile <integer (default: 0)>
How many anchor points to create per 1024 x 1024 image tile. They will be
uniformly distributed. Useful when images of vastly different sizes (such as
frame and linescan) are used together. See also ``--anchor-weight`` and
``--anchor-dem``.

--anchor-weight <double (default: 0.0)>
How much weight to give to each anchor point. Anchor points are
obtained by intersecting rays from initial cameras with the DEM
@@ -1514,7 +1559,7 @@ Command-line options for jitter_solve
reprojection errors in the cameras for this point by this weight value. The
solver will focus more on optimizing points with a higher weight. Points
that fall outside the image and weights that are non-positive, NaN, or equal
to nodata will be ignored.
to nodata will be ignored. See :numref:`limit_ip` for details.

--min-triangulation-angle <degrees (default: 0.1)>
The minimum angle, in degrees, at which rays must meet at a
2 changes: 1 addition & 1 deletion docs/tools/orbit_plot.rst
@@ -107,7 +107,7 @@ id.
--orbit-label pinhole

Notice how above the shared orbit id is specified separately from the dataset
names.
names. Here we omitted the option ``--use-ref-cams``.

These two datasets will be plotted on top of each other, in red and blue, respectively.

6 changes: 3 additions & 3 deletions src/asp/Camera/SatSim.cc
@@ -116,7 +116,7 @@ double demPixelErr(SatSimOptions const& opt,
vw::Vector2 const& pixel_loc,
double height_guess,
vw::Vector3 & xyz_guess) { // xyz_guess may change

// Calc position along the trajectory and normalized along and across vectors
// in ECEF
vw::Vector3 P, along, across;
@@ -788,15 +788,15 @@ void calcTrajectory(SatSimOptions & opt,
if (model_jitter)
jitter_amp = calcJitterAmplitude(opt, orig_first_proj, first_proj, last_proj,
dem_georef, t);

// If to apply a roll, pitch, yaw rotation
if (have_roll_pitch_yaw) {
vw::Matrix3x3 R = asp::rollPitchYaw(opt.roll + jitter_amp[0],
opt.pitch + jitter_amp[1],
opt.yaw + jitter_amp[2]);
cam2world[i] = cam2world[i] * R;
}

// The rotation without jitter
vw::Matrix3x3 R0 = vw::math::identity_matrix<3>();
if (have_roll_pitch_yaw)
19 changes: 9 additions & 10 deletions src/asp/Core/InterestPointMatching.cc
@@ -930,16 +930,15 @@ vw::Matrix<double> translation_ip_matching(vw::ImageView<vw::PixelGray<float>> c

// Homography IP matching
// This applies only the homography constraint. Not the best.
bool homography_ip_matching(
vw::ImageViewRef<float> const& image1,
vw::ImageViewRef<float> const& image2,
int ip_per_tile,
int inlier_threshold,
std::string const& match_filename,
size_t number_of_jobs,
std::string const left_file_path,
std::string const right_file_path,
double nodata1, double nodata2) {
bool homography_ip_matching(vw::ImageViewRef<float> const& image1,
vw::ImageViewRef<float> const& image2,
int ip_per_tile,
int inlier_threshold,
std::string const& match_filename,
size_t number_of_jobs,
std::string const left_file_path,
std::string const right_file_path,
double nodata1, double nodata2) {

vw_out() << "\t--> Matching interest points using homography.\n";

2 changes: 1 addition & 1 deletion src/asp/Core/InterestPointMatching2.cc
@@ -1092,7 +1092,7 @@ void match_ip_pair(vw::ip::InterestPointList const& ip1,
sw1.stop();
vw_out() << "Elapsed time in ip matching: " << sw1.elapsed_seconds() << " s.\n";

// TODO(oalexan1): The code below is O(N^2).
// Remove ip duplicates. Complexity is O(n log n).
vw::ip::remove_duplicates(matched_ip1, matched_ip2);

if (asp::stereo_settings().matches_per_tile != 0) {
17 changes: 9 additions & 8 deletions src/asp/GUI/ColorAxes.cc
@@ -452,11 +452,11 @@ class ColorAxesPlotter: public QwtPlotSpectrogram {
const QwtScaleMap & xMap,
const QwtScaleMap & yMap,
const QRectF & canvasRect) const {
// canvasRect is the region, in screen pixel units, where the image
// will go. If the image is narrow, it may not fill fully the
// canvas. If the labels on the axes take up more space when
// zooming, the image region will be affected. So, it is not
// fixed once and for all.
// canvasRect is the region, in screen pixel units, where the image will go.
// If the image is narrow, it may not fill fully the canvas. If the labels
// on the axes take up more space when zooming, the image region will be
// affected. So, canvasRect is not fixed, even if the total
// region allocated to the canvas and axes is fixed.
// xMap and yMap are used to convert from world units to screen.

//vw::Stopwatch sw2;
@@ -482,13 +482,14 @@ class ColorAxesPlotter: public QwtPlotSpectrogram {
return;
}

// This will be called by draw().
// The rectangle in 'area' is in world units, not in pixel units. imageSize
// has the dimensions, in pixels, of the canvas portion having the image.
virtual QImage renderImage(const QwtScaleMap & xMap,
const QwtScaleMap & yMap,
const QRectF & area,
const QSize & imageSize) const {
// The rectangle in 'area' is in world units, not in pixel units. imageSize
// has the dimensions, in pixels, of the canvas portion having the image.


// Based on size of the rendered image, determine the appropriate level of
// resolution and extent to read from disk. This greatly helps with
// reducing memory usage and latency.
8 changes: 3 additions & 5 deletions src/asp/Tools/bundle_adjust.cc
@@ -2615,7 +2615,7 @@ void ba_match_ip(Options & opt, SessionPtr session,
boost::shared_ptr<DiskImageResource>
rsrc1(vw::DiskImageResourcePtr(image1_path)),
rsrc2(vw::DiskImageResourcePtr(image2_path));
if ( (rsrc1->channels() > 1) || (rsrc2->channels() > 1) )
if ((rsrc1->channels() > 1) || (rsrc2->channels() > 1))
vw_throw(ArgumentErr()
<< "Error: Input images can only have a single channel!\n\n");
float nodata1, nodata2;
@@ -2701,12 +2701,11 @@ void matches_from_mapproj_images(int i, int j,
// too.
std::string map_match_file = ip::match_filename(opt.out_prefix,
map_files[i], map_files[j]);
try{

try {
ba_match_ip(opt, session, map_files[i], map_files[j],
NULL, NULL, // cameras are set to null since images are mapprojected
map_match_file);
} catch ( const std::exception& e ){
} catch (const std::exception& e) {
vw_out() << "Could not find interest points between images "
<< map_files[i] << " and " << map_files[j] << std::endl;
vw_out(WarningMessage) << e.what() << std::endl;
@@ -2725,7 +2724,6 @@ void matches_from_mapproj_images(int i, int j,

// Undo the map-projection
for (size_t ip_iter = 0; ip_iter < ip1.size(); ip_iter++) {

vw::ip::InterestPoint P1 = ip1[ip_iter];
vw::ip::InterestPoint P2 = ip2[ip_iter];
if (!asp::projected_ip_to_raw_ip(P1, interp_dem, opt.camera_models[i], georef1, dem_georef))