- Calculate & Visualize the Hopalong Attractor with Python
- Abstract
- Requirements
- Usage
- Features, Functionality, and Special Scenarios
- Performance Optimization
- Alternative Solutions
- Recent Code Changes
- Enjoy the Exploration
- License
- References
- Mathematical Appendix
The "Hopalong"* attractor, authored by Barry Martin of Aston University in Birmingham, England [2],
was popularized by A.K. Dewdney in the September 1986 issue of Scientific American. In Germany, it gained further recognition through a translation titled "HÜPFER" in Spektrum der Wissenschaft [3].
*Nicknamed by A.K. Dewdney.
The mathematical definition of the Hopalong attractor is given by the following system of recursive functions, originally defined in [2]:

x(n+1) = y(n) - sgn(x(n)) * sqrt(|b * x(n) - c|)
y(n+1) = a - x(n)
Where:
- The sequence starts from the initial point (x0 , y0) = (0 , 0).
- xn and yn represent the coordinates at the n-th iteration of the attractor.
- a, b, and c are parameters influencing the attractor's dynamics.
- sgn is the signum function. The programs, however, use math.copysign() as a signum substitute, which behaves as follows: copysign(1.0, x) returns +1.0 if the sign bit of x is clear (including +0.0) and -1.0 if it is set (including -0.0). Unlike the standard signum, it never returns 0.
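As a quick illustration (a minimal sketch, not part of the programs), the copysign-based signum can be probed directly, including its IEEE 754 signed-zero behavior:

```python
from math import copysign

def sgn_copysign(x: float) -> float:
    # Sign of x via copysign: magnitude 1.0, sign taken from x
    return copysign(1.0, x)

print(sgn_copysign(3.5))   # 1.0
print(sgn_copysign(-2.0))  # -1.0
print(sgn_copysign(0.0))   # 1.0  (a standard signum would return 0)
print(sgn_copysign(-0.0))  # -1.0 (signed zero, IEEE 754)
```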
The available Python programs calculate and visualize the attractor by iterating the described function system.
Goal:
- Visual Representation: The attractor is rendered as a density heatmap, with color intensity reflecting areas of high or low point concentration.
- Efficiency: Large numbers of iterations are computed with low memory usage and high processing speed.
Core Algorithm:
The algorithm consists of two main passes:
- First Pass: Determines the spatial extent of the attractor trajectory.
- Second Pass: Maps the sequentially generated trajectory points in continuous space directly to a discrete pixel grid, keeping track of the number of pixel hits used to generate the density heatmap matrix.
For a more in-depth look, consult Pixel-Based Density Approximation.
Just-in-time (JIT) compilation is applied and supported by a low-complexity code structure.
For details regarding the two-pass strategy, see Two-Pass Approach.
To run the programs, the following Python libraries or modules must be installed / imported:
- matplotlib *
- numpy *
- numba * (as of October 26, 2024, Numba supports Python versions only up to 3.12.x; versions 3.13.x are not yet supported)
- math *
(* mandatory)
Optional (for performance tracking):
- time
- resource
Import the time and resource modules if you want to track process time and system memory usage. Otherwise, comment out the corresponding code snippets in the import section and in the main() function.
```python
...
#import time
#import resource
...

...
# Start the time measurement
#start_time = time.process_time()
...
# End the time measurement
#end_time = time.process_time()

# Calculate the CPU user and system time
#cpu_sys_time_used = end_time - start_time

# Calculate the memory resources used
#memMb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0 / 1024.0

#print(f'CPU User&System time: {cpu_sys_time_used:.2f} seconds')
#print(f'Memory (RAM): {memMb:.2f} MByte used')
...
```
When you run the programs, you will be prompted to enter the following parameters, which determine the behavior of the Hopalong Attractor:
- a (float or integer): The first parameter affecting the attractor's dynamics.
- b (float or integer): The second parameter affecting the attractor's dynamics.
- c (float or integer): The third parameter affecting the attractor's dynamics.
- n (integer): The number of iterations to run (e.g. 1e6, 1_000_000 or 1000000).
Example Parameters:
- a = -2
- b = -0.33
- c = 0.01
- n = 2e8
Experimenting with different values of these parameters will yield diverse and intricate visual patterns.
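Since n may be entered in scientific notation (e.g. 2e8), the input has to be parsed as a float before casting to an integer. A minimal sketch of such parsing (the prompt texts and helper names here are illustrative, not the programs' exact code):

```python
def parse_iterations(text: str) -> int:
    # "1e6", "1_000_000" and "1000000" all parse to the same integer
    return int(float(text))

def read_params():
    # a, b, c may be float or integer; n may use scientific notation
    a = float(input("Enter a: "))
    b = float(input("Enter b: "))
    c = float(input("Enter c: "))
    n = parse_iterations(input("Enter n: "))
    return a, b, c, n
```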
The programs generate a visual representation of the attractor trajectory as a density heatmap, where color intensity reflects the frequency of points visited (hit counts). Lighter areas indicate regions of higher density, highlighting the attractor's intricate structure. The generated density heatmap matrix also supports a 3D representation by mapping normalized density values along the Z-axis.
[Example output: Basic version, 3D plot via ax.contourf3D]
- Basic: Calculates and displays the Hopalong attractor as a 2D density heatmap with an integrated color bar.
- Basic 2D/3D: Adds the ability to visualize the attractor in 3D by mapping normalized density values along the Z-axis. Users can select the visualization mode at runtime.
- Extended: Incorporates all features of the Basic version (except the color bar) and includes additional statistics as well as a visualization of the pixel hit count distribution.
Examples of outputs can be found in the "Usage" section above.
Note:
The code for the Basic variant supports both 2D and 3D visualization. To switch modes, comment out the relevant render_trajectory_image function. Alternatively, use the 2D/3D variant to select the visualization mode during runtime.
- Continuous to Discrete Mapping:
Trajectory points, represented as floating-point coordinates in a two-dimensional continuous space, are scaled and mapped to integer pixel coordinates for visualization. Scaling factors derived from the trajectory's extents and the image dimensions ensure the continuous coordinates fit within the pixel grid while preserving spatial relationships.
- Integer Conversion by Rounding to Nearest Integer:
The integer conversion within the continuous-to-discrete mapping leads to quantization: closely spaced trajectory points in continuous space may map to the same pixel, resulting in multiple "hits" per pixel. This discretization aggregates local density but may reduce fine details due to grouping within the pixel grid.
- Tracking of Density Accumulation:
An image array, initialized with zeros, serves as a blank canvas. Each trajectory point, after being mapped to a pixel, increments the value at the corresponding array index. The resulting array is referred to as the Density Heatmap Matrix. Pixel hit counts encode the spatial distribution of trajectory point densities in continuous space. Higher hit counts indicate greater density.
Note: The total sum of pixel hit counts equals the number of trajectory iterations, as each iteration contributes exactly one hit to one pixel, and a pixel may be hit multiple times.
For the mathematical definition, see the Mathematical Appendix.
This process generates a density heatmap matrix where each pixel corresponds to a specific region in continuous space, with emergent density patterns arising from the interaction between the attractor's dynamics and the mapping of continuous coordinates to discrete pixel indices.
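The mapping and aggregation described above can be sketched in a few lines (illustrative toy values, independent of the attractor code):

```python
import numpy as np

image_size = (4, 4)                    # rows (H), columns (W)
points = [(0.10, 0.10), (0.12, 0.11),  # two nearby points -> same pixel
          (0.90, 0.90)]
min_x = min_y = 0.0
max_x = max_y = 1.0
scale_x = (image_size[1] - 1) / (max_x - min_x)
scale_y = (image_size[0] - 1) / (max_y - min_y)

image = np.zeros(image_size, dtype=np.uint64)
for x, y in points:
    px = round((x - min_x) * scale_x)  # column index
    py = round((y - min_y) * scale_y)  # row index
    image[py, px] += 1                 # accumulate hits

print(image.sum())  # 3: one hit per trajectory point
print(image.max())  # 2: the two nearby points aggregated into one pixel
```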
- Impact of Image Resolution:
  - Lower resolutions increase visual density contrast by grouping multiple trajectory points into fewer pixels. This concentrates hit counts, emphasizing density differences but reducing detail.
  - Higher resolutions distribute trajectory points across more pixels, capturing finer data variations and increasing detail but reducing visual density contrast as hit counts are spread more evenly.
Using np.histogram2d(..., density=True), this approach divides the continuous space into a grid of equal-area bins and counts the number of trajectory points that fall into each bin. The counts are then normalized by the total number of points and the bin area, resulting in a density matrix representing relative point distributions across the entire space for quantitative analysis. The density patterns, similar to the pixel-based approach, arise from the interplay between the attractor's dynamics and the bin grid configuration.
- Impact of bin size:
  - Smaller bins (higher bin count) improve density precision but reduce density contrast by spreading density over more bins.
  - Larger bins (lower bin count) increase density contrast by concentrating normalized density values in fewer, larger bins but reduce precision.
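A minimal sketch of the histogram-based estimate (synthetic points and an illustrative bin count, not the programs' exact code):

```python
import numpy as np

rng = np.random.default_rng(0)
xs = rng.random(10_000)  # stand-in for cached x trajectory values
ys = rng.random(10_000)  # stand-in for cached y trajectory values

bins = 100
H, xedges, yedges = np.histogram2d(xs, ys, bins=bins, density=True)

# With density=True the estimate integrates to 1 over the sampled area:
bin_area = (xedges[1] - xedges[0]) * (yedges[1] - yedges[0])
print(H.sum() * bin_area)  # ≈ 1.0
```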
The Matplotlib "hot" colormap represents pixel hit counts (in pixel-based density approximation) and histogram bin counts (in histogram-based density estimation) as colors, scaled to span the full range of the colormap. Darker colors correspond to lower densities, while lighter colors correspond to higher densities, creating a gradient that highlights areas of activity.
Pixel-based density approximation presents a promising alternative to histogram-based density estimation, each offering distinct advantages:
- Qualitative: The pixel-based approach is well-suited for visual exploration and supports fast algorithms that efficiently handle large numbers of iterations. This efficiency arises from its ability to directly update the density heatmap matrix sequentially. For details, see Two-Pass Approach.
- Quantitative: The histogram-based approach is ideal for generating a density matrix that accurately estimates the normalized density distribution in continuous space. However, it typically requires caching all data points in advance for precise binning and normalization, leading to significant memory usage; a sequential (streaming) implementation would be considerably more demanding.
Both the pixel-based density approximation (heatmap) and histogram-based density estimation effectively highlight areas of point concentrations during visualization. This is demonstrated in the following comparison panel showcasing a normalized density heatmap and histogram density matrix, along with a quantitative comparison.
For consistency, image size and bin grid were both set to 1000 × 1000, with each pixel representing an area in continuous space, analogous to a bin in a density estimation process. This ensures a direct one-to-one mapping between image resolution and bin resolution, facilitating meaningful visual and quantitative comparisons.
The images were rendered as follows:
- Density Heatmap Matrix: Using matplotlib.pyplot.imshow with interpolation='none' to ensure no smoothing, and the trajectory extents calculated during iterations.
- Histogram Density Matrix: Using matplotlib.pyplot.pcolormesh with antialiased=False to maintain bin accuracy, defined by histogram bin edges.
Visual Side-by-Side Comparison
Parameters: a = -2, b = -0.33, c = 0.01
Number of Iterations: 2 × 10^8
Image Size: 1000 × 1000 pixels
Quantitative Comparison of Normalized Density Heatmap and Histogram Density Matrix
PCC: Pearson Correlation Coefficient, CS: Cosine Similarity, SSIM: Structural Similarity Index
Matrix Similarity with Increasing Iterations
| Parameters | Iterations | PCC | CS | SSIM |
| --- | --- | --- | --- | --- |
| a = -2, b = -0.33, c = 0.01 | n = 10,000 | 0.6217 | 0.6246 | 0.9462 |
| a = -2, b = -0.33, c = 0.01 | n = 100,000 | 0.8453 | 0.8540 | 0.9015 |
| a = -2, b = -0.33, c = 0.01 | n = 1,000,000 | 0.9204 | 0.9342 | 0.8853 |
| a = -2, b = -0.33, c = 0.01 | n = 200,000,000 | 0.9712 | 0.9766 | 0.9520 |
Matrix Similarity Across Parameter Sets
| Parameters | Iterations | PCC | CS | SSIM |
| --- | --- | --- | --- | --- |
| a = 0.6, b = 0.5, c = 0 | n = 10,000 | 0.6346 | 0.6376 | 0.9633 |
| a = 0.6, b = 0.5, c = 0 | n = 100,000 | 0.7724 | 0.7843 | 0.9392 |
| a = 0.6, b = 0.5, c = 0 | n = 1,000,000 | 0.9257 | 0.9349 | 0.9521 |
| a = 0.6, b = 0.5, c = 0 | n = 200,000,000 | 0.9924 | 0.9935 | 0.9789 |
Matrix Similarity with Periodic Orbit: A 3-Cycle Example (see Special Constellations and Edge Cases)

| Parameters | Iterations | PCC | CS | SSIM |
| --- | --- | --- | --- | --- |
| a = 5, b = 5, c = 0 | n = 1,000,000 | 1.0000 | 1.0000 | 1.0000 |
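The PCC and CS values above can be reproduced with plain NumPy; SSIM additionally needs scikit-image (skimage.metrics.structural_similarity). A minimal sketch of the two NumPy-only metrics on synthetic matrices (helper names are illustrative):

```python
import numpy as np

def pcc(A: np.ndarray, B: np.ndarray) -> float:
    # Pearson correlation coefficient of the flattened matrices
    return float(np.corrcoef(A.ravel(), B.ravel())[0, 1])

def cosine_similarity(A: np.ndarray, B: np.ndarray) -> float:
    # Cosine of the angle between the flattened matrices
    a, b = A.ravel(), B.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Identical matrices give perfect scores, as in the 3-cycle edge case:
M = np.random.default_rng(1).random((10, 10))
print(pcc(M, M))                # ≈ 1.0
print(cosine_similarity(M, M))  # ≈ 1.0
```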
The programs leverage the math.copysign function, copysign(x, y), which returns a float with the magnitude (absolute value) of x but the sign of y. On platforms that support signed zeros, copysign(1.0, -0.0) correctly evaluates to -1.0.
The copysign function can serve as a substitute for the standard signum function:

sgn(x) ≈ copysign(1.0, x), which yields +1.0 if the sign bit of x is clear (including +0.0) and -1.0 if it is set (including -0.0), and therefore never returns 0.

This adjustment alters the behavior of certain parameter sets, often leading to intricate, fractal-like patterns instead of periodic orbits or fixed-point dynamics.
Periodic Orbits Defined:
Periodic orbits are trajectories in which the system revisits the same state after a fixed number of iterations.
Example Parameters Yielding Intricate Patterns When Using copysign:
- a = 1, b = 2, c = 3 or
- a = 0, b = 1, c = 1 or
- a = 1, b = 1, c = 1
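The effect can be checked directly: with a true signum (where sgn(0) = 0), the parameter set a = 1, b = 2, c = 3 falls into a short cycle through (0, 0), while the copysign variant immediately leaves it. A small sketch:

```python
from math import copysign, sqrt, fabs

def step(x, y, a, b, c, sign):
    # One Hopalong iteration with a pluggable sign function
    return y - sign(x) * sqrt(fabs(b * x - c)), a - x

true_sgn = lambda x: (x > 0) - (x < 0)     # sgn(0) = 0
copysign_sgn = lambda x: copysign(1.0, x)  # sgn(0) = +/-1, never 0

a, b, c = 1, 2, 3
x, y = 0.0, 0.0
for _ in range(3):
    x, y = step(x, y, a, b, c, true_sgn)
print((x, y))  # (0.0, 0.0): with a true signum this is a 3-cycle

x, y = step(0.0, 0.0, a, b, c, copysign_sgn)
print(x)       # -sqrt(3): the cycle is broken, an intricate pattern unfolds
```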
Certain parameter sets lead to periodic or near-periodic orbits, even when using the copysign function. In these cases, the attractor's trajectory revisits a limited number of distinct points repeatedly, resulting in high-density cycles. Such cycles frequently occur at the attractor's boundary.
Example Parameter Sets:
- 1: a = p, b = 0, c = 0
- 2: a = p, b = 0, c = p
- 3: a = p, b = p, c = 0
Here, p is a constant parameter.
Case Analysis:
Parameter Set 3 (a = p, b = p, c = 0)
For this parameter set, the Hopalong functions reduce to:

x(n+1) = y(n) - sgn(x(n)) * sqrt(|p * x(n)|)
y(n+1) = p - x(n)

With p > 0, the system settles into a 3-cycle: (0,0) → (0,p) → (p,p) → (0,0).
Each of the three visited pixels therefore accumulates approximately n / 3 hits, where n is the number of iterations.
Example:
- Parameters: a=5, b=5, c=0
- Iterations: 1,200,000
Observations:
The 3-cycle structure dominates, with "high-density cycles" clustering along the attractor's extent boundaries.
Note: Variations in the signs of a and b may lead to mirrored or symmetrical trajectories.
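The 3-cycle and the resulting hit distribution can be verified with a short sketch (a much smaller iteration count than in the example, for speed):

```python
from math import copysign, sqrt, fabs
from collections import Counter

a, b, c, n = 5.0, 5.0, 0.0, 300
x, y = 0.0, 0.0
hits = Counter()
for _ in range(n):
    hits[(x, y)] += 1  # record the point, as the second pass does
    x, y = y - copysign(1.0, x) * sqrt(fabs(b * x - c)), a - x

print(sorted(hits))        # [(0.0, 0.0), (0.0, 5.0), (5.0, 5.0)]
print(set(hits.values()))  # {100}: each cycle point receives n / 3 hits
```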
Visualization Recommendation:
For edge cases, reduce the image resolution (e.g., 100×100) to better highlight the boundaries of the attractor.
For example:

```python
def main(image_size=(100, 100), color_map='hot'):
```
This scenario is an ideal use case for the features of the extended program variant, such as pixel hit count statistics, to analyze high-density cycle behavior.
Execution time and resources: Measurement starts after user input and records the CPU time for the entire process, including image rendering. It also tracks the system memory used.
Note: Since user interactions with the plot window, such as zooming, panning, or mouse movements, are also measured, it is recommended to close the plot window automatically. This can be achieved using plt.pause(1) followed by plt.close(fig). As long as there is no interaction with the plot window, the pause time from plt.pause() is not recorded by time.process_time().
```python
#plt.show()
plt.pause(1)
plt.close(fig)
```
The programs leverage Numba just-in-time (JIT) compilation for performance optimization. JIT compilation translates Python code into machine code at runtime, enabling more efficient execution of loops and mathematical operations and avoiding the overhead of Python's interpreter, which provides a significant speedup over standard Python loops.
Dummy calls are preliminary invocations of JIT-compiled functions that prompt the Numba compiler to generate machine code before the function is used in the main execution flow. This ensures that the function is precompiled, avoiding compilation overhead during its first actual execution. This process is akin to "eager compilation," as it occurs ahead of time, but it does not require explicit function signatures in the header.
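Both warm-up styles can be sketched as follows; the function names are illustrative, and the try/except fallback is only there so the snippet also runs where Numba is not installed:

```python
try:
    from numba import njit
except ImportError:
    # Fallback: a no-op decorator so the sketch still runs without Numba
    def njit(*args, **kwargs):
        if args and callable(args[0]):
            return args[0]
        return lambda func: func

@njit
def square(x):
    return x * x

# Dummy call: forces compilation for float64 arguments now,
# so the first "real" call pays no compilation cost.
_ = square(0.0)

# Eager compilation: an explicit signature compiles at decoration time,
# with no dummy call needed.
@njit("float64(float64)")
def cube(x):
    return x * x * x
```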
The parallel loop function prange from the Numba library is not suitable for cross-iteration dependencies, such as those encountered when iterating recursive functions. While it is possible to restructure the second pass to use prange for populating the image array, this could introduce race conditions, i.e. situations where multiple threads access and modify shared data simultaneously, leading to inconsistent or unpredictable results. Therefore, this approach was not implemented.
By separating the extent calculation (first pass) from the trajectory point to pixel mapping (second pass), this approach allows for efficient sequential processing. Knowing the overall trajectory extents in advance enables direct and efficient mapping of points to image pixels, optimizing memory usage and maintaining consistent performance.
Advantages:
-
Memory efficiency: The two-pass approach recalculates trajectory points instead of caching them, significantly reducing memory requirements.
-
Scalable Performance: As the number of iterations grows, the two-pass approach maintains efficiency in both memory usage and processing speed.
-
Optimized for JIT Compilation: The straightforward, sequential structure is well-suited for Just-in-Time (JIT) compilation, resulting in faster execution.
Trade-Off: The trajectory points are computed twice — once for extent calculation and once for pixel mapping. This is outweighed by the advantage of scalable performance.
Note: The historical approach based on Dewdney's algorithm, using intermediate variables, is applied. Due to the internal optimization by the Numba JIT compiler, this approach does not incur any performance loss.
```python
import numpy as np
from math import copysign, sqrt, fabs
from numba import njit  # njit is an alias for jit(nopython=True)

@njit
def compute_trajectory_extents(a, b, c, n):
    # Dynamically compute and track the minimum and maximum extents of the trajectory over 'n' iterations.
    x = np.float64(0.0)
    y = np.float64(0.0)
    min_x = np.inf   # ensure that the initial minimum is determined correctly
    max_x = -np.inf  # ensure that the initial maximum is determined correctly
    min_y = np.inf
    max_y = -np.inf
    for _ in range(n):
        # selective min/max update using direct comparisons, avoiding the min/max functions
        if x < min_x:
            min_x = x
        if x > max_x:
            max_x = x
        if y < min_y:
            min_y = y
        if y > max_y:
            max_y = y
        # signum function respecting the behavior of floating point numbers according to IEEE 754 (signed zero)
        xx = y - copysign(1.0, x) * sqrt(fabs(b * x - c))
        yy = a - x
        x = xx
        y = yy
    return min_x, max_x, min_y, max_y

# Dummy call to ensure the function is pre-compiled by the JIT compiler before it's called by the interpreter.
_ = compute_trajectory_extents(1.0, 1.0, 1.0, 2)
```
```python
@njit
def compute_trajectory_and_image(a, b, c, n, extents, image_size):
    # Compute the trajectory and populate the image with trajectory points
    image = np.zeros(image_size, dtype=np.uint64)
    # pre-compute image scale factors
    min_x, max_x, min_y, max_y = extents
    scale_x = (image_size[1] - 1) / (max_x - min_x)  # column
    scale_y = (image_size[0] - 1) / (max_y - min_y)  # row
    x = np.float64(0.0)
    y = np.float64(0.0)
    for _ in range(n):
        # Map trajectory points to image pixel coordinates, rounding to the nearest integer
        px = round((x - min_x) * scale_x)
        py = round((y - min_y) * scale_y)
        # Bounds check to ensure indices are within the image
        if 0 <= px < image_size[1] and 0 <= py < image_size[0]:
            # populate the image and calculate the trajectory "on the fly"
            image[py, px] += 1  # respecting the row/column convention, accumulate the number of hits
        xx = y - copysign(1.0, x) * sqrt(fabs(b * x - c))
        yy = a - x
        x = xx
        y = yy
    return image

# Dummy call to ensure the function is pre-compiled by the JIT compiler before it's called by the interpreter.
_ = compute_trajectory_and_image(1.0, 1.0, 1.0, 2, (-1.0, 0.0, 0.0, 1.0), (2, 2))
```
While the two-pass approach is the primary solution, it’s valuable to consider alternative one-pass methods, each with unique trade-offs in performance, memory usage, and complexity. Here’s an overview:
Description: This method computes all trajectory points in a single pass and stores them in memory, enabling efficient calculation of trajectory extents and mapping to image pixels.
- Advantages: Leveraging NumPy’s vectorized operations, this approach efficiently computes and maps points in a single pass, potentially increasing performance.
- Disadvantages: Full caching requires substantial memory allocation, especially for high iteration counts. This may lead to performance issues from system memory swapping or even memory overflow.
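The memory cost of full caching is easy to estimate (a back-of-the-envelope sketch with a hypothetical helper, not part of the programs): storing n trajectory points as two float64 values needs n × 2 × 8 bytes.

```python
def cached_trajectory_memory_mb(n: int) -> float:
    # Two float64 (8-byte) values per iteration: x and y
    return n * 2 * 8 / 1024**2

# e.g. 2e8 iterations -> ~3052 MB of RAM just for the raw points
print(f"{cached_trajectory_memory_mb(200_000_000):.0f} MB")
```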
Description: These approaches attempt to reduce memory consumption by either processing the trajectory in chunks or not caching trajectory points at all. However, since the full trajectory extents are unknown at the outset, each variation faces the same limitation: pixel mappings require recalculating as the trajectory's range in continuous space will generally evolve, except in edge cases.
Chunked: The trajectory is divided into manageable chunks, each cached temporarily.
No Caching: Points are computed and mapped to pixels directly without storing them.
- Advantages: Limits memory usage.
- Disadvantages: Both approaches become impractical due to a major limitation. Data loss and inaccuracy: as previously computed floating-point values are irrecoverably mapped to integer pixel coordinates, it becomes impossible to retrieve the exact values for remapping, leading to data loss and inconsistencies.*
*This also applies analogously to any versions that only process floating-point values.
No other one-pass method solutions have been investigated or considered to date. More sophisticated solutions would also contradict the minimum complexity design approach unless a significant performance improvement in calculations with a high number of iterations makes them worth considering.
Overall, the two-pass approach offers an excellent balance of speed, efficiency, and simplicity, making it ideal for attractor calculations that involve large numbers of iterations. While the trajectory points must be computed during both passes, this method successfully avoids the drawbacks associated with the alternative solutions.
OPTIONAL: Using a 3D plot by mapping normalized density values along the Z-axis. In the program, this block is commented out with triple quotes:

```python
"""
def render_trajectory_image(image, extents, params, color_map):
    # Render the trajectory image in 3D
    # Create a meshgrid for X and Y coordinates
    x = np.linspace(extents[0], extents[1], image.shape[1])
    y = np.linspace(extents[2], extents[3], image.shape[0])
    x, y = np.meshgrid(x, y)
    # Plot with normalized density (hit count) as Z values
    z = image / np.max(image) if np.max(image) > 0 else image
    fig = plt.figure(figsize=(8, 8))
    ax = fig.add_subplot(111, projection='3d')
    ax.contourf3D(x, y, z, levels=100, cmap=color_map)
    # Customize the plot
    ax.set_title(f'Hopalong Attractor - 3D Density (Z) Plot\nParams: a={params["a"]}, b={params["b"]}, c={params["c"]}, n={params["n"]:_}')
    ax.set_xlabel('X')
    ax.set_ylabel('Y')
    ax.set_zlabel('Z')
    ax.view_init(elev=75, azim=-95)  # Adjust angle for better view
    plt.show()
"""
```
- Explore the attractor in three dimensions by mapping normalized density values along the Z-axis. You can try ax.contourf3D (filled contours) or ax.contour3D (unfilled contours).
- Experiment with different image resolutions, color maps, or ways of populating the image array beyond using the hit count to explore new visual perspectives.
Also check out my simpler Rust version.
This project is licensed under the MIT License:
Copyright © 2024 Ralf Becker, Nuremberg
Email: ratwolf@duck.com
The full text of the MIT License can be found at:
https://opensource.org/licenses/MIT
[1]
J. Lansdown and R. A. Earnshaw (eds.), Computers in Art, Design and Animation.
New York: Springer-Verlag, 1989.
e-ISBN-13: 978-1-4612-4538-4.
[2]
Barry Martin, "Graphic Potential of Recursive Functions," in Computers in Art, Design and Animation [1],
pp. 109–129.
[3]
A.K. Dewdney, Program "HÜPFER," in Spektrum der Wissenschaft: Computer Kurzweil.
Spektrum der Wissenschaft Verlagsgesellschaft mbH & Co., Heidelberg, 1988.
(German version of Scientific American).
ISBN-10: 3922508502, ISBN-13: 978-3922508502.
- NumPy Documentation: NumPy is a fundamental package for scientific computing in Python.
- Matplotlib Documentation: A library for creating static, interactive, and animated visualizations.
- Numba Documentation: Numba is a just-in-time compiler for optimizing numerical computations.
- Python Built-in Functions: Overview of built-in functions available in Python.
- Python Math Module: Access mathematical functions defined by the C standard.
- Python Time Module: Time access and conversions.
- Python Resource Module: Interface for getting and setting resource limits.
Definitions:
- T: Set of trajectory points, T = {t1, t2, ..., tN}, where N is the number of trajectory points.
- ti: The i-th trajectory point, represented as a 2D vector in continuous space, ti = (xi, yi), where xi and yi are floating-point coordinates.
- xmin, xmax, ymin, ymax: Minimum and maximum x and y values of the trajectory points, respectively.
- W, H: Width and height of the image (in pixels).
- Sx, Sy: Scaling factors for the x and y coordinates, calculated as:

  Sx = (W - 1) / (xmax - xmin)
  Sy = (H - 1) / (ymax - ymin)

- pi: The pixel coordinates corresponding to trajectory point ti, pi = (ui, vi), where ui and vi are integer pixel indices.
- D: The Density Heatmap Matrix, a W x H matrix initialized with zeros; Duv represents the density at pixel (u, v).
The mapping from continuous coordinates (xi, yi) to discrete pixel coordinates (ui, vi) is given by:

  ui = round((xi - xmin) * Sx)
  vi = round((yi - ymin) * Sy)

where round() represents rounding to the nearest integer.
The Density Heatmap Matrix D is populated as follows:
For each trajectory point ti:

  D[vi, ui] <- D[vi, ui] + 1

This can be expressed more formally as:

  Duv = Σ (i = 1 to N) δ(u, ui) * δ(v, vi)

where δ is the Kronecker delta function, defined as:

  δ(i, j) = 1 if i = j, and 0 otherwise
The population scheme encompasses the following scenarios:
- Aggregation: Multiple points mapping to the same pixel result in multiple increments to the corresponding cell in the matrix.
- Unique Pixel: A single point maps to a unique pixel, resulting in a single increment to the respective cell.
- No Trajectory Point Mapping: Pixels with no corresponding trajectory points have their related cells remain at zero density.
The sum of all elements in the Density Heatmap Matrix equals the total number of pixel hits, which corresponds to the number of trajectory points N and thus to the number of iterations n: each iteration contributes exactly one hit to some pixel, and a pixel may accumulate hits from many iterations.
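This invariant is easy to check numerically (synthetic points in a unit square, not the attractor itself):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 5_000
pts = rng.random((N, 2))               # N points in [0, 1) x [0, 1)
W = H = 100
Sx = (W - 1) / 1.0                     # extents here are simply [0, 1]
Sy = (H - 1) / 1.0

D = np.zeros((H, W), dtype=np.uint64)
for x, y in pts:
    u = round(x * Sx)                  # column index
    v = round(y * Sy)                  # row index
    D[v, u] += 1

print(int(D.sum()) == N)  # True: total hit count equals the number of points
```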