Usage Examples
This page collects several usage examples that vispy should support. We should keep these examples in mind while designing the API.
The API should make it easy to implement the following scenario. When panning, the camera is translated and should continue its course, decelerating, after the mouse or finger is released. This is similar to how scrolling works on touchscreen devices (smartphones, tablets).
This requires the event system to support several frame updates after the end of a user action.
[LC] This is supported by the current event API.
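As an illustration, the per-frame update this implies could look roughly like the sketch below; the `InertialCamera` class and its callbacks are hypothetical, not part of any existing API:

```python
import numpy as np

class InertialCamera:
    """Hypothetical camera that keeps panning after the drag ends."""
    def __init__(self, friction=0.9):
        self.position = np.zeros(2)
        self.velocity = np.zeros(2)
        self.friction = friction      # fraction of the velocity kept each frame

    def on_drag(self, delta):
        # While the mouse/finger is down, follow it directly and remember
        # the last displacement as the release velocity.
        self.position += delta
        self.velocity = np.asarray(delta, dtype=float)

    def on_frame(self):
        # Called by the event loop on every frame after release; the camera
        # keeps moving and decelerates exponentially.
        if np.linalg.norm(self.velocity) < 1e-3:
            return False              # animation finished, stop requesting frames
        self.position += self.velocity
        self.velocity *= self.friction
        return True                   # request another frame update
```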
When displaying a very long, high-resolution time-dependent signal, it should be possible to implement a dynamic undersampling method where only a subset of the samples is displayed when the view is zoomed out. When zooming in, the data is dynamically updated with more samples in the viewport, while excluding the samples that fall outside the range of the view. This particular scenario does not need to be implemented directly in vispy, but the architecture should make it easy to do (e.g. offer a custom hook in some Plot object that allows the data to be updated as soon as the view changes).
This scenario can be used to deal with low-memory graphics cards. It would also make the following application possible: visualizing large HDF5 datasets that don't fit in memory and need to be adaptively and dynamically fetched from the hard drive while navigating.
[AK] I think this is called "out of core visualization". I am interested in a similar case, rendering of very large volumetric datasets; an example paper is here.
[CR] Thanks for the pointer, I did not know this term actually. There's a working prototype example here; the code is quite ugly as I need to bypass some limitations in Galry, but it works. The goal is to have this code much cleaner with vispy! In this example, an HDF5 file containing an Nchannels x Nsamples array is displayed smoothly with a time window no larger than 5 seconds. One can pan and zoom freely, and the data is dynamically fetched from the hard drive. When zooming out, the data is undersampled; when zooming in, it is sampled normally, so that the total number of vertices at any time is bounded. HDD accesses occur in an external thread so that the UI is not blocked.
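A rough sketch of the hook described above, assuming a hypothetical `on_view_changed` callback that receives the visible time range; the file layout (Nchannels x Nsamples) follows the example in the comment, and h5py is used for the disk access:

```python
import h5py
import numpy as np

MAX_POINTS = 100_000                      # upper bound on vertices kept in the view

f = h5py.File('signals.h5', 'r')          # hypothetical file: Nchannels x Nsamples
data = f['data']
sample_rate = 20_000.0

def on_view_changed(t_min, t_max):
    """Hypothetical hook called whenever the visible time range changes."""
    i0 = max(0, int(t_min * sample_rate))
    i1 = min(data.shape[1], int(t_max * sample_rate))
    # Undersample so the vertex count stays bounded regardless of the zoom level.
    step = max(1, (i1 - i0) // MAX_POINTS)
    chunk = data[:, i0:i1:step]            # only this slice is read from disk
    return np.asarray(chunk)               # would then be uploaded to the GPU
```

In a real implementation the disk access would run in a background thread, as described in the comment above.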
It would be great if any built-in visualization could be customized by adding some shader code at some location. For example, consider a hypothetical ImageVisual which displays an image as a textured rectangle. It could be customized by adding some code that computes a fractal in the fragment shader.
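A sketch of what such a hook could look like; `ImageVisual`, `insert_fragment_code`, the `'pre_output'` hook name, and the `v_texcoord` varying are all purely illustrative:

```python
# Illustrative only: this API does not exist, it sketches how a GLSL snippet
# could be injected into the fragment shader of a hypothetical ImageVisual.
image = ImageVisual(image=my_photo)
image.insert_fragment_code('pre_output', """
    // Replace the texture lookup with a Mandelbrot iteration count.
    vec2 c = v_texcoord * 3.0 - vec2(2.0, 1.5);
    vec2 z = vec2(0.0);
    int n = 0;
    for (int k = 0; k < 64; k++) {
        if (dot(z, z) > 4.0) break;
        z = vec2(z.x * z.x - z.y * z.y, 2.0 * z.x * z.y) + c;
        n = k;
    }
    gl_FragColor = vec4(vec3(float(n) / 64.0), 1.0);
""")
```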
It should be feasible to deal with aspect ratios when:
- zooming in,
- resizing the window.
When zooming in with right-click + mouse move (for example), the figure can be resized both horizontally and vertically in an independent manner, depending on the x/y mouse displacements. But sometimes this behavior is unintended (example: with a photo), so it should be possible to constrain horizontal and vertical scaling to a given aspect ratio. This behavior could be built into the library, or a mechanism could be implemented that lets the user customize this.
Relatedly, when resizing the window, two things can happen: the figure is simply stretched to the new size, or the aspect ratio is constrained. In the latter case, the figure could be aligned on the center, left, or right, or this behavior could be customized by the user. For example, imagine that I want to create a photo viewer on top of vispy. When a photo is displayed and the window is resized, the aspect ratio should be fixed and:
- the photo can be scaled to fit entirely in the window,
- or it can be scaled to cover the window entirely, leaving no empty space (parts of the photo may be cropped),
- or some other behavior could apply.
Also, other visuals in the scene (e.g. text) could have a different behavior. Ideally, vispy should offer a convenient way of dealing with this, either at the figure level, or at the visual level.
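As a sketch of the first two photo-viewer behaviors, the uniform scale factor differs only in whether the smaller or the larger of the two axis ratios is used (illustrative only):

```python
def photo_scale(photo_w, photo_h, win_w, win_h, mode='fit'):
    """Uniform scale factor that preserves the photo's aspect ratio on resize.

    'fit'  -> the whole photo is visible (possibly with empty borders)
    'fill' -> the window is fully covered (parts of the photo may be cropped)
    """
    sx, sy = win_w / photo_w, win_h / photo_h
    return min(sx, sy) if mode == 'fit' else max(sx, sy)
```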
[AK] Robert and I have given this a lot of thought and are quite happy with how visvis currently handles this: each Axes object (i.e. subplot) has a daspect property that gives the relation between the dimensions. It can be set by the user. Zooming and resizing update this value, unless the daspectAuto property is False.
We will have a viewport that the user can pan/zoom. This viewport should also be capable of automatically scaling to fit all or some of its contents within the visible range. To make this feasible, all visuals must have a mechanism for reporting the range over which they will draw as well as informing their viewport when that range has changed.
Viewports should support a variety of modes for determining the auto-scaled range:
- Scale once to make some or all visuals visible
- Monitor the scene for changes and update automatic scaling whenever necessary
- Automatically pan to center the visuals without changing the scale
- For extended plots, auto-scale the y-axis to fit only the portion of data that is visible along the x-axis
- Exclude outliers when determining range
Some visuals will be drawn scale-invariant. For example: text, arrows, and scatter plot symbols should all (optionally) retain the same scale regardless of the viewport transformation. Viewports that auto-scale should also take this into account (this can be surprisingly tricky).
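A sketch of the kind of bounds-reporting interface this implies; the `Visual` base class, `bounds()` method, and `viewport.set_range()` call are hypothetical names:

```python
import numpy as np

class Visual:
    """Hypothetical base class: every visual can report its data extent."""
    def bounds(self):
        """Return ((xmin, xmax), (ymin, ymax)) covering everything this visual draws."""
        raise NotImplementedError

class LineVisual(Visual):
    def __init__(self, pos):
        self.pos = np.asarray(pos)                    # (N, 2) array of vertices

    def bounds(self):
        mins, maxs = self.pos.min(axis=0), self.pos.max(axis=0)
        return (mins[0], maxs[0]), (mins[1], maxs[1])

def auto_range(viewport, visuals, margin=0.05):
    """Illustrative 'scale once' mode: fit all visuals, with a small margin."""
    xranges, yranges = zip(*(v.bounds() for v in visuals))
    x0, x1 = min(r[0] for r in xranges), max(r[1] for r in xranges)
    y0, y1 = min(r[0] for r in yranges), max(r[1] for r in yranges)
    dx, dy = (x1 - x0) * margin, (y1 - y0) * margin
    viewport.set_range((x0 - dx, x1 + dx), (y0 - dy, y1 + dy))
```

The other modes (continuous monitoring, pan-only centering, visible-range y-scaling, outlier rejection) would build on the same bounds information.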
There should be several high-level, 2D plotting visuals which will often need to be combined and should share a common code base as well as storage for handling data. Examples: line plot, scatter plot, error bars, bar chart, candlestick, etc. Line plots are often combined with scatter plots and/or error bars. Likewise, bar charts and scatter plots can be used alone or combined with error bars. Each of these visuals might have several data-handling features in common:
- data should be efficiently appendable
- provides common transformations (log, fft, etc.)
- handling of nan/inf values
- caching data statistics (min/max/percentiles)
- caching downsampled data
- automatic interleaving with color, line width, etc.
- conversion and caching between indexed and non-indexed formats
- conversion of color formats
When designing these visuals, we want to make sure that 1) memory is conserved (shared between visuals) and 2) code for data handling is properly generalized and not copied between visuals.
[CR] Data could be shared not only in system memory, but also in GPU memory, i.e. the same vertex buffer objects or textures could be shared between different visuals. Example: graph rendering, with a single VBO containing the node positions, and an index buffer for the graph edges. Changing the node positions then means changing a single buffer in memory.
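A sketch of how such shared storage might be factored out, so that several visuals reference the same array (and, eventually, the same GPU buffer); all class names here are illustrative:

```python
import numpy as np

class SharedDataSource:
    """Illustrative container owning vertex data that several visuals reference
    without copying; the same idea extends to a shared VBO on the GPU."""
    def __init__(self, data):
        self.data = np.asarray(data, dtype=np.float32)
        self._listeners = []

    def attach(self, visual):
        self._listeners.append(visual)

    def update(self, new_data):
        self.data[...] = new_data             # single in-place update
        for visual in self._listeners:        # every attached visual redraws
            visual.on_data_changed(self)

# Graph example from the comment above: node positions shared, edges indexed.
nodes = SharedDataSource(np.random.rand(1000, 2))
# scatter = ScatterVisual(source=nodes)               # draws the node markers
# edges = LineVisual(source=nodes, index=edge_index)  # draws edges by index
```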
Some events triggered by a user action may take some time to complete due to intensive CPU computations or hard drive access. So it should be possible to execute some event callbacks in a background thread, and to have a mechanism to handle event completion. We should not rely on backend mechanisms like Qt signals and slots, at least not directly (we need a backend-independent way of doing this).
Alternatively, we could have a mechanism letting users implement this themselves instead of integrating it natively in vispy.
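A backend-independent sketch using only the standard library; the handler runs in a worker thread and a completion callback fires when it finishes (a real implementation would marshal the callback back onto the GUI thread):

```python
import time
from concurrent.futures import ThreadPoolExecutor

_executor = ThreadPoolExecutor(max_workers=1)

def run_async(handler, event, on_done):
    """Illustrative: run an expensive event handler off the GUI thread and
    invoke `on_done(result)` once it completes."""
    future = _executor.submit(handler, event)
    future.add_done_callback(lambda f: on_done(f.result()))

def slow_handler(event):
    time.sleep(1.0)           # stands in for heavy computation or disk access
    return "data"

# run_async(slow_handler, some_event, lambda result: canvas.update())
```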
It should be possible to use the library to design and layout publishable figures using a physical coordinate system (in/cm/pt/em). This has a few implications:
- High-level visuals may need to be aware of their display DPI
- Any size that can be specified numerically should optionally accept values in different units, for example `plot(x, y, line_width=1.5*pt)` (see the sketch after this list).
- Must be able to nest scalable viewports (for example, the inner viewport displays plot data while the outer viewport displays the entire figure).
- Scale-invariant visual elements such as text, line widths, and scatter plot symbols should only adjust to match the inner-most viewport. For example, if the inner viewport contains a text label, then the font size should (optionally) remain the same as the viewport is rescaled. However, if the outer viewport is rescaled, then the text should also scale accordingly.
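To make the `line_width=1.5*pt` idea concrete, here is a minimal sketch of a unit object that defers conversion to pixels until the display DPI is known; the `Unit` class and the `pt`/`cm` constants are hypothetical:

```python
class Unit:
    """Illustrative: a physical length resolved to pixels only when the DPI is known."""
    def __init__(self, points):
        self.points = points
    def __rmul__(self, value):
        return Unit(value * self.points)      # enables 1.5 * pt
    def to_pixels(self, dpi):
        return self.points * dpi / 72.0       # 1 pt = 1/72 inch

pt = Unit(1.0)
cm = Unit(72.0 / 2.54)                        # 1 cm = 72/2.54 pt

width = 1.5 * pt                              # as in plot(x, y, line_width=1.5*pt)
print(width.to_pixels(dpi=96))                # -> 2.0 pixels at 96 dpi
```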
There should exist a mechanism which allows any visual to be projected onto an arbitrary, non-rectilinear coordinate system. Since this is a relatively rare use-case, the mechanism should not interfere with the simplicity or performance of the common, rectilinear case. The coordinate projection itself should be defined in a specialized Visual such that its children specify their data in the untransformed coordinate system and the projection Visual orchestrates the conversion to rectilinear coordinates. An example scenegraph tree might look like: scene -> polar projector -> plot line. In this case, the plot line specifies its data in polar coordinates and the projector converts the line to rectilinear coordinates (possibly involving data resampling?).
[CR] Would this system handle 3D too? [LC] Tentatively yes; I don't see any reason to exclude 3D transforms from this facility.
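As an illustration of the projector idea, the core of a polar projection node is just a coordinate map applied to the child's vertices before the usual rectilinear transforms; everything below is a sketch, including the resampling question raised above:

```python
import numpy as np

def polar_to_rectilinear(vertices):
    """Map (theta, r) vertex data to (x, y).  A real projector Visual would apply
    this (and possibly resample the line so curves stay smooth) before handing
    the result to the ordinary rectilinear pipeline."""
    theta, r = vertices[:, 0], vertices[:, 1]
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

# scene -> polar projector -> plot line, as in the example above:
theta = np.linspace(0, 2 * np.pi, 200)
line_polar = np.column_stack([theta, 1 + 0.5 * np.sin(5 * theta)])
line_xy = polar_to_rectilinear(line_polar)    # what the projector would draw
```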
It should be possible to place objects in the scenegraph hierarchy that change the way child visuals are processed. For example:
- Children are rendered to an off-screen buffer and displayed instead as a textured quad (this is important for caching expensive render operations)
- Children are not drawn, but instead replaced with an alternate hierarchy of visuals (this is a possible way to implement coordinate projection)
- Children are drawn as bounding rectangles rather than their original contents
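A sketch of the first case (render children once into an off-screen buffer and reuse the result); the node interface and the `renderer` methods are hypothetical:

```python
class CachedNode:
    """Hypothetical scenegraph node: children are rendered into an off-screen
    framebuffer and then shown as a single textured quad until something
    below this node changes."""
    def __init__(self, children):
        self.children = children
        self._texture = None
        self._dirty = True

    def mark_dirty(self):
        self._dirty = True                    # called when any child changes

    def draw(self, renderer):
        if self._dirty or self._texture is None:
            # Expensive path: render the whole subtree into an FBO texture.
            self._texture = renderer.render_to_texture(self.children)
            self._dirty = False
        # Cheap path: draw the cached texture on a screen-aligned quad.
        renderer.draw_textured_quad(self._texture)
```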
I do not mean that we should create a GUI toolkit, but making a few simple widgets (like push buttons) and a few specialized widgets (e.g. a colormap editor or a contrast-scale editor) would be very valuable. Users should also be able to create custom widgets relatively easily.
In visvis, there's a strong separation between visuals that live in screen coordinates (i.e. widgets) and ones that live in "world coordinates". For vispy, we would have to make sure that this all comes together in a nice way with e.g. the above two use cases.
[NR] AntTweakBar can be used until we come up with a solution. I've made bindings and it's quite easy to integrate it into a toolkit (I did it for glut and glfw).
It should be possible to copy / reuse / serialize visuals to provide a variety of features:
- The same data can be viewed in multiple viewports (for example, 3D modeling software typically gives multiple views of the same scene). A scenegraph might have multiple viewports, all of which have the same children but display them with different transformations. This potentially does strange things to the structure of the scenegraph (items may have multiple parents).
- Alternately, visuals may be copied (thin copies; they could still re-use the underlying data and buffers) rather than re-used. This avoids the multiple-parent issue.
- Should be possible to copy visuals from one scene to another (for example, I am browsing through data and want to 'copy' a plot trace and 'paste' it into another view)
- Should be possible to serialize visuals so that a particular scene can be saved to / restored from file or transmitted over IPC or network connections. This might also be used to serve WebGL and other kinds of web-based clients, or provide a basis for exporting to vector graphics formats.
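A sketch of the kind of round-trippable description this implies, assuming each visual can report its type, its parameters, and a reference to (rather than a copy of) its data; all names are illustrative:

```python
def serialize_visual(visual):
    """Illustrative: describe a visual as plain data.  Large arrays are referenced
    by name so buffers can be shared, saved, or transmitted separately (e.g. as
    binary blobs alongside the JSON sent to a WebGL client)."""
    return {
        'type': type(visual).__name__,
        'params': visual.params,              # e.g. {'color': 'red', 'width': 2}
        'data_ref': visual.data_name,         # key into a separate buffer store
    }

def deserialize_visual(spec, buffers, registry):
    cls = registry[spec['type']]              # map 'LineVisual' -> class
    return cls(data=buffers[spec['data_ref']], **spec['params'])
```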
Not to specify one particular thing that I'd want, but more of an overall idea ... I'd like to be able to apply different volume rendering techniques, perhaps also customize these and somehow make them interactive. Render these together with other visuals (e.g. meshes or lines that represent segmentations or navigational information). The different objects may be partially transparent, which should be handled correctly (at least visually). Oh, and I want this in an augmented/virtual reality setting using a head-mounted display; so in stereo and at frame rates near 50 Hz.
Would like to be able to take any Canvas and render it with a variety of stereoscopic methods: using left/right OpenGL buffers, red/blue anaglyph, side-by-side display, etc. It would be nice to provide some VR functionality with this: generate projection matrices by specifying eye locations and display location (this should allow variable interocular distance and oblique viewing angles).
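A sketch of the asymmetric-frustum ("parallel axis") construction for generating per-eye projection matrices from an eye separation and a convergence distance; this is a standard stereo technique, shown here only to make the requirement concrete:

```python
import numpy as np

def frustum(left, right, bottom, top, near, far):
    """Standard OpenGL-style perspective frustum matrix."""
    M = np.zeros((4, 4))
    M[0, 0] = 2 * near / (right - left)
    M[1, 1] = 2 * near / (top - bottom)
    M[0, 2] = (right + left) / (right - left)
    M[1, 2] = (top + bottom) / (top - bottom)
    M[2, 2] = -(far + near) / (far - near)
    M[2, 3] = -2 * far * near / (far - near)
    M[3, 2] = -1.0
    return M

def stereo_projections(fovy, aspect, near, far, eye_sep, focal):
    """One projection matrix per eye, converging at distance `focal`.
    The interocular distance `eye_sep` is a free parameter, as requested above.
    Each eye's view matrix must additionally be translated by +/- eye_sep/2 in x."""
    top = near * np.tan(np.radians(fovy) / 2)
    half_w = aspect * top
    shift = 0.5 * eye_sep * near / focal      # horizontal frustum skew per eye
    left_eye = frustum(-half_w + shift, half_w + shift, -top, top, near, far)
    right_eye = frustum(-half_w - shift, half_w - shift, -top, top, near, far)
    return left_eye, right_eye
```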
Would like to have control over displaying multiple datasets on a high-resolution mesh. A specific use case of interest is to provide a backend for PySurfer (see examples of ideal output here), but the functionality easily generalizes to other packages. Beyond basic colored triangular mesh display capability, this use case requires at a minimum:
- Lighting control
- Control over drawing the outline of a triangular mesh in addition to the surface itself (hemisphere shown on the left here)
- Colormapping support
- Colorbar and annotation (this could easily be done at the PySurfer level)
- Ability to take "screenshots" / capture current image, esp. if it can be done at a higher resolution than the current display
- (Optional) offscreen rendering support
- Multiple views of the same object, as well as multiple "worlds" for showing different objects simultaneously (something like this)
The following things would be great as they would enhance our current capabilities, but are not essential (ordered by presumed difficulty):
- High-quality antialiasing
- Support for generating graphics/"screenshots" in headless mode (i.e., on systems without a display)
- Volumetric plotting
- WebGL support