Add a 3D paraview extractor option corresponding to layer thicknesses? #264

Open
pwolfram opened this issue Jun 25, 2019 · 7 comments

@pwolfram
Contributor
In order to represent volumetric data in ParaView we need to restructure the data in such a way that volume rendering will work (e.g., topologically 3D vs 2D).

The reason to do this is so that a full 3D volume can be output in https://github.com/MPAS-Dev/MPAS-Tools/blob/master/visualization/paraview_vtk_field_extractor/paraview_vtk_field_extractor.py

@GregAbram's code is at https://gist.github.com/d963664449fe02f511099dc272c47d1a (copied below for convenience)

cc @GregAbram, @xylar, @mark-petersen

# Greg Abram 06/25/2019
from vtk import *
from glob import glob
import numpy as np
from vtk.numpy_interface import dataset_adapter as dsa
from vtk.numpy_interface import algorithms as algs
import sys
import math

args = sys.argv[1:]
DEPTH_SCALE = 300
do_uvw = False
do_latlon = True

variables = []

while len(args) > 0:
  if args[0] == 'd':
    DEPTH_SCALE = float(args[1])  # vertical scale factor used to exaggerate the vertical dimension
    args = args[2:]
  elif args[0] == '+uvw':
    do_uvw = True  # uvw are the zonal, meridional, and vertical basis vectors
    args = args[1:]
  elif args[0] == '-uvw':
    do_uvw = False
    args = args[1:]
  elif args[0] == '+latlon':
    do_latlon = True  # append lat/lon point data for lat/lon projection
    args = args[1:]
  elif args[0] == '-latlon':
    do_latlon = False
    args = args[1:]
  else:
    variables.append(args[0]) 
    args = args[1:]

# files coming in from Xylar's post-processing tool
prdr = vtkXMLPolyDataReader()
prdr.SetFileName('data/staticFieldsOnVertices.vtp')

c2p = vtkCellDataToPointData()
c2p.SetInputConnection(prdr.GetOutputPort())
c2p.Update()

pgrid = dsa.WrapDataObject(c2p.GetOutput())

crdr = vtkXMLPolyDataReader()
crdr.SetFileName('data/staticFieldsOnCells.vtp')
crdr.Update()

# create python wrapper for vtk object
cgrid = dsa.WrapDataObject(crdr.GetOutput())

# bottom depth is now assigned to the triangular vertex points
pgrid.PointData.append(cgrid.CellData['bottomDepth'], 'bottomDepth')

nPoints    = pgrid.Points.shape[0]
nTriangles = pgrid.Polygons.shape[0] // 4  # each polygon record is 4 ints: count + 3 vertex IDs

lat = pgrid.PointData['latVertex']
lon = pgrid.PointData['lonVertex']

print('got latlon')

# radial vectors through each surface point

rxyz = pgrid.Points / np.sqrt((pgrid.Points*pgrid.Points).sum(axis=1)).reshape(len(pgrid.Points),1)

if do_uvw:
  # compute the zonal/meridional frame of reference for each point on the surface
  # computes the position vectors

  rx = rxyz[:,0]
  ry = rxyz[:,1]
  rz = rxyz[:,2]

  theta = np.arctan2(ry, rx)
  d = np.sqrt(rx*rx + ry*ry)
  phi = np.arctan2(rz, d)

  ijhat = rx[:,np.newaxis]*[1.0, 0.0, 0.0] + ry[:,np.newaxis]*[0.0, 1.0, 0.0]
  d = np.linalg.norm(ijhat, axis=1)
  ijhat = ijhat / d[:,np.newaxis]

  # zonal, meridional, vertical 3D coordinate vector on the sphere
  U = np.column_stack((-np.sin(theta), np.cos(theta), [0]*len(theta)))
  V = -np.sin(phi)[:,np.newaxis]*ijhat + np.cos(phi)[:,np.newaxis]*[0.0, 0.0, 1.0]
  W = rxyz 

  print('uvw ready')
else:
  print('not doing uvw')

# and points at sea level

nPoints = len(pgrid.Points)

timesteps = glob('data/time_series/fieldsOnCells.*.vtp')
timesteps = [timesteps[0]]  # NOTE: debug restriction to the first timestep; remove to process all


# produces grid to be shared for each timestep and assign values on the mesh
# for each timestep
first = True
for tf in timesteps:

  ts = tf.split('.')[1]

  print('timestep:', ts, tf)

  rdr1 = vtkXMLPolyDataReader()
  rdr1.SetFileName(tf)
  rdr1.Update()

  cgrid = dsa.WrapDataObject(rdr1.GetOutput())

  print('loaded')

  if first:

    first = False

    # get the number of layers (checks the last two digits of the names from Xylar's vtk output)
    nlayers = -1
    for name in cgrid.CellData.keys():
      if name[:30] == 'timeMonthly_avg_layerThickness':
        layer = int(name[-2:])
        if nlayers < layer:
          nlayers = layer

    # create list of slab thicknesses

    thicknesses = [cgrid.CellData['timeMonthly_avg_layerThickness_%02d' % i] for i in range(nlayers+1)]

    print(len(thicknesses), 'thicknesses')

    # bottomDepth is positive - i.e., distance below sea level.  Slab thicknesses are also
    # positive.  Therefore, the top level is the *negative* bottom depth plus the sum
    # of the layer thicknesses.

    # this is equivalent to the ssh (something that could be checked with an assert)
    toplevel = -pgrid.PointData['bottomDepth'] + np.sum(thicknesses, axis=0) 

    # arrangement to correspond to interior interpolation of the scalar field
    # (ssh,ssh - dz/2) half top layer
    # ...
    # interior layers
    # ...
    # (dz/2, bottomDepth) half bottom layer

    # To handle the cell-centeredness of the data, we force a layer at the top level, then
    # one 1/2 thickness[0] below that, which would be the center of the first slab.  Then we add
    # a layer at the center of each *subsequent* slab, then a layer at the bottom.   This results
    # in n+2 layers, so for cell-centered variables (that have the same number of vertical layers
    # as the layerThickness variable) we copy the first and last data slices.  For the P1 variables,
    # these are top-centered, so we assign the first slice, then interpolate values for each slab
    # center, then dup the last slab-center slice for the bottom slice.

    # depths[i] are the depths of the i'th slice at each point.   The 0'th slice is at the top
    # level; then there's a slice at the center of each slab, then there's the bottom.   The
    # depth of slice k (k > 1) is the depth of the previous slab center (k-1) minus the bottom
    # half of that slab (thickness[k-1]/2) minus the top half of the current slab (thickness[k]/2).

    
    # (ssh,ssh - dz/2) half top layer (builds out whole layer)
    depths = [toplevel, toplevel - (thicknesses[0]/2)]

    # interior layers (including top of the very bottom layer)
    for i in range(1, len(thicknesses)):
      depths.append(depths[-1] - ((thicknesses[i-1] + thicknesses[i]) / 2))

    # (dz/2, bottomDepth) half bottom layer  (close it out)
    depths.append(-pgrid.PointData['bottomDepth'])

    # diagnostic to check to make sure depths are reasonable
    print('depths at random point on the surface')
    for i, d in enumerate(depths):
      print(i, d[10000])

    # projection with vertical exaggeration to produce vtk points
    points = []
    for i,depth in enumerate(depths):
      x = pgrid.Points + (DEPTH_SCALE*depth)[:,np.newaxis]*rxyz
      y = np.array(x).astype('f4')
      points.append(y)

    # point values being converted in prep for vtk inputs
    points = np.ascontiguousarray(np.vstack(points)).astype('f4')

    print('points done')

    triangles = pgrid.GetPolygons().reshape((-1,4))[:,[1,2,3]]

    # wedge ("prism") vtk container topology
    prisms = []
    for i in range(len(depths)-1):
      prisms.append(np.hstack(([[6]]*len(triangles), triangles+[i*nPoints]*3, triangles+[(i+1)*nPoints]*3)))

    prisms = np.vstack(prisms).astype('i4')

    nPrisms = prisms.shape[0]
    print('nPrisms', nPrisms)

    # build vtk type as wedges
    cellTypes = dsa.VTKArray([VTK_WEDGE]*nPrisms).astype('u1')
    # each cell record is 7 ints (the count 6 followed by 6 point IDs), so locations step by 7
    cellLocations = dsa.VTKArray(np.array(range(0, 7*nPrisms, 7)).astype('i4'))

    ogrid = dsa.WrapDataObject(vtkUnstructuredGrid())

    ogrid.SetCells(cellTypes, cellLocations, prisms.flatten())
    ogrid.SetPoints(points)

    print(nPrisms, 'prisms on', len(points), 'points', 'max ID', ogrid.GetCells().max(), '...', len(ogrid.Cells))

    # appends lat/lon data to produced file
    if do_latlon:
      lat = np.degrees(lat)  # latVertex is in radians
      lat = np.concatenate([lat]*len(depths)).astype('f4')
      ogrid.PointData.append(lat, 'lat')
      lon = np.degrees(lon)  # lonVertex is in radians; convert before wrapping at 180
      lon = np.where(lon > 180, lon - 360, lon)
      lon = np.concatenate([lon]*len(depths)).astype('f4')
      ogrid.PointData.append(lon, 'lon')
      print('stacked latlon done')
    else:
      print('not doing latlon')


    # vectors to project scalar values into vector
    # multiply velocityZonal, velocityMeridional, etc to project in 3D to build VTK/ParaView vector 
    if do_uvw:
      U = np.concatenate([U]*len(depths)).astype('f4')
      V = np.concatenate([V]*len(depths)).astype('f4')
      W = np.concatenate([W]*len(depths)).astype('f4')
      ogrid.PointData.append(U, 'u')
      ogrid.PointData.append(V, 'v')
      ogrid.PointData.append(W, 'w')
      print('uvw done')
    else:
      print('not doing uvw')

  otf = 'TEST-world-%d-%s.vtu' % (DEPTH_SCALE, ts)

  # get variable names
  if len(variables) == 0:
    variables = [i[:-3] for i in cgrid.CellData.keys() if i[-3:] == '_00']

  for v in variables:
    print('starting', v)
    vnames = ['%s_%02i' % (v, i) for i in range(len(thicknesses))]
    slices = [np.array(cgrid.CellData[vname]).astype('f4') for vname in vnames]
    if len(slices) == len(thicknesses):

      # Replicate first slab value at top, last at bottom
      data = [slices[0]] + slices + [slices[-1]]    

    elif len(slices) == (len(thicknesses)+1):

      # then the data is centered with the *top* of each slab, plus the bottom
      # of the lowest slab.   First slice is correct.  Then, for each *slab*, interpolate
      # between top and bottom values.   Then add in the final slice which,
      # again, is correct
      
      data = [slices[0]]

      # vertically interpolate a slice at each slab center

      for i in range(len(slices)-1):
        data.append(np.sum([slices[i], slices[i+1]], axis=0) / 2.0)

      # and add the bottom

      data.append(slices[-1])
    else:
      print('unknown data length', v, len(slices))
      continue

    data = np.concatenate(data)
    data = np.nan_to_num(data).flatten()  # nans go to zero (should replace with -1E34)
    # needed to do streamlines / particle tracking / particle advection
    # correctly (but note that this won't be a physical value in the output --
    # probably should rename so that this is clear)
    if v == 'timeMonthly_avg_vertVelocityTop':
      data = DEPTH_SCALE * data
    ogrid.PointData.append(data, v)
    print(v, 'done,', len(data), 'elements')

  print('OGRID =========================================')
  print(ogrid.VTKObject)

#  # code that follows is experimental and problem specific (e.g., vis of ice sheets)
#  # this helps reduce output  ("clip") in order to produce a smaller file size
#
#  # carve out ice sheets
#  threshold1 = vtkThreshold()
#  threshold1.SetInputData(ogrid.VTKObject)
#  threshold1.SetInputArrayToProcess(1, 0, 0, 0, 'lat')
#  threshold1.ThresholdBetween(-1.48876, -1.261)
#  threshold1.Update()
#
#  print('T1 =========================================')
#  print(threshold1.GetOutput())
#
#  threshold2 = vtkThreshold()
#  threshold2.SetInputConnection(threshold1.GetOutputPort())
#  threshold2.SetInputArrayToProcess(1, 0, 0, 0, 'lon')
#  threshold2.ThresholdBetween(4.84, 5.75)
#  threshold2.Update()
#
#  print('T2 =========================================')
#  print(threshold2.GetOutput())

  print('writing', otf)
  wrtr = vtkXMLUnstructuredGridWriter()
  wrtr.SetFileName(otf)
  #wrtr.SetInputData(threshold2.GetOutput())  # need to use clipped output
  wrtr.SetInputData(ogrid.VTKObject)
  wrtr.Write()
  print(ts, tf, 'done')

# end the loop: "for tf in timesteps:"
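The slice-stacking scheme used above (one slice at the top level, one at each slab center, and one at the bottom, giving n+2 slices for n slabs) can be sanity-checked in isolation. This NumPy sketch uses a toy single-point column with uniform thicknesses; the helper name `stack_depths` is just for illustration:

```python
import numpy as np

def stack_depths(toplevel, thicknesses):
    """Return n+2 depth slices for n slabs: top level, slab centers, bottom."""
    depths = [toplevel, toplevel - thicknesses[0] / 2]
    for i in range(1, len(thicknesses)):
        # previous slab center minus half of each neighboring slab
        depths.append(depths[-1] - (thicknesses[i - 1] + thicknesses[i]) / 2)
    # bottom slice: top level minus the total column thickness
    depths.append(toplevel - thicknesses.sum(axis=0))
    return np.array(depths)

# toy column: 4 slabs of 10 m each at a single point, sea surface at 0
thicknesses = np.full((4, 1), 10.0)
depths = stack_depths(np.zeros(1), thicknesses)
print(depths.ravel())  # six slices: 0, -5, -15, -25, -35, -40
```

Note the four slab centers land at -5, -15, -25, and -35, and the bottom slice coincides with the (negative) bottom depth, matching the `depths` construction in the script.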
@mark-petersen
Collaborator

There are a few options. We could put this script as-is in MPAS-QuickViz:
https://github.com/MPAS-Dev/MPAS-QuickViz/tree/master/ocean

Or, we could test it on a few meshes, make sure it is general, test it in Python 3, and put it in MPAS-Tools.

@pwolfram
Contributor Author

@mark-petersen, I'm thinking, pending discussion with @xylar (I emailed him), that we would want this capability moved into the paraview extractor. You are right-- minimally, we can merge to QuickViz, but I think we want something more robust in the long term-- hence this thread to open up the discussion.

@xylar
Collaborator

xylar commented Jun 25, 2019

@pwolfram, I agree that this should be possible. I can't quite remember whether the information we already give the extractor to make the water-tight bathymetry work is enough for it to also construct a vertical coordinate. My guess is not yet.

The extractor should work for 3D fields from all MPAS components that will use it, so I guess we need a way of specifying which field contains layer thickness?

The vtk extractor works with a completely different library (pyevtk) than the example code above so I don't think it would be possible to do more than use it as a rough pseudocode for the changes that would be needed in the extractor.
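For what it's worth, pyevtk can also write wedge cells directly. This is only a rough sketch of how the connectivity arrays might look, assuming pyevtk's `unstructuredGridToVTK` interface (flat connectivity, per-cell end offsets, and VTK cell-type IDs; 13 is `VTK_WEDGE`); the import is guarded since pyevtk may not be available:

```python
import numpy as np

# a single wedge: a triangle at z = 0 extruded down to z = -1
x = np.array([0.0, 1.0, 0.0, 0.0, 1.0, 0.0])
y = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 1.0])
z = np.array([0.0, 0.0, 0.0, -1.0, -1.0, -1.0])

connectivity = np.array([0, 1, 2, 3, 4, 5])  # bottom triangle then top triangle
offsets = np.array([6])                      # end index of each cell in connectivity
cell_types = np.array([13], dtype='uint8')   # 13 == VTK_WEDGE

try:
    from pyevtk.hl import unstructuredGridToVTK
    unstructuredGridToVTK('wedge', x, y, z,
                          connectivity=connectivity, offsets=offsets,
                          cell_types=cell_types, pointData={'depth': z.copy()})
except ImportError:
    pass  # pyevtk not installed; the arrays above still show the layout
```

The main structural difference from the script above is that pyevtk takes per-cell end offsets rather than VTK's legacy (count, ids...) cell records.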

Anyway, I'll be around starting in early August and that's likely the soonest I'd be able to help with this.

@bradyrx
Contributor

bradyrx commented Jun 27, 2019

@pwolfram , do you have any example visualizations of this? Trying to wrap my head around what this implementation would look like. Would it be helpful for passive/isopycnal LIGHT trajectories?

@pwolfram
Contributor Author

pwolfram commented Jun 27, 2019

@xylar, that is correct -- we will need another flag to specify something like layerThickness for use in the vertical coordinate.

@GregAbram has example visualizations from this code-- @bradyrx, the goal is to get 3D Eulerian fields, and they could serve as a very useful background field (e.g., a volume-rendered cloud) that particle trajectories and streamtubes advect through.
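The extra flag could be sketched roughly like this; the option name `--thickness-var` and its default are purely hypothetical and would be settled in the actual implementation:

```python
import argparse

# hypothetical option telling the extractor which field holds layer thickness
parser = argparse.ArgumentParser()
parser.add_argument('--thickness-var', default='layerThickness',
                    help='name of the field giving vertical layer thickness')

# parse an explicit argument list instead of sys.argv for the example
args = parser.parse_args(['--thickness-var', 'timeMonthly_avg_layerThickness'])
print(args.thickness_var)
```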

@dpugmire

I'm trying to generate 3D files for volume rendering. Is the code above what I should be using? Has something else been added somewhere?
Thanks.

@xylar
Collaborator

xylar commented Feb 15, 2023

We have had no time to implement 3D vtk files yet. It's been on my to-do list but I don't think I will be able to work on it anytime soon.

https://github.com/MPAS-Dev/MPAS-Tools/blob/master/conda_package/mpas_tools/viz/paraview_extractor.py
