Fast, parallelized molecular dynamics trajectory data analysis.
CPPTRAJ is a program designed to process and analyze molecular dynamics trajectories and relevant data sets derived from their analysis. CPPTRAJ supports many popular MD software packages including Amber, CHARMM, Gromacs, and NAMD.
CPPTRAJ is also distributed as part of the freely available AmberTools software package. The official AmberTools release version of CPPTRAJ can be found at the Amber website.
For those wanting to use CPPTRAJ in their Python scripts, see Pytraj.
See what's new in CPPTRAJ. For those just starting out, you may want to check out some CPPTRAJ tutorials or Amber-Hub, which contains many useful "recipes" for CPPTRAJ.
For more information (or to cite CPPTRAJ) see the following publication:

Roe, D.R.; Cheatham, T.E., III. "PTRAJ and CPPTRAJ: Software for Processing and Analysis of Molecular Dynamics Trajectory Data." J. Chem. Theory Comput. 2013, 9 (7), 3084-3095.
For more information regarding trajectory/ensemble parallelism via MPI in CPPTRAJ, see the following publication:

Roe, D.R.; Cheatham, T.E., III. "Parallelization of CPPTRAJ Enables Large Scale Analysis of Molecular Dynamics Trajectory Data." J. Comput. Chem. 2018, 39 (25), 2110-2117.
CPPTRAJ is Copyright (c) 2010-2023 Daniel R. Roe. The terms for using, copying, modifying, and distributing CPPTRAJ are specified in the file LICENSE.
The `/doc` subdirectory contains PDF and LyX versions of the CPPTRAJ manual. The latest version of the manual is available for download here; an HTML version can be found here. There is also limited help for commands in interactive mode via `help [<command>]`; `help` with no arguments lists all known commands.
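As a brief illustration (assuming the `cpptraj` binary is on your PATH; `trajin` is used here only as an example command name), an interactive session might look like:

```
$ cpptraj
> help trajin    # show usage for a single command
> help           # list all known commands
> quit
```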
Code documentation can be generated with Doxygen via the command `make docs`. A limited developers guide is available here, and limited HTML-formatted documentation is available here.
Some examples are available in the `examples` subdirectory.
Run `./configure --help` for a short list of configure options; `./configure --full-help` will list all available configure options. For full functionality, CPPTRAJ makes use of the following libraries:
- NetCDF
- BLAS
- LAPACK
- Gzip
- Bzip2
- Parallel NetCDF (-mpi build only, for NetCDF trajectory output in parallel)
- CUDA (-cuda build only)
- HIP (-hip build only)
- FFTW (mostly optional; required for PME functionality and very large FFTs)
CPPTRAJ also makes use of the following bundled libraries; external versions can be used in place of these if desired.
- ARPACK; without this, diagonalization of sparse matrices in `diagmatrix` will be slow.
- helPME by Andy Simmonett, required for PME functionality.
- XDR for reading GROMACS XTC trajectories.
- TNG for reading GROMACS TNG trajectories.
C++11 support is required to enable particle mesh Ewald (PME) calculations via helPME. CPPTRAJ also uses the PCG32 and xoshiro128++ pseudo-random number generators.
`./configure gnu` should be adequate to set up compilation for most systems.
For systems without BLAS/LAPACK, FFTW, and/or NetCDF libraries installed,
CPPTRAJ's configure can attempt to download and install any enabled library
into $CPPTRAJHOME. By default CPPTRAJ will ask if these should be installed;
the '--buildlibs' option can be specified to try to automatically install any
missing enabled library. For example, ./configure -fftw3 --buildlibs gnu
will tell CPPTRAJ to build missing libraries including FFTW (if it is not
available). To prevent CPPTRAJ from asking about building external libraries,
use the '--nobuildlibs' option.
If Amber is installed and `$AMBERHOME` is properly set, the `-amberlib` flag can be specified to use the libraries already compiled in an AmberTools installation, e.g. `./configure -amberlib gnu`.
For multicore systems, the `-openmp` flag can be specified to enable OpenMP parallelization, e.g. `./configure -openmp gnu`.
An MPI-parallelized version of CPPTRAJ can also be built using the `-mpi` flag. CPPTRAJ can be built with both MPI and OpenMP; when running this build, users should take care to properly set OMP_NUM_THREADS if using more than one MPI process per node (the number of processes times threads should not be greater than the number of physical cores on the machine).
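As a sketch of that constraint: on a hypothetical single node with 16 physical cores, 4 MPI processes with 4 OpenMP threads each keeps processes times threads at 16. The `mpirun` launcher, binary name, and input file below are assumptions and may differ on your system:

```
# 4 MPI processes x 4 OpenMP threads = 16, not exceeding 16 physical cores
export OMP_NUM_THREADS=4
mpirun -np 4 cpptraj -i analysis.in
```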
A CUDA build is also available via the `-cuda` configure flag, and a HIP build via the `-hip` flag; the two are mutually exclusive. However, currently only a few commands benefit from GPU acceleration (see the manual for details). By default CPPTRAJ will be configured for multiple shader models; to restrict the CUDA build to a single shader model, set the SHADER_MODEL environment variable before running `configure`.
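As a minimal sketch, restricting a CUDA build to one shader model might look like the following; the architecture value shown is an assumption, so use the one appropriate for your GPU (see the manual for valid values):

```
export SHADER_MODEL=sm_70   # assumed example value
./configure -cuda gnu
```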
Any combination of `-cuda` (or `-hip`), `-mpi`, and `-openmp` may be used. The configure script by default sets everything up to link dynamically; the `-static` flag can be used to force static linking. If linking errors are encountered, you may need to specify library locations using the `--with-LIB=` options. For example, to use NetCDF compiled in /opt/netcdf, use the option `--with-netcdf=/opt/netcdf`. Alternatively, individual libraries can be disabled with the `-no<LIB>` options. The `-libstatic` flag can be used to statically link only the libraries that have been specified.
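Putting these options together, a hypothetical configure invocation for an MPI + OpenMP + CUDA build that statically links only a NetCDF installation under /opt/netcdf (a placeholder path) might look like:

```
./configure -cuda -mpi -openmp -libstatic --with-netcdf=/opt/netcdf gnu
```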
CPPTRAJ can also be built with support for OpenMM by specifying '--with-openmm=PATH', where PATH is the OpenMM directory containing the OpenMM library, i.e. PATH/lib/libOpenMM.so. Currently the only command that uses OpenMM is emin, so compiling with OpenMM is typically not required at this time.
After `configure` has been successfully run, `make install` will compile and place the cpptraj binary in the `$CPPTRAJHOME/bin` subdirectory. Note that on multithreaded systems, `make -j X install` (where X is an integer greater than 1 and less than the maximum number of cores on your system) will run much faster. After installation, it is highly recommended that `make check` be run as well to test the basic functionality of CPPTRAJ.
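A typical sequence (assuming the GNU compilers and a machine with more than four cores) might therefore be:

```
./configure gnu      # set up the build
make -j 4 install    # compile and install cpptraj into $CPPTRAJHOME/bin
make check           # test basic functionality
```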
There is an independently-maintained VIM syntax file for CPPTRAJ by Emmett Leddin available here.
Lead Author: Daniel R. Roe (daniel.r.roe@gmail.com), Laboratory of Computational Biology, National Heart, Lung, and Blood Institute, National Institutes of Health, Bethesda, MD.
CPPTRAJ began as a C++ rewrite of PTRAJ by Thomas E. Cheatham, III (Department of Medicinal Chemistry, University of Utah, Salt Lake City, UT, USA) and many routines from PTRAJ were adapted for use in CPPTRAJ, including code used in the following classes: Analysis_CrankShaft, Analysis_Statistics, Action_DNAionTracker, Action_RandomizeIons, Action_Principal, Action_Grid, GridAction, Action_Image, and ImageRoutines.
- James Maier (Stony Brook University, Stony Brook, NY, USA): Code for calculating J-couplings (used in Action_Jcoupling).
- Jason M. Swails (University of Florida, Gainesville, FL, USA): Action_LIE, Analysis_RunningAvg, Action_Volmap, Grid OpenDX output.
- Jason M. Swails (University of Florida, Gainesville, FL, USA) and Guanglei Cui (GlaxoSmithKline, Upper Providence, PA, USA): Action_SPAM.
- Mark J. Williamson (Unilever Centre for Molecular Informatics, Department of Chemistry, Cambridge, UK): Action_GridFreeEnergy.
- Hannes H. Loeffler (STFC Daresbury, Scientific Computing Department, Warrington, WA4 4AD, UK): Action_Density, Action_OrderParameter, Action_PairDist.
- Crystal N. Nguyen (University of California, San Diego) and Romelia F. Salomon (University of California, San Diego): Original Action_Gist.
- Pawel Janowski (Rutgers University, NJ, USA): Normal mode wizard (nmwiz) output, original code for ADP calculation in Action_AtomicFluct.
- Zahra Heidari (Faculty of Chemistry, K. N. Toosi University of Technology, Tehran, Iran): Original code for Analysis_Wavelet.
- Chris Lee (University of California, San Diego): Support for processing force information in NetCDF trajectories.
- Steven Ramsey (CUNY Lehman College, Bronx, NY): Enhancements to entropy calculation in original Action_Gist.
- Amit Roy (University of Utah, UT): Code for the CUDA version of the 'closest' Action.
- Andrew Simmonett (National Institutes of Health): Code for the reciprocal part of the particle mesh Ewald calculation (electrostatic and Lennard-Jones).
- Christina Bergonzo (National Institute of Standards and Technology, Gaithersburg, MD): Fixes and improvements to nucleic acid dihedral angle definitions (DihedralSearch).
- David S. Cerutti (Rutgers University, Piscataway, NJ, USA): Original code for the 'xtalsymm' Action.
- Johannes Kraml, Franz Waibl, and Klaus R. Liedl (Department of General, Inorganic, and Theoretical Chemistry, University of Innsbruck): Improvements and enhancements for GIST.
- David A. Case (Rutgers University, Piscataway, NJ, USA)
- Hai Nguyen (Rutgers University, Piscataway, NJ, USA)
- Robert T. McGibbon (Stanford University, Stanford, CA, USA)
- Holger Gohlke and Alrun N. Koller (Heinrich-Heine-University, Düsseldorf, Germany): Original implementation of matrix/vector functionality in PTRAJ, including matrix diagonalization, IRED analysis, eigenmode analysis, and vector time correlations.
- Michael Crowley (University of Southern California, Los Angeles, CA, USA): Original code for dealing with truncated octahedral unit cells.
- Viktor Hornak (Merck, NJ, USA): Original code for the mask expression parser.
- John Mongan (UCSD, San Diego, CA, USA): Original implementation of the Amber NetCDF trajectory format.
- Hannes H. Loeffler (STFC Daresbury, Scientific Computing Department, Warrington, WA4 4AD, UK): Diffusion calculation code adapted for use in Action_STFC_Diffusion.
- CPPTRAJ makes use of the GNU readline library for the interactive command line.
- CPPTRAJ uses the ARPACK library to calculate eigenvalues/eigenvectors from large sparse matrices.
- CPPTRAJ uses the xdrfile library for reading XTC files; specifically, a somewhat updated version from MDTRAJ that includes some bugfixes and enhancements. See src/xdrfile/README for details.
- CPPTRAJ uses the GROMACS TNG library for reading TNG files. See src/tng/README for details.
- The reciprocal part of the PME calculation is handled by the helPME library by Andy Simmonett.
- Support for reading DTR trajectories uses the VMD DTR plugin.
- CPPTRAJ uses code for the permuted congruential pseudo-random number generator PCG32 by Melissa O'Neill and the xoshiro128++ pseudo-random number generator by David Blackman and Sebastiano Vigna.
- The code for quaternion RMSD calculation was adapted from code in qcprot.c originally written by Douglas L. Theobald and Pu Liu (Brandeis University).
- The code for reading numpy arrays in src/libnpy is from libnpy, written by Leon Merten Lohse et al. (Universität Göttingen).