Changes from all commits (190 commits)
6ff195e
bump isl for isl::{multi_,}union_pw_aff::{min,max}_{multi_,}val
Jun 21, 2018
fc63411
move warning on positive minimal mapping to MappedScop::map
Jun 21, 2018
daac7ae
MappedScop::map: add extra sanity check on mappings
Jun 21, 2018
701ab91
tightenLaunchBounds: pass in MappedScop
Jun 21, 2018
bb697dc
tightenLaunchBounds: use mapping schedule instead of mapping filters
Jun 21, 2018
c805194
Update to support upcoming Caffe2 API
nicolasvasilache Jul 2, 2018
995a808
Use proper int division operator
nicolasvasilache Jul 5, 2018
f306bee
Merge pull request #550 from facebookresearch/pr/tighten
skimo-openhub Jul 5, 2018
553203c
Update test_caffe2 to latest python bindings
Jul 6, 2018
379a450
Update caffe2_benchmak.py to latest python API
Jul 6, 2018
b77b583
Merge pull request #545 from nicolasvasilache/pr/caffe2-update
nicolasvasilache Jul 6, 2018
3516e62
Drop caffe2 benchmark from OSS
nicolasvasilache Jul 9, 2018
7dbb65b
Move to trunk LLVM
nicolasvasilache Jul 10, 2018
2243364
Merge pull request #565 from nicolasvasilache/pr/llvm-trunk
nicolasvasilache Jul 11, 2018
3dfab71
Merge pull request #559 from nicolasvasilache/pr/fbcode-update
nicolasvasilache Jul 12, 2018
20d3d4f
bump isl for replacing isl_space_{un}named_set_from_params
Jul 10, 2018
1618c89
schedule_print.cc: drop dead code
Jul 12, 2018
f045fc1
Merge pull request #561 from facebookresearch/pr/space
skimo-openhub Jul 12, 2018
d02a9e9
Merge pull request #567 from facebookresearch/pr/dead
skimo-openhub Jul 12, 2018
d1e47ff
promoteToSharedGreedy: do not intersect schedule with active points
ftynse Jun 22, 2018
5f384ce
promoteToSharedGreedy: extract promoteToSharedBelow
ftynse Jun 22, 2018
a2aecd3
promotionImprovesCoalescing: use partial schedule instead of full
ftynse Jun 22, 2018
e3c26cb
memory_promotion_heuristic.cc: drop fullSchedule
ftynse Jun 22, 2018
7c89338
promoteToSharedBelow: take into account the mapping to blocks
ftynse Jun 22, 2018
d9aef27
promoteToSharedBelow: rename argument from bandNode to node
ftynse Jun 22, 2018
56a0343
promoteToSharedBelow: disallow promotion below sequence/set
ftynse Jul 4, 2018
02e6781
promoteToSharedBelow: disallow promotion anywhere below thread mapping
ftynse Jul 4, 2018
fe8f519
promoteToSharedGreedy: drop unused argument
ftynse Jun 22, 2018
5b448f0
merge promoteToSharedGreedy and promoteGreedilyAtDepth into one function
ftynse Jun 28, 2018
b0b73cb
promoteToSharedBelow: extract out isInThreadMappedScope
ftynse Jun 28, 2018
900cb42
promoteToSharedAtDepth: ignore scopes below thread mapping
ftynse Jun 28, 2018
04f1062
promoteToSharedAtDepth: do not throw at depth 0
ftynse Jun 28, 2018
18ace39
introduce shared_depth to mapping options
ftynse Jun 28, 2018
fe11d2a
do not look up the new outer band after shared memory promotion
ftynse Jun 28, 2018
c91026a
make sharedDepth mapping option tunable
ftynse Jun 28, 2018
a145a43
Merge pull request #537 from facebookresearch/shared-promotion-anywhere
ftynse Jul 13, 2018
289570c
[autotuning] User defined tile,block, grid sizes
Jul 11, 2018
5b58a19
Merge pull request #564 from facebookresearch/user_tile_autotuner
ftynse Jul 13, 2018
4d23ce0
move isl_interface/isl/stdint.h to isl_interface/include/isl/stdint.h
Jul 9, 2018
63aaab3
do not rely on CMakeLists.txt in isl submodule
Jul 9, 2018
4aad2b1
Merge pull request #557 from facebookresearch/pr/isl
skimo-openhub Jul 13, 2018
d15cb75
Update writing_layers.rst
nicolasvasilache Jul 13, 2018
bd7c15b
Merge pull request #569 from facebookresearch/nicolasvasilache-patch-1
nicolasvasilache Jul 13, 2018
7e56a94
Add ReLU + masked convolution
lvdmaaten Jul 16, 2018
14d1c8c
Merge pull request #571 from lvdmaaten/patch-1
nicolasvasilache Jul 16, 2018
99c0043
bump isl for schedule_nonneg_var_coefficient option
Jul 2, 2018
7a92509
use isl option to force non-negative coefficients
Jul 2, 2018
bb59391
Merge pull request #560 from facebookresearch/pr/schedule
skimo-openhub Jul 19, 2018
d7f9688
insertCopiesUnder: simplify before specializing
Jul 13, 2018
ade2dbf
insertCopiesUnder: use isl::space::set_set_tuple_id
Jul 13, 2018
c305abc
insertCopiesUnder: use isl::space::get_map_range_tuple_id
Jul 13, 2018
e4ced9c
insertCopiesUnder: use isl::map::set_range_tuple_id
Jul 13, 2018
ede77d7
addSingletonReferenceGroups: use isl::map::get_range_tuple_id
Jul 13, 2018
47c99df
TensorReferenceGroup::makeSingleton: use isl::space::get_map_range_tu…
Jul 13, 2018
0ec5785
CodegenStatementContext: use isl::pw_multi_aff::get_range_tuple_id
Jul 13, 2018
a10ebf8
emitRegisterAccess: use isl::multi_pw_aff::get_range_tuple_id
Jul 13, 2018
33f51a0
emitMappedTensorAccess: use isl::multi_aff::set_range_tuple_id
Jul 16, 2018
d7442e5
Merge pull request #570 from facebookresearch/pr/tuple
skimo-openhub Jul 20, 2018
897352e
bump isl for dropping dim_type
Jul 18, 2018
c0bd4d1
updateTopLevelContext: remove construction of universe set
Jul 20, 2018
a3bac39
updateTopLevelContext: drop redundant const_cast
Jul 20, 2018
96a5ba8
[Experimental] Debug apparent race condition on top1
nicolasvasilache Jul 20, 2018
b7dcfe6
reformat schedule_tree_elem.h
ftynse Jul 9, 2018
16e506f
ScheduleTreeContext: hide constructor
ftynse Jul 9, 2018
58ff8f1
ScheduleTreeDomain: hide constructor
ftynse Jul 9, 2018
0384a55
ScheduleTreeExtension: hide constructor
ftynse Jul 9, 2018
5390b25
ScheduleTreeFilter: hide constructor
ftynse Jul 9, 2018
bf31528
ScheduleTreeMapping: uninline constructor
ftynse Jul 9, 2018
58b9cd4
ScheduleTreeMapping: hide constructor
ftynse Jul 9, 2018
f3913c4
ScheduleTreeSequence: hide constructor
ftynse Jul 9, 2018
2b41d2b
ScheduleTreeSet: hide constructor
ftynse Jul 9, 2018
a03ad7d
ScheduleTreeBand: rename fromMultiUnionPwAff to make
ftynse Jul 9, 2018
bc150db
ScheduleTreeThreadSpecificMarker: hide constructor
ftynse Jul 9, 2018
ef1aeac
ScheduleTreeBand::make take more arguments
ftynse Jul 9, 2018
066affa
ScheduleTreeContext: hide copy constructor
ftynse Jul 11, 2018
ddef6bc
ScheduleTreeDomain: hide copy constructor
ftynse Jul 11, 2018
55ecf11
ScheduleTreeExtension: hide copy constructor
ftynse Jul 11, 2018
e326124
ScheduleTreeFilter: hide copy constructor
ftynse Jul 11, 2018
ac7fee4
ScheduleTreeMapping: hide copy constructor
ftynse Jul 11, 2018
6d09b28
ScheduleTreeSequence: hide copy constructor
ftynse Jul 11, 2018
ff810a8
ScheduleTreeSet: hide copy constructor
ftynse Jul 11, 2018
b66ba3e
ScheduleTreeBand: hide copy constructor
ftynse Jul 11, 2018
c4e612c
ScheduleTreeThreadSpecificMarker: introduce (static) copy constructor
ftynse Jul 11, 2018
c555112
ScheduleTree: introduce virtual "clone" method
ftynse Jul 11, 2018
827ed00
Move tc/core/polyhedral/functional.h tc/core/functional.h
nicolasvasilache Jul 23, 2018
71c013c
Remove best options fro autotuner and always recover from cache
nicolasvasilache Jul 23, 2018
6d24f99
Merge pull request #576 from nicolasvasilache/pr/debug
nicolasvasilache Jul 23, 2018
33527b1
Retire Tapir path for now
nicolasvasilache Jul 19, 2018
5f1bfc0
Kill dead code
nicolasvasilache Jul 19, 2018
0443af1
Start using Halide to emit LLVM IR
nicolasvasilache Jul 19, 2018
4130be2
More general usage of makeHalideExpr
nicolasvasilache Jul 20, 2018
2830c07
Stop pretending we support multiple LLVM versions
nicolasvasilache Jul 20, 2018
e5c6778
Add primitive CPU mapper
nicolasvasilache Jul 20, 2018
8f3659f
Drop emitBasicBlock
nicolasvasilache Jul 20, 2018
6eb026f
Split string
nicolasvasilache Jul 20, 2018
37fe91b
Merge pull request #574 from facebookresearch/pr/clean-up
skimo-openhub Jul 24, 2018
ace0aac
bump isl for change in export of isl_schedule_node_band_member_set_as…
Jul 18, 2018
2ced786
Merge pull request #573 from facebookresearch/pr/enum
skimo-openhub Jul 24, 2018
3a064d0
TuningHarness<Backend>::runOneIteration: use TC_CHECK_GT instead of C…
Jul 24, 2018
563e793
Merge pull request #577 from facebookresearch/pr/fix-576
skimo-openhub Jul 24, 2018
5b6157a
MappedScop::findBestSync: add missing params() call
Jul 20, 2018
87ab5fb
tensorElementsSet: drop spurious range() call
Jul 24, 2018
2e0ba43
tensorElementsSet: drop spurious params() call
Jul 21, 2018
c52b574
Scop::makeContext: drop spurious params() call
Jul 24, 2018
1e7572d
halide2isl.cc: extractAccess: drop redundant cast
Jul 21, 2018
9ee4c33
halide2isl.cc: extractAccess: construct expressions on parameter space
Jul 21, 2018
84ec837
tensorElementsSet: construct expressions on parameter space
Jul 24, 2018
fc8f5fb
generalize operator&(isl::*, isl::*) to template form
Jul 20, 2018
7e84859
generalize operator+(isl::*, isl::*) to template form
Jul 24, 2018
255e251
Make insertion points explicit in LLVM IR
nicolasvasilache Jul 23, 2018
5e425ac
operator+(int i, isl::aff A): call isl::aff:add_constant_si
Jul 24, 2018
9904d30
generalize operator+(int i, T A) to template form
Jul 24, 2018
21ed914
Merge pull request #575 from nicolasvasilache/pr/cpu-mapper
nicolasvasilache Jul 24, 2018
288fcdc
Introduce CompilerOptions
ftynse Jul 24, 2018
54ed7a6
lang/error_report: use CompilerOptions to control the warning emission
ftynse Jul 24, 2018
b13de92
Compiler API: properly put TreeRef-based compile into detail namespace
ftynse Jul 24, 2018
207ea4b
tc2halide: put throwWarnings in CompilerOptions and use the latter
ftynse Jul 24, 2018
fb45cf6
compiler API: take CompilerOptions as an optional argument
ftynse Jul 24, 2018
d4a569e
Sema: put a member variable after functions
ftynse Jul 24, 2018
3bbf8d8
Sema: respect CompilerOptions
ftynse Jul 24, 2018
7a909c3
Supress warnings during the autotuner run
ftynse Jul 24, 2018
012e970
Expose dump_ptx flag to Python
nicolasvasilache Jul 10, 2018
6a38863
Make Halide output standard types
nicolasvasilache Jul 10, 2018
d2caf9d
Reconcile builtins and cuda
nicolasvasilache Jul 10, 2018
71b01c2
Add tum.py example based on LengthsCosineCoherence
nicolasvasilache Jul 10, 2018
dcf4d83
Use NO_CUDA_SDK rather than undefine CUDA_HOME
nicolasvasilache Jul 12, 2018
2d728cc
Add tc_config.h.in
nicolasvasilache Jul 12, 2018
c83c36d
Stop putting generated protos in source tree
nicolasvasilache Jul 12, 2018
052a3ef
Generate PTX with LLVM trunk
nicolasvasilache Jul 11, 2018
ecd85a1
Generate PTX with NVCC
nicolasvasilache Jul 12, 2018
4e37c4d
Factor out system calls
nicolasvasilache Jul 24, 2018
a1b6cc6
Drop NVRTC_CUB in non-RTC paths
nicolasvasilache Jul 24, 2018
5c00a16
cuda libraries: fix operator precedence issue in macro
nicolasvasilache Jul 24, 2018
031cf48
Merge pull request #579 from facebookresearch/compiler-options
ftynse Jul 24, 2018
1499725
Merge pull request #566 from nicolasvasilache/pr/tum
nicolasvasilache Jul 25, 2018
03512da
Merge pull request #558 from facebookresearch/schedule-tree-evolution
ftynse Jul 25, 2018
e75086d
Merge pull request #578 from facebookresearch/pr/pre-template
skimo-openhub Jul 25, 2018
7fafcc1
revert rewind of isl submodule
Jul 25, 2018
e1895fc
Merge pull request #581 from facebookresearch/pr/fix-566
nicolasvasilache Jul 25, 2018
0169dd7
accessSubscriptsAreUnrolledLoops: use isl::space::add_unnamed_tuple_ui
Jul 25, 2018
e3dac83
Merge pull request #582 from facebookresearch/pr/clean-up
skimo-openhub Jul 25, 2018
7ecfcc2
Namespace cpu::MappedScop to avoid collisions
nicolasvasilache Jul 25, 2018
b596539
Scope relevant polyhedral/cuda classes under cuda namespace
nicolasvasilache Jul 25, 2018
46c9cd7
bump isl for fix of isl_*_aff_*_val exports
Jul 25, 2018
87a64bb
Merge pull request #585 from facebookresearch/pr/fix_aff_val_export
skimo-openhub Jul 25, 2018
9bd04d4
Merge pull request #584 from nicolasvasilache/pr/namespace
ftynse Jul 25, 2018
c4588e9
boundInstancesAndMarkUnroll: do not call domain() on a set space
Jul 25, 2018
71fb117
ScheduleTreeBand::memberRange: do not call domain() on a set space
Jul 25, 2018
eccf8cd
Merge pull request #586 from facebookresearch/pr/pre-template
skimo-openhub Jul 25, 2018
8cf2f43
bump isl for fix of isl_map_from_union_map export
Jul 30, 2018
1854621
Merge pull request #590 from facebookresearch/pr/fix_map_from_union_m…
skimo-openhub Jul 30, 2018
de70264
schedule_utils.cc: drop dead code
Jul 31, 2018
e181cf8
schedule_utils.cc: drop dead code
Jul 31, 2018
469317f
Merge pull request #591 from facebookresearch/pr/dead
skimo-openhub Jul 31, 2018
33caff2
bump isl for fix of isl_union_pw_aff_mod_val export
Aug 2, 2018
69a9b34
Merge pull request #592 from facebookresearch/pr/fix_mod_val_export
skimo-openhub Aug 2, 2018
e0e7f5f
fix return type of operator+(int i, T A)
Jul 27, 2018
b4ee22e
generalize operator+(T A, int i) to template form
Jul 27, 2018
017f6b2
generalize operator-(T A, int i) to template form
Jul 31, 2018
c6edb23
generalize operator-(int i, T A) to template form
Jul 31, 2018
4cd6b79
generalize operator/(S left, T right) to template form
Jul 30, 2018
940516d
bump isl for export of isl_aff_add_constant_val
Jul 26, 2018
dd5952a
operator+(isl::aff A, isl::val v): call isl::aff::add_constant
Aug 3, 2018
8775082
generalize operator+(T A, isl::val v) to template form
Aug 3, 2018
7789b79
generalize operator*(T A, isl::val v) to template form
Aug 3, 2018
3be2511
MappedScop::insertMappingContext: drop redundant universe() call
Jul 26, 2018
ed82d3e
ScheduleTreeBand::drop: do not call domain() on a set space
Jul 27, 2018
a6adf84
memory_promotion.cc: referenceOriginalAccessesImpl: add missing param…
Aug 2, 2018
5900b06
Scop::makeScop: automatically deduce type of space variable
Jul 31, 2018
340abcf
halide2isl.cc: extractAccess: automatically deduce type of space vari…
Jun 21, 2018
3a8e622
promotionImprovesCoalescing: use get_map_list
Jul 27, 2018
5651b42
addSingletonReferenceGroups: use get_map_list
Jul 27, 2018
0036cd2
launchBounds: avoid use of to_str() isl methods
Aug 3, 2018
7c172e5
test_core: drop construction of schedule
Aug 8, 2018
5eef9d7
test_cuda_mapper.cc: drop removal of user pointers from isl_id objects
Aug 8, 2018
6865ad8
Update installation.rst
nicolasvasilache Aug 9, 2018
fbaf4cd
Merge pull request #596 from facebookresearch/pr/dead
skimo-openhub Aug 9, 2018
41e5334
Merge pull request #595 from facebookresearch/pr/clean-up
skimo-openhub Aug 9, 2018
08902aa
Merge pull request #594 from facebookresearch/pr/pre-template
skimo-openhub Aug 9, 2018
6668481
Scop::makeScop: only copy dependences if they have been computed
Aug 6, 2018
5281e66
bump isl for merge of C++ bindings
Aug 6, 2018
0979b42
Merge pull request #598 from facebookresearch/nicolasvasilache-patch-1-1
ftynse Aug 13, 2018
38e3032
Merge pull request #599 from facebookresearch/pr/merge_master
skimo-openhub Aug 16, 2018
c752590
Fix a broken link in README
tosaka2 Aug 21, 2018
00caa7d
Merge pull request #600 from tosaka2/patch-1
skimo-openhub Aug 21, 2018
744d35e
ScheduleTreeMapping::ScheduleTreeMapping: drop redundant initialization
Aug 21, 2018
220b590
Merge pull request #601 from facebookresearch/pr/pre-template
skimo-openhub Aug 31, 2018
10be3f3
Add file headers for OSS requirement
Oct 7, 2019
fd01443
Merge pull request #625 from facebookresearch/add-headers
prigoyal Oct 7, 2019
680f8c9
Remove references to unmaintained website
zpao Apr 28, 2023
4 changes: 2 additions & 2 deletions .circleci/config.yml
@@ -9,11 +9,11 @@ jobs:
     steps:
       - checkout
       - run:
-          name: conda_tapir_halide
+          name: conda_llvm_halide
           command: |
             . /opt/conda/anaconda/bin/activate
             source activate tc_build
-            conda install -y -c nicolasvasilache llvm-tapir50 halide
+            conda install -y -c nicolasvasilache llvm-trunk halide

       - run:
           name: check_formatting
4 changes: 2 additions & 2 deletions .github/ISSUE_TEMPLATE.md
@@ -2,7 +2,7 @@ Tensor Comprehensions Github Issues Guidelines
 ----------------------------------------------

 If you have a feature request or a bug report (build issue), please open an issue on Github and fill the template below so we can help you better and faster. If you have some general
-questions about Tensor Comprehensions, please visit our [slack channel](https://tensorcomprehensions.herokuapp.com/) or email us at tensorcomp@fb.com
+questions about Tensor Comprehensions, please email us at tensorcomp@fb.com

 For build issues, please add `[Build]` at the beginning of issue title.

@@ -16,7 +16,7 @@ When submitting a bug report, please include the following information (where re
 - GCC/GXX version (if compiling from source):
 - LLVM/Tapir git hash used (if compiling from source):

-To get the hash, run: `$HOME/clang+llvm-tapir5.0/bin/clang --version`
+To get the hash, run: `$CONDA_PREFIX/bin/clang --version`

 - Commit hash of our repo and submodules (if compiling from source):
5 changes: 0 additions & 5 deletions .gitignore
@@ -3,9 +3,6 @@ __pycache__/
 *~
 build/*
 docs/build/*
-tc/proto/*.py
-tc/proto/*.cc
-tc/proto/*.h
 third-party/*_cache
 third-party/llvm_sources*
 third-party-install/*
@@ -17,8 +14,6 @@ conda
 */.nfs*
 tensor_comprehensions.egg-info/
 tensor_comprehensions/version.py
-tensor_comprehensions/*.proto
-tensor_comprehensions/*_pb2.py
 slurm-*
 examples/results*
 *.pyc
2 changes: 1 addition & 1 deletion .jenkins/build.sh
@@ -60,7 +60,7 @@ cd /var/lib/jenkins/workspace
 git submodule update --init --recursive

 source activate tc_build
-conda install -y -c nicolasvasilache llvm-tapir50 halide
+conda install -y -c nicolasvasilache llvm-trunk halide
 conda install -y -c conda-forge eigen
 conda install -y -c nicolasvasilache caffe2

26 changes: 17 additions & 9 deletions CMakeLists.txt
@@ -139,9 +139,6 @@ if(WITH_CUDA)
   find_package(CUDA REQUIRED)
   include_directories(BEFORE ${CUDA_TOOLKIT_ROOT_DIR}/include)

-  # modified CUB
-  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DWITH_CUDA -DCUDA_HOME=\"\\\"${CUDA_INCLUDE_DIRS}\\\"\" -DCUB_HOME=\"\\\"${CUB_INSTALL_DIR}\\\"\" ")
-
   # Inherited from Torch, see
   # https://github.com/torch/cutorch/blob/master/lib/THC/cmake/select_compute_arch.cmake
   INCLUDE(cmake/select_compute_arch.cmake)
@@ -226,13 +223,11 @@ message(STATUS "Found ATen.so file: ${ATEN_LIBRARIES}")
 ################################################################################
 # isl
 ################################################################################
-set(ISL_INT "gmp" CACHE STRING "Which package to use to represent multi-precision integers (gmp|imath)")
 # use locally generated C++ bindings
 include_directories(AFTER ${PROJECT_SOURCE_DIR}/isl_interface/include)
 include_directories(AFTER ${PROJECT_SOURCE_DIR}/third-party/islpp/include)
 include_directories(AFTER ${CMAKE_CURRENT_BINARY_DIR}/third-party/islpp/include)
-add_subdirectory(third-party/islpp)
-set(ISL_LIBRARIES isl-static)
+add_subdirectory(external/isl)

 ################################################################################
 # Halide
@@ -261,6 +256,20 @@ include(cmake/GetGitRevisionDescription.cmake)
 ################################################################################
 # Finally, build
 ################################################################################
+# Variables for tc_config.h.in
+set(TC_DIR ${TC_DIR})
+execute_process(COMMAND ${CLANG_PREFIX}/bin/llvm-config --bindir OUTPUT_VARIABLE LLVM_BIN_DIR OUTPUT_STRIP_TRAILING_WHITESPACE)
+set(TC_LLVM_BIN_DIR ${LLVM_BIN_DIR})
+if (WITH_CUDA)
+  # CUDA-specific variables for tc_config.h.in
+  set(TC_WITH_CUDA 1)
+  set(TC_CUB_INCLUDE_DIR ${CUB_INSTALL_DIR})
+  set(TC_CUDA_TOOLKIT_ROOT_DIR ${CUDA_TOOLKIT_ROOT_DIR})
+  set(TC_CUDA_INCLUDE_DIR ${CUDA_INCLUDE_DIRS})
+else()
+  set(TC_WITH_CUDA 0)
+endif()
+configure_file("tc/tc_config.h.in" "${CMAKE_CURRENT_BINARY_DIR}/tc/tc_config.h")

 ################################################################################
 # Compile flags
@@ -283,13 +292,12 @@ elseif(${CMAKE_BUILD_TYPE} MATCHES "Release")
 endif()
 message(STATUS "CMAKE_INSTALL_PREFIX is ${CMAKE_INSTALL_PREFIX}")

-set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DTC_DIR=\"\\\"${TC_DIR}\\\"\" ")

 include_directories(BEFORE ${PROJECT_SOURCE_DIR})
 include_directories(BEFORE ${CMAKE_CURRENT_BINARY_DIR})
 add_subdirectory(tc)

 # At the moment pybind is only supported in CUDA mode and compilation fails
-# for non-CUDA mode (CUDA_HOME and CUB_HOME undefined error).
+# for non-CUDA mode (CUDA_INCLUDE_DIR and CUB_INCLUDE_DIR undefined error).
 # Once the core CPU mapper is stabilized we can worry about pybind, deactivate
 # conditionally for now
 if (WITH_CUDA)
3 changes: 1 addition & 2 deletions README.md
@@ -29,7 +29,7 @@ After a few generations of `autotuning` on a 2-GPU P100 system, we see results r

 ![Autotuning Sample](docs/source/_static/img/autotuning.png)

-In C++ a minimal autotuning example resembles the [following](example/example_tensordot.cc):
+In C++ a minimal autotuning example resembles the [following](tc/examples/tensordot.cc):
 ```cpp
 TEST(TensorDot, SimpleAutotune) {
   // 1. Define and setup the TC compilation unit with CUDA memory
@@ -131,7 +131,6 @@ You can find documentation [here](https://facebookresearch.github.io/TensorCompr

 * **Email**: tensorcomp@fb.com
 * **GitHub issues**: bug reports, feature requests, install issues, RFCs, thoughts, etc.
-* **Slack**: For discussion around framework integration, build support, collaboration, etc. join our slack channel https://tensorcomprehensions.herokuapp.com/.

 # Code of Conduct
 See the [CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md) file for more details.
14 changes: 14 additions & 0 deletions build.sh
@@ -1,3 +1,17 @@
+# Copyright (c) 2017-present, Facebook, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##############################################################################
 #! /bin/bash
 set -ex

14 changes: 14 additions & 0 deletions check_and_fix_format.sh
@@ -1,3 +1,17 @@
+# Copyright (c) 2017-present, Facebook, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##############################################################################
 #! /bin/bash

 CLANG=${CLANG:=clang-format-4.0}
16 changes: 15 additions & 1 deletion cmake/FindCuDNN.cmake
@@ -1,3 +1,17 @@
+# Copyright (c) 2017-present, Facebook, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##############################################################################
 # Taken from Caffe2

 # - Try to find cuDNN
@@ -52,4 +66,4 @@ if(CUDNN_FOUND)
   set(CUDNN_LIBRARIES ${CUDNN_LIBRARY})
   message(STATUS "Found cuDNN: v${CUDNN_VERSION} (include: ${CUDNN_INCLUDE_DIR}, library: ${CUDNN_LIBRARY})")
   mark_as_advanced(CUDNN_ROOT_DIR CUDNN_LIBRARY CUDNN_INCLUDE_DIR)
-endif()
+endif()
14 changes: 14 additions & 0 deletions cmake/GetGitRevisionDescription.cmake
@@ -1,3 +1,17 @@
+# Copyright (c) 2017-present, Facebook, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##############################################################################
 # - Returns a version string from Git
 #
 # These functions force a re-configure on each git commit so that you can
14 changes: 14 additions & 0 deletions cmake/GetGitRevisionDescription.cmake.in
@@ -1,3 +1,17 @@
+# Copyright (c) 2017-present, Facebook, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##############################################################################
 #
 # Internal file for GetGitRevisionDescription.cmake
 #
16 changes: 15 additions & 1 deletion cmake/select_compute_arch.cmake
@@ -1,3 +1,17 @@
+# Copyright (c) 2017-present, Facebook, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##############################################################################
 # Synopsis:
 # CUDA_SELECT_NVCC_ARCH_FLAGS(out_variable [target_CUDA_architectures])
 # -- Selects GPU arch flags for nvcc based on target_CUDA_architectures
@@ -197,4 +211,4 @@ function(CUDA_SELECT_NVCC_ARCH_FLAGS out_variable)
   string(REPLACE ";" " " nvcc_archs_readable "${nvcc_archs_readable}")
   set(${out_variable} ${nvcc_flags} PARENT_SCOPE)
   set(${out_variable}_readable ${nvcc_archs_readable} PARENT_SCOPE)
-endfunction()
+endfunction()
2 changes: 1 addition & 1 deletion conda_recipes/README.md
@@ -17,7 +17,7 @@ nvidia-docker run --rm -i -t tc-cuda9.0-cudnn7.1-ubuntu16.04-devel

 We are ready to build conda package for TC.
 To simplify the build process we ship TC dependencies as conda packages.
-We need to build packages for `llvm-tapir50`, `Halide`, `Caffe2` (optional) and finally `Tensor Comprehensions`.
+We need to build packages for `llvm-trunk`, `Halide`, `Caffe2` (optional) and finally `Tensor Comprehensions`.

 For building each package, we need to specify a `build version`, `build number` and
 `git hash`. This information is used to build each package.
14 changes: 14 additions & 0 deletions conda_recipes/caffe2/build.sh
@@ -1,3 +1,17 @@
+# Copyright (c) 2017-present, Facebook, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##############################################################################
 #!/usr/bin/env bash

 set -e
45 changes: 31 additions & 14 deletions conda_recipes/conda_build_tc.sh
@@ -1,3 +1,17 @@
+# Copyright (c) 2017-present, Facebook, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##############################################################################
 #!/usr/bin/env bash

 set -e
@@ -28,27 +42,31 @@ time conda build -c $ANACONDA_USER --python 3.6 caffe2
 echo "Caffe2 packaged Successfully"

 ###############################################################################
-# LLVM_TAPIR settings
-LLVM_TAPIR_BUILD_VERSION="1.0.0"
-LLVM_TAPIR_BUILD_NUMBER=1
-LLVM_TAPIR_GIT_HASH="1482504e234a65bffc8c54de8de9fc877822345d"
+# LLVM_TRUNK settings
+LLVM_TRUNK_BUILD_VERSION="1.0.0"
+LLVM_TRUNK_BUILD_NUMBER=1
+LLVM_TRUNK_SOURCE_DIR=$(mktemp -d /tmp/d.XXXXXX)
+trap 'rm -rf "${LLVM_TRUNK_SOURCE_DIR}"' EXIT
+
+svn co http://llvm.org/svn/llvm-project/llvm/trunk ${LLVM_TRUNK_SOURCE_DIR}
+svn co http://llvm.org/svn/llvm-project/cfe/trunk ${LLVM_TRUNK_SOURCE_DIR}/tools/clang

-echo "Building llvm-tapir50"
-echo "LLVM_TAPIR_BUILD_VERSION: $LLVM_TAPIR_BUILD_VERSION LLVM_TAPIR_BUILD_NUMBER: ${LLVM_TAPIR_BUILD_NUMBER}"
+echo "Building llvm-trunk"
+echo "LLVM_TRUNK_BUILD_VERSION: $LLVM_TRUNK_BUILD_VERSION LLVM_TRUNK_BUILD_NUMBER: ${LLVM_TRUNK_BUILD_NUMBER}"

-export LLVM_TAPIR_BUILD_VERSION=$LLVM_TAPIR_BUILD_VERSION
-export LLVM_TAPIR_BUILD_NUMBER=$LLVM_TAPIR_BUILD_NUMBER
-export LLVM_TAPIR_GIT_HASH=$LLVM_TAPIR_GIT_HASH
+export LLVM_TRUNK_BUILD_VERSION=$LLVM_TRUNK_BUILD_VERSION
+export LLVM_TRUNK_BUILD_NUMBER=$LLVM_TRUNK_BUILD_NUMBER
+export LLVM_TRUNK_SOURCE_DIR=$LLVM_TRUNK_SOURCE_DIR

-time conda build -c $ANACONDA_USER --python 3.6 llvm-tapir50
+time conda build -c $ANACONDA_USER --python 3.6 llvm-trunk

-echo "llvm-tapir50 packaged Successfully"
+echo "llvm-trunk packaged Successfully"

 ##############################################################################
 # Halide settings
 HALIDE_BUILD_VERSION="1.0.0"
-HALIDE_BUILD_NUMBER=0
-HALIDE_GIT_HASH="35be67b3a3e4c4461f79949109ff35c54cf307de"
+HALIDE_BUILD_NUMBER=1
+HALIDE_GIT_HASH="0b29cacf636852933892bbaa61dd2050c8dcaff2"

 echo "Packaging HALIDE ==> HALIDE_BUILD_VERSION: ${HALIDE_BUILD_VERSION} HALIDE_BUILD_NUMBER: ${HALIDE_BUILD_NUMBER}"

@@ -75,4 +93,3 @@ echo "HALIDE packaged Successfully"
 #time conda build -c $ANACONDA_USER --python 3.6 tensor_comprehensions
 #
 #echo "Tensor Comprehensions packaged Successfully"
-#
14 changes: 14 additions & 0 deletions conda_recipes/halide/build.sh
@@ -1,3 +1,17 @@
+# Copyright (c) 2017-present, Facebook, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##############################################################################
 #!/usr/bin/env bash

 set -e
4 changes: 2 additions & 2 deletions conda_recipes/halide/meta.yaml
@@ -8,10 +8,10 @@ source:

 requirements:
   build:
-    - llvm-tapir50==1.0.0
+    - llvm-trunk==1.0.0
     - cmake
   run:
-    - llvm-tapir50==1.0.0
+    - llvm-trunk==1.0.0
     - cmake

 build: