Setting tensorflow source path in Windows #31
Comments
I do not use Windows; that's why it states:
replace `PYTHON_EXECUTABLE` with the binary to your Python path
Please report back whether that works or not.
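As a minimal sketch (the interpreter location below is just a placeholder), passing it on the CMake command line could look like this; it must point at the Python that has TensorFlow installed:

```
# Placeholder path: substitute the interpreter of the environment where TensorFlow is installed
cmake .. -DPYTHON_EXECUTABLE="C:/Users/<you>/Anaconda3/envs/tf/python.exe"
```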
I also wanted to try it on Windows 10 with Visual Studio 2017; however, I'm facing a similar issue. First I built TensorFlow from source to have the C and C++ APIs using the following steps:
At this point I also checked that the TensorFlow Python API worked by running a small Python script creating a tensor. Then I wanted to try the tensorflow-cmake/inference example, so I first exported the model by running:
However, when trying to configure the project with CMake I get this error:
Do you have some ideas on how to adjust FindTensorflow.cmake to make it work on Windows? Thanks in advance!
Have you tried
Python is required to find the TF library, as this might be easier than typing the paths by hand. CMake really tries to run
and
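A sketch of the kind of Python queries involved; `tf.sysconfig.get_include()` and `tf.sysconfig.get_lib()` are standard TensorFlow calls, though whether FindTensorflow.cmake invokes exactly these is an assumption here:

```
# Both commands must succeed in the interpreter CMake uses,
# otherwise the TensorFlow include and library paths cannot be detected.
python -c "import tensorflow as tf; print(tf.sysconfig.get_include())"
python -c "import tensorflow as tf; print(tf.sysconfig.get_lib())"
```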
@PatWie I tried it on Windows 10 with a conda virtualenv with
Then I built it with
Do you have any idea how to solve this problem?
CUDA must be installed on your machine. I have to admit I never did CUDA stuff on Windows. Did you follow something like
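Independent of the exact guide, a quick sanity check that the CUDA toolkit is installed and visible on PATH (just a generic check, nothing specific to this project):

```
# Both should print version/location information if the CUDA toolkit is set up correctly
nvcc --version
where nvcc
```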
@PatWie Thanks a lot for your reply!
Please refer to the TensorFlow issue: You should locate the file:

Option A
The file
I already mentioned the exact same issue there. One workaround is to add the line `include_directories(SYSTEM "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0")` below

Option B
The file
`include_directories(SYSTEM "my/symbolic/link")`
Or you just copy the missing files.

Please report back so that I can try to hack a workaround for Windows.
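As a sketch for Option B, the symbolic link can be created on Windows with `mklink` from an elevated `cmd.exe`; both paths below are placeholders and need to be adjusted to wherever the missing files actually live:

```
rem link name first, then the target directory that provides the missing files
mklink /D "C:\my\symbolic\link" "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0"
```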
@PatWie I also tried using
However, CMake fails with the following output:
Note that it says it cannot find libtensorflow_cc.so within TENSORFLOW_BUILD_DIR; however, I checked and the libraries are there.
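For completeness, a sketch of how the build directory would be passed; the path is only an example and must point at the directory that actually contains the bazel output. Whether this project reads the value as a CMake cache variable (as shown) or from the environment is an assumption here:

```
# Example path only; TENSORFLOW_BUILD_DIR must contain the libtensorflow_cc library built by bazel
cmake .. -DTENSORFLOW_BUILD_DIR="C:/tensorflow/bazel-bin/tensorflow"
```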
@PatWie I really appreciate your patience.
I found a solution to a similar problem here, but I don't know how to adapt it to this situation. Could you help me with this?
My understanding of using TensorFlow in Visual Studio / Windows is that you need a .dll, not a .so. I've been able to use the libtensorflow.dll file from Google, in conjunction with the inference.c / .cpp examples in this repository, successfully on Windows 10 + Visual Studio + CUDA. The steps are a little detailed, so I will cover them soon in a different post.
@iamsurya did you finish your different post?
I haven't. I'm at a conference this week. I'm going to try and summarize the broad concepts here and link to them if you're in a hurry to try this. This is only for Visual Studio on Windows 10. Steps:
I know all you're using is his C example, but I'm unaware of any easier way to run inference.
@iamsurya do you think extending FindTensorflow.cmake to support Windows is doable or at least favorable? Especially steps 3 and 4 seem to be the Windows way, without any CMake involved at all.
It depends. Compiling from scratch has benefits for speed-up: you can use specific optimizations for the target computer or GPU. For example, you can use a specific version of the CUDA or cuDNN library. However, the drawback is that you have to set up a build environment. This is usually not too complicated on Linux. On Windows, it's often time-consuming due to how the DLLs (TensorFlow, CUDA, cuDNN, Intel) work. (Some coders prefer compiling for Windows using a Linux host!) The benefits of using these optimizations are not worth the effort needed to build from scratch. I'm okay installing the required versions of CUDA and cuDNN and benefiting from the standard speed-up a GPU gives, as that's the biggest one. So in my opinion, if you're just testing (and not deploying), it might be easier to just use the Windows DLLs.
@PatWie it looks like it is possible to create a DLL using bazel, but it still names it .so (and not .dll). See this tensorflow issue: migueldeicaza/TensorFlowSharp#389 (comment). Do you think you could generate a .dll for me, since you have the build system for it? Just a plain vanilla GPU version. If yes, I'll confirm whether your cmake system works and we can get back to supporting this.
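For context, a sketch of the bazel invocation that produces that library; the target `//tensorflow:libtensorflow.so` is the standard one and keeps its .so suffix even on Windows, as the linked issue notes, while the exact config flags for a GPU build are an assumption here:

```
# Plain GPU build of the TensorFlow C shared library; on Windows the output is a DLL
# despite the .so name of the bazel target.
bazel build --config=opt --config=cuda //tensorflow:libtensorflow.so
```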
Meanwhile, I've finished the guide on how to use Google's libtensorflow pre-compiled DLLs and run inference, which relies heavily on the inference code from this repository.
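For reference, the pre-compiled C library ships as a zip archive; the exact file name and version below follow the published naming scheme but are an assumption here, so check the official "Install TensorFlow for C" page for the current link:

```
# File name and version are assumptions; pick the CPU or GPU release you actually need
curl -LO https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-gpu-windows-x86_64-1.12.0.zip
```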
I use Anaconda and set up a virtual environment with Python 3.6.3 and TensorFlow 1.12 with GPU.
Can you tell me where to set all the paths?