
Using Torch.jl without a GPU? #20

Open
DilumAluthge opened this issue May 12, 2020 · 7 comments
Labels: good first issue (Good for newcomers) · help wanted (Extra attention is needed)

Comments

@DilumAluthge (Member)

Would it be possible to add support for using Torch.jl on a machine without a GPU?

@DilumAluthge (Member, Author)

For example, this is what I currently get when I try to run import Torch on a machine without a GPU:

julia> import Torch
[ Info: Precompiling Torch [6a2ea274-3061-11ea-0d63-ff850051a295]
ERROR: LoadError: LoadError: could not load library "libdoeye_caml"
dlopen(libdoeye_caml.dylib, 1): image not found
Stacktrace:
 [1] macro expansion at /Users/dilum/.julia/packages/Torch/Q8Y45/src/error.jl:12 [inlined]
 [2] at_grad_set_enabled(::Int64) at /Users/dilum/.julia/packages/Torch/Q8Y45/src/wrap/libdoeye_caml_generated.jl:70
 [3] top-level scope at /Users/dilum/.julia/packages/Torch/Q8Y45/src/tensor.jl:6
 [4] include(::Function, ::Module, ::String) at ./Base.jl:380
 [5] include at ./Base.jl:368 [inlined]
 [6] include(::String) at /Users/dilum/.julia/packages/Torch/Q8Y45/src/Torch.jl:1
 [7] top-level scope at /Users/dilum/.julia/packages/Torch/Q8Y45/src/Torch.jl:25
 [8] include(::Function, ::Module, ::String) at ./Base.jl:380
 [9] include(::Module, ::String) at ./Base.jl:368
 [10] top-level scope at none:2
 [11] eval at ./boot.jl:331 [inlined]
 [12] eval(::Expr) at ./client.jl:467
 [13] top-level scope at ./none:3
in expression starting at /Users/dilum/.julia/packages/Torch/Q8Y45/src/tensor.jl:6
in expression starting at /Users/dilum/.julia/packages/Torch/Q8Y45/src/Torch.jl:25
ERROR: Failed to precompile Torch [6a2ea274-3061-11ea-0d63-ff850051a295] to /Users/dilum/.julia/compiled/v1.6/Torch/2cR1S_xGAhl.ji.
Stacktrace:
 [1] error(::String) at ./error.jl:33
 [2] compilecache(::Base.PkgId, ::String) at ./loading.jl:1290
 [3] _require(::Base.PkgId) at ./loading.jl:1030
 [4] require(::Base.PkgId) at ./loading.jl:928
 [5] require(::Module, ::Symbol) at ./loading.jl:923

@DhairyaLGandhi (Member)

Yes, this is due to using the GPU binaries from torch and having the wrapper rely on CUDA being available. We just need to build a Torch_cpu artifact to do this correctly.

@Sundaravelpandian (Jul 2, 2020)

@DhairyaLGandhi Is the following error also linked to the absence of GPU?

julia> using Torch
[ Info: Precompiling Torch [6a2ea274-3061-11ea-0d63-ff850051a295]
ERROR: InitError: could not load library "/home/ssing/.julia/artifacts/d6ce2ca09ab00964151aaeae71179deb8f9800d1/lib/libtorch.so"

/home/ssing/.julia/artifacts/d6ce2ca09ab00964151aaeae71179deb8f9800d1/lib/libtorch.so: undefined symbol: _ZN3c1016C10FlagsRegistryB5cxx11Ev
Stacktrace:
[1] dlopen(::String, ::UInt32; throw_error::Bool) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.4/Libdl/src/Libdl.jl:109
[2] dlopen at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.4/Libdl/src/Libdl.jl:109 [inlined] (repeats 2 times)
[3] init() at /home/ssing/.julia/packages/Torch_jll/sFQc0/src/wrappers/x86_64-linux-gnu-cxx11.jl:64
[4] _include_from_serialized(::String, ::Array{Any,1}) at ./loading.jl:697
[5] _require_search_from_serialized(::Base.PkgId, ::String) at ./loading.jl:781
[6] _tryrequire_from_serialized(::Base.PkgId, ::UInt64, ::Nothing) at ./loading.jl:712
[7] _require_from_serialized(::String) at ./loading.jl:743
[8] _require(::Base.PkgId) at ./loading.jl:1039
[9] require(::Base.PkgId) at ./loading.jl:927
[10] require(::Module, ::Symbol) at ./loading.jl:922
during initialization of module Torch_jll
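The `B5cxx11` in the undefined symbol is GCC's libstdc++ dual-ABI tag: the library that defines `C10FlagsRegistry` and the `libtorch.so` that looks it up were built with different settings of `_GLIBCXX_USE_CXX11_ABI`, so the mangled names don't match. A minimal sketch of the mechanism (hypothetical `greet` function; assumes `g++` and `nm` are installed):

```shell
# Build the same function under both libstdc++ string ABIs and compare symbols.
cat > abi_demo.cpp <<'EOF'
#include <string>
std::string greet() { return "hello"; }
EOF

g++ -fPIC -shared -D_GLIBCXX_USE_CXX11_ABI=0 abi_demo.cpp -o libold.so
g++ -fPIC -shared -D_GLIBCXX_USE_CXX11_ABI=1 abi_demo.cpp -o libnew.so

# The new-ABI build tags the symbol with B5cxx11; the old one does not,
# so code compiled against one ABI cannot resolve the other's symbol.
nm -D --defined-only libold.so | grep greet
nm -D --defined-only libnew.so | grep greet
```

Running `nm -D` with a grep for `C10FlagsRegistry` against the artifact's `libtorch.so` and its dependencies would show which side of the ABI split each was actually built on.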

@freddycct
I mainly develop on my laptop, which doesn't have a CUDA GPU, so I can't import this package unless a CPU version is compiled.

@ordicker
I tried to compile a CPU-only version of PyTorch and Torch.jl using the wizard.
I downloaded this PyTorch build: https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-1.5.1%2Bcpu.zip

I followed this "script": https://github.com/JuliaPackaging/Yggdrasil/blob/master/T/Torch/build_tarballs.jl

I ran this line:
cmake -DCMAKE_PREFIX_PATH=$prefix -DTorch_DIR=$prefix/share/cmake/Torch ..

but got this error:

Re-run cmake, no build system arguments
CMake Error at /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:137 (message):
  Could NOT find CUDNN (missing: CUDNN_INCLUDE_DIR CUDNN_LIBRARIES)
Call Stack (most recent call first):
  /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:378 (_FPHSA_FAILURE_MESSAGE)
  FindCUDNN.cmake:6 (find_package_handle_standard_args)
  CMakeLists.txt:6 (find_package)

@DhairyaLGandhi (Member)

You might need to make it so that the CUDA deps don't leak into the build process. The CMakeLists and the CUDA-related packages would need to be put behind a boolean GPU flag.
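A rough sketch of what that guard could look like in the wrapper's CMakeLists (the `USE_GPU` option name and the `doeye_caml` target are assumptions for illustration; the actual file may differ):

```cmake
# Hypothetical guard: only look for CUDA/cuDNN when a GPU build is requested.
option(USE_GPU "Build against the CUDA-enabled libtorch" OFF)

if(USE_GPU)
  find_package(CUDA REQUIRED)
  find_package(CUDNN REQUIRED)
  target_compile_definitions(doeye_caml PRIVATE WITH_CUDA)
endif()
```

With something like this in place, a CPU-only configure would be `cmake -DUSE_GPU=OFF -DCMAKE_PREFIX_PATH=$prefix -DTorch_DIR=$prefix/share/cmake/Torch ..`, and the `FindCUDNN.cmake` error above would never be reached.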

@ordicker
I get that, but I don't understand which line pulls in the CUDA deps.
