
Torch 2.7 and CUDA 12.8 compile failing, how to fix? X-Pose\models\UniPose\ops>python setup.py build_ext bdist_wheel #45


Open
FurkanGozukara opened this issue Mar 15, 2025 · 5 comments

Comments

@FurkanGozukara

(venv) E:\LP_V17\X-Pose\models\UniPose\ops>python setup.py build_ext bdist_wheel
running build_ext
building 'MultiScaleDeformableAttention' extension
E:\LP_V17\LivePortrait\venv\lib\site-packages\torch\utils\cpp_extension.py:2330: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
warnings.warn(
Emitting ninja build file E:\LP_V17\X-Pose\models\UniPose\ops\build\temp.win-amd64-cpython-310\Release\build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/1] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin\nvcc --generate-dependencies-with-compile --dependency-output E:\LP_V17\X-Pose\models\UniPose\ops\build\temp.win-amd64-cpython-310\Release\LP_V17\X-Pose\models\UniPose\ops\src\cuda\ms_deform_attn_cuda.obj.d -std=c++17 --use-local-env -Xcompiler /MD -Xcompiler /wd4819 -Xcompiler /wd4251 -Xcompiler /wd4244 -Xcompiler /wd4267 -Xcompiler /wd4275 -Xcompiler /wd4018 -Xcompiler /wd4190 -Xcompiler /wd4624 -Xcompiler /wd4067 -Xcompiler /wd4068 -Xcompiler /EHsc -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -DWITH_CUDA -IE:\LP_V17\X-Pose\models\UniPose\ops\src -IE:\LP_V17\LivePortrait\venv\lib\site-packages\torch\include -IE:\LP_V17\LivePortrait\venv\lib\site-packages\torch\include\torch\csrc\api\include "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include" -IE:\LP_V17\LivePortrait\venv\include -IC:\Python310\include -IC:\Python310\Include "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.42.34433\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.42.34433\ATLMFC\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.26100.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.26100.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.26100.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.26100.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.26100.0\cppwinrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" -c E:\LP_V17\X-Pose\models\UniPose\ops\src\cuda\ms_deform_attn_cuda.cu -o E:\LP_V17\X-Pose\models\UniPose\ops\build\temp.win-amd64-cpython-310\Release\LP_V17\X-Pose\models\UniPose\ops\src\cuda\ms_deform_attn_cuda.obj -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=MultiScaleDeformableAttention -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_120,code=sm_120 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86
FAILED: E:/LP_V17/X-Pose/models/UniPose/ops/build/temp.win-amd64-cpython-310/Release/LP_V17/X-Pose/models/UniPose/ops/src/cuda/ms_deform_attn_cuda.obj
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin\nvcc --generate-dependencies-with-compile --dependency-output E:\LP_V17\X-Pose\models\UniPose\ops\build\temp.win-amd64-cpython-310\Release\LP_V17\X-Pose\models\UniPose\ops\src\cuda\ms_deform_attn_cuda.obj.d -std=c++17 --use-local-env -Xcompiler /MD -Xcompiler /wd4819 -Xcompiler /wd4251 -Xcompiler /wd4244 -Xcompiler /wd4267 -Xcompiler /wd4275 -Xcompiler /wd4018 -Xcompiler /wd4190 -Xcompiler /wd4624 -Xcompiler /wd4067 -Xcompiler /wd4068 -Xcompiler /EHsc -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -DWITH_CUDA -IE:\LP_V17\X-Pose\models\UniPose\ops\src -IE:\LP_V17\LivePortrait\venv\lib\site-packages\torch\include -IE:\LP_V17\LivePortrait\venv\lib\site-packages\torch\include\torch\csrc\api\include "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include" -IE:\LP_V17\LivePortrait\venv\include -IC:\Python310\include -IC:\Python310\Include "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.42.34433\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.42.34433\ATLMFC\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.26100.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.26100.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.26100.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.26100.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.26100.0\cppwinrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" -c E:\LP_V17\X-Pose\models\UniPose\ops\src\cuda\ms_deform_attn_cuda.cu -o E:\LP_V17\X-Pose\models\UniPose\ops\build\temp.win-amd64-cpython-310\Release\LP_V17\X-Pose\models\UniPose\ops\src\cuda\ms_deform_attn_cuda.obj -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=MultiScaleDeformableAttention -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_120,code=sm_120 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86
C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.8/include\cuda/std/detail/libcxx/include/cmath(1032): warning #221-D: floating-point value does not fit in required floating-point type
if (__r >= ::nextafter(static_cast<_RealT>(_MaxVal), ((float)(1e+300))))
^

Remark: The warnings can be suppressed with "-diag-suppress <warning-number>"

E:\LP_V17\X-Pose\models\UniPose\ops\src\cuda\ms_deform_attn_cuda.cu(64): error: no suitable conversion function from "const at::DeprecatedTypeProperties" to "c10::ScalarType" exists
[&] { const auto& the_type = value.type(); constexpr const char* at_dispatch_name = "ms_deform_attn_forward_cuda"; at::ScalarType st = ::detail::scalar_type(the_type); ; switch (st) { case at::ScalarType::Double: { do { if constexpr (!at::should_include_kernel_dtype( at_dispatch_name, at::ScalarType::Double)) { if (!(false)) { ::c10::detail::torchCheckFail( func, "E:\LP_V17\X-Pose\models\UniPose\ops\src\cuda\ms_deform_attn_cuda.cu", static_cast<uint32_t>(74), (::c10::detail::torchCheckMsgImpl( "Expected " "false" " to be true, but got false. " "(Could this error message be improved? If so, " "please report an enhancement request to PyTorch.)","dtype '", toString(at::ScalarType::Double), "' not selected for kernel tag ", at_dispatch_name))); }; } } while (0); using scalar_t [[maybe_unused]] = c10::impl::ScalarTypeToCPPTypeTat::ScalarType::Double; return ([&] { ms_deformable_im2col_cuda(at::cuda::getCurrentCUDAStream(), value.data<scalar_t>() + n * im2col_step * per_value_size, spatial_shapes.data<int64_t>(), level_start_index.data<int64_t>(), sampling_loc.data<scalar_t>() + n * im2col_step * per_sample_loc_size, attn_weight.data<scalar_t>() + n * im2col_step_ * per_attn_weight_size, batch_n, spatial_size, num_heads, channels, num_levels, num_query, num_point, columns.data<scalar_t>()); })(); } case at::ScalarType::Float: { do { if constexpr (!at::should_include_kernel_dtype( at_dispatch_name, at::ScalarType::Float)) { if (!(false)) { ::c10::detail::torchCheckFail( func, "E:\LP_V17\X-Pose\models\UniPose\ops\src\cuda\ms_deform_attn_cuda.cu", static_cast<uint32_t>(74), (::c10::detail::torchCheckMsgImpl( "Expected " "false" " to be true, but got false. " "(Could this error message be improved? If so, " "please report an enhancement request to PyTorch.)","dtype '", toString(at::ScalarType::Float), "' not selected for kernel tag ", at_dispatch_name))); }; } } while (0); using scalar_t [[maybe_unused]] = c10::impl::ScalarTypeToCPPTypeTat::ScalarType::Float; return ([&] { ms_deformable_im2col_cuda(at::cuda::getCurrentCUDAStream(), value.data<scalar_t>() + n * im2col_step_ * per_value_size, spatial_shapes.data<int64_t>(), level_start_index.data<int64_t>(), sampling_loc.data<scalar_t>() + n * im2col_step_ * per_sample_loc_size, attn_weight.data<scalar_t>() + n * im2col_step_ * per_attn_weight_size, batch_n, spatial_size, num_heads, channels, num_levels, num_query, num_point, columns.data<scalar_t>()); })(); } default: if (!(false)) { ::c10::detail::torchCheckFail( func, "E:\LP_V17\X-Pose\models\UniPose\ops\src\cuda\ms_deform_attn_cuda.cu", static_cast<uint32_t>(74), (::c10::detail::torchCheckMsgImpl( "Expected " "false" " to be true, but got false. " "(Could this error message be improved? If so, " "please report an enhancement request to PyTorch.)",'"', at_dispatch_name, "" not implemented for '", toString(_st), "'"))); }; } }();

                                            ^

E:\LP_V17\X-Pose\models\UniPose\ops\src\cuda\ms_deform_attn_cuda.cu(134): error: no suitable conversion function from "const at::DeprecatedTypeProperties" to "c10::ScalarType" exists
[&] { const auto& the_type = value.type(); constexpr const char* at_dispatch_name = "ms_deform_attn_backward_cuda"; at::ScalarType st = ::detail::scalar_type(the_type); ; switch (st) { case at::ScalarType::Double: { do { if constexpr (!at::should_include_kernel_dtype( at_dispatch_name, at::ScalarType::Double)) { if (!(false)) { ::c10::detail::torchCheckFail( func, "E:\LP_V17\X-Pose\models\UniPose\ops\src\cuda\ms_deform_attn_cuda.cu", static_cast<uint32_t>(147), (::c10::detail::torchCheckMsgImpl( "Expected " "false" " to be true, but got false. " "(Could this error message be improved? If so, " "please report an enhancement request to PyTorch.)","dtype '", toString(at::ScalarType::Double), "' not selected for kernel tag ", at_dispatch_name))); }; } } while (0); using scalar_t [[maybe_unused]] = c10::impl::ScalarTypeToCPPTypeTat::ScalarType::Double; return ([&] { ms_deformable_col2im_cuda(at::cuda::getCurrentCUDAStream(), grad_output_g.data<scalar_t>(), value.data<scalar_t>() + n * im2col_step * per_value_size, spatial_shapes.data<int64_t>(), level_start_index.data<int64_t>(), sampling_loc.data<scalar_t>() + n * im2col_step * per_sample_loc_size, attn_weight.data<scalar_t>() + n * im2col_step_ * per_attn_weight_size, batch_n, spatial_size, num_heads, channels, num_levels, num_query, num_point, grad_value.data<scalar_t>() + n * im2col_step_ * per_value_size, grad_sampling_loc.data<scalar_t>() + n * im2col_step_ * per_sample_loc_size, grad_attn_weight.data<scalar_t>() + n * im2col_step_ * per_attn_weight_size); })(); } case at::ScalarType::Float: { do { if constexpr (!at::should_include_kernel_dtype( at_dispatch_name, at::ScalarType::Float)) { if (!(false)) { ::c10::detail::torchCheckFail( func, "E:\LP_V17\X-Pose\models\UniPose\ops\src\cuda\ms_deform_attn_cuda.cu", static_cast<uint32_t>(147), (::c10::detail::torchCheckMsgImpl( "Expected " "false" " to be true, but got false. " "(Could this error message be improved? If so, " "please report an enhancement request to PyTorch.)","dtype '", toString(at::ScalarType::Float), "' not selected for kernel tag ", at_dispatch_name))); }; } } while (0); using scalar_t [[maybe_unused]] = c10::impl::ScalarTypeToCPPTypeTat::ScalarType::Float; return ([&] { ms_deformable_col2im_cuda(at::cuda::getCurrentCUDAStream(), grad_output_g.data<scalar_t>(), value.data<scalar_t>() + n * im2col_step_ * per_value_size, spatial_shapes.data<int64_t>(), level_start_index.data<int64_t>(), sampling_loc.data<scalar_t>() + n * im2col_step_ * per_sample_loc_size, attn_weight.data<scalar_t>() + n * im2col_step_ * per_attn_weight_size, batch_n, spatial_size, num_heads, channels, num_levels, num_query, num_point, grad_value.data<scalar_t>() + n * im2col_step_ * per_value_size, grad_sampling_loc.data<scalar_t>() + n * im2col_step_ * per_sample_loc_size, grad_attn_weight.data<scalar_t>() + n * im2col_step_ * per_attn_weight_size); })(); } default: if (!(false)) { ::c10::detail::torchCheckFail( func, "E:\LP_V17\X-Pose\models\UniPose\ops\src\cuda\ms_deform_attn_cuda.cu", static_cast<uint32_t>(147), (::c10::detail::torchCheckMsgImpl( "Expected " "false" " to be true, but got false. " "(Could this error message be improved? If so, " "please report an enhancement request to PyTorch.)",'"', at_dispatch_name, "" not implemented for '", toString(_st), "'"))); }; } }();

                                             ^

2 errors detected in the compilation of "E:/LP_V17/X-Pose/models/UniPose/ops/src/cuda/ms_deform_attn_cuda.cu".
ms_deform_attn_cuda.cu
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "E:\LP_V17\LivePortrait\venv\lib\site-packages\torch\utils\cpp_extension.py", line 2480, in _run_ninja_build
subprocess.run(
File "C:\Python310\lib\subprocess.py", line 526, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "E:\LP_V17\X-Pose\models\UniPose\ops\setup.py", line 64, in
setup(
File "E:\LP_V17\LivePortrait\venv\lib\site-packages\setuptools_init_.py", line 87, in setup
return distutils.core.setup(**attrs)
File "E:\LP_V17\LivePortrait\venv\lib\site-packages\setuptools_distutils\core.py", line 185, in setup
return run_commands(dist)
File "E:\LP_V17\LivePortrait\venv\lib\site-packages\setuptools_distutils\core.py", line 201, in run_commands
dist.run_commands()
File "E:\LP_V17\LivePortrait\venv\lib\site-packages\setuptools_distutils\dist.py", line 968, in run_commands
self.run_command(cmd)
File "E:\LP_V17\LivePortrait\venv\lib\site-packages\setuptools\dist.py", line 1217, in run_command
super().run_command(command)
File "E:\LP_V17\LivePortrait\venv\lib\site-packages\setuptools_distutils\dist.py", line 987, in run_command
cmd_obj.run()
File "E:\LP_V17\LivePortrait\venv\lib\site-packages\setuptools\command\build_ext.py", line 84, in run
_build_ext.run(self)
File "E:\LP_V17\LivePortrait\venv\lib\site-packages\setuptools_distutils\command\build_ext.py", line 346, in run
self.build_extensions()
File "E:\LP_V17\LivePortrait\venv\lib\site-packages\torch\utils\cpp_extension.py", line 1007, in build_extensions
build_ext.build_extensions(self)
File "E:\LP_V17\LivePortrait\venv\lib\site-packages\setuptools_distutils\command\build_ext.py", line 466, in build_extensions
self._build_extensions_serial()
File "E:\LP_V17\LivePortrait\venv\lib\site-packages\setuptools_distutils\command\build_ext.py", line 492, in _build_extensions_serial
self.build_extension(ext)
File "E:\LP_V17\LivePortrait\venv\lib\site-packages\setuptools\command\build_ext.py", line 246, in build_extension
_build_ext.build_extension(self, ext)
File "E:\LP_V17\LivePortrait\venv\lib\site-packages\setuptools_distutils\command\build_ext.py", line 547, in build_extension
objects = self.compiler.compile(
File "E:\LP_V17\LivePortrait\venv\lib\site-packages\torch\utils\cpp_extension.py", line 975, in win_wrap_ninja_compile
_write_ninja_file_and_compile_objects(
File "E:\LP_V17\LivePortrait\venv\lib\site-packages\torch\utils\cpp_extension.py", line 2133, in _write_ninja_file_and_compile_objects
_run_ninja_build(
File "E:\LP_V17\LivePortrait\venv\lib\site-packages\torch\utils\cpp_extension.py", line 2496, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension

(venv) E:\LP_V17\X-Pose\models\UniPose\ops>
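For context: the two nvcc errors above ("no suitable conversion function from 'const at::DeprecatedTypeProperties' to 'c10::ScalarType' exists") are what you get when the old Deformable-DETR-style dispatch code in ms_deform_attn_cuda.cu is compiled against a recent PyTorch. The expanded macro in the log shows the kernel being dispatched on value.type(), which returns the deprecated at::DeprecatedTypeProperties; current AT_DISPATCH_* macros expect a c10::ScalarType. Below is a minimal sketch of the kind of source change that usually resolves this class of error, assuming the X-Pose ops follow the original Deformable-DETR layout (argument names are copied from the expanded macro in the log; the backward dispatch around line 134 needs the same treatment):

// ms_deform_attn_cuda.cu, forward dispatch (sketch only, not the exact upstream source).
// Tensor::type() returns at::DeprecatedTypeProperties, which current AT_DISPATCH_*
// macros no longer accept; Tensor::scalar_type() returns the c10::ScalarType they
// expect. Tensor::data<T>() is likewise deprecated in favour of Tensor::data_ptr<T>().
AT_DISPATCH_FLOATING_TYPES(value.scalar_type(), "ms_deform_attn_forward_cuda", ([&] {
    ms_deformable_im2col_cuda(at::cuda::getCurrentCUDAStream(),
        value.data_ptr<scalar_t>() + n * im2col_step_ * per_value_size,
        spatial_shapes.data_ptr<int64_t>(),
        level_start_index.data_ptr<int64_t>(),
        sampling_loc.data_ptr<scalar_t>() + n * im2col_step_ * per_sample_loc_size,
        attn_weight.data_ptr<scalar_t>() + n * im2col_step_ * per_attn_weight_size,
        batch_n, spatial_size, num_heads, channels,
        num_levels, num_query, num_point,
        columns.data_ptr<scalar_t>());
}));

Setting TORCH_CUDA_ARCH_LIST (for example to "8.6;12.0", matching the gencode flags in the command line above) only silences the arch warning and trims the build targets; the DeprecatedTypeProperties errors are a source-level incompatibility, not an arch-flag problem.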

@FurkanGozukara
Author

pip freeze

(venv) E:\LP_V17\X-Pose\models\UniPose\ops>pip freeze
aiofiles==23.2.1
albucore==0.0.24
albumentations==1.4.10
annotated-types==0.7.0
anyio==4.8.0
certifi==2025.1.31
charset-normalizer==3.4.1
click==8.1.8
colorama==0.4.6
coloredlogs==15.0.1
contourpy==1.3.1
cycler==0.12.1
deepspeed @ https://files.pythonhosted.org/packages/3d/65/1a6394f5d6dee851e9ea59e385f6d6428e3bfe36f83c06e0336e14dcfd11/deepspeed-0.16.4-cp310-cp310-win_amd64.whl#sha256=09e0b4f10c22b8054f340d90077a8d382092491fcf90fe1779c4e6180314e234
docstring_parser==0.16
einops==0.8.1
exceptiongroup==1.2.2
fastapi==0.115.11
ffmpeg-python==0.2.0
ffmpy==0.5.0
filelock==3.16.1
flash-attn @ https://huggingface.co/MonsterMMORPG/SECourses_Premium_Flash_Attention/resolve/main/flash_attn-2.7.4.post1-cp310-cp310-win_amd64.whl#sha256=5ec4ae0106ad9c50713ace5690b6e2c7c9efb7e91629e779e3f2459693bd1167
flatbuffers==25.2.10
fonttools==4.56.0
fsspec==2024.10.0
future==1.0.0
gradio==5.1.0
gradio_client==1.4.0
h11==0.14.0
hjson==3.1.0
httpcore==1.0.7
httpx==0.28.1
huggingface-hub==0.29.3
humanfriendly==10.0
idna==3.10
imageio==2.34.2
imageio-ffmpeg==0.5.1
Jinja2==3.1.4
joblib==1.4.2
kiwisolver==1.4.8
lazy_loader==0.4
lmdb==1.4.1
markdown-it-py==3.0.0
MarkupSafe==2.1.5
matplotlib==3.9.0
mdurl==0.1.2
mpmath==1.3.0
msgpack==1.1.0
MultiScaleDeformableAttention @ https://huggingface.co/MonsterMMORPG/Generative-AI/resolve/main/MultiScaleDeformableAttention-1.0-cp310-cp310-win_amd64.whl#sha256=cbe970f6b39d6b2545e9bf4d2836256332f67bd3b2de3a07b55d8c408c562c5a
networkx==3.4.2
ninja==1.11.1.3
numpy==1.26.4
nvidia-ml-py==12.570.86
onnx==1.16.1
onnxruntime-gpu==1.21.0
opencv-python==4.10.0.84
opencv-python-headless==4.11.0.86
orjson==3.10.15
packaging==24.2
pandas==2.2.3
pillow==10.4.0
protobuf==6.30.1
psutil==7.0.0
py-cpuinfo==9.0.0
pydantic==2.10.6
pydantic_core==2.27.2
pydub==0.25.1
Pygments==2.19.1
pykalman==0.9.7
pyparsing==3.2.1
pyreadline3==3.5.4
python-dateutil==2.9.0.post0
python-multipart==0.0.20
pytz==2025.1
PyYAML==6.0.1
regex==2024.11.6
requests==2.32.3
rich==13.7.1
ruff==0.11.0
safetensors==0.5.3
scikit-image==0.24.0
scikit-learn==1.6.1
scipy==1.13.1
semantic-version==2.10.0
shellingham==1.5.4
shtab==1.7.1
simsimd==6.2.1
six==1.17.0
sniffio==1.3.1
starlette==0.46.1
stringzilla==3.12.3
sympy==1.13.3
threadpoolctl==3.6.0
tifffile==2025.3.13
tokenizers==0.15.2
tomlkit==0.12.0
torch==2.8.0.dev20250314+cu128
torchao==0.9.0
torchaudio==2.6.0.dev20250315+cu128
torchvision==0.22.0.dev20250315+cu128
tqdm==4.66.4
transformers==4.38.0
triton-windows==3.3.0a0.post14
typer==0.15.2
typing_extensions==4.12.2
tyro==0.8.5
tzdata==2025.1
urllib3==2.3.0
uvicorn==0.34.0
websockets==12.0

(venv) E:\LP_V17\X-Pose\models\UniPose\ops>

@FurkanGozukara
Author

Any chance you can help? Thank you so much.

@yangjie-cv @ailingzengzzz @guoqincode

@JustAnalyze

Hi Furkan, have you found a way to fix this? I'm having this error:
No module named 'MultiScaleDeformableAttention'

I just installed all of the dependencies from requirements.txt and used conda to create the environment.

@FurkanGozukara
Author

Hi Furkan, have you found a way to fix this? I'm having this error: No module named 'MultiScaleDeformableAttention'

I just installed all of the dependencies from requirements.txt and used conda to create the environment.

Nope, not for this one.

But I compiled it for LivePortrait itself and shared it with followers: https://www.patreon.com/posts/119254105

So my LivePortrait now works with the RTX 5000 series for both human and animal live portrait.

I assume that build is different from what is on this site.

@Owlsun321

dirty env
