After clicking on Live, the following error appears:
```
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CUDAExecutionProvider': {'cudnn_conv_algo_search': 'EXHAUSTIVE', 'device_id': '0', 'cudnn_conv1d_pad_to_nc1d': '0', 'has_user_compute_stream': '0', 'gpu_external_alloc': '0', 'enable_cuda_graph': '0', 'gpu_mem_limit': '18446744073709551615', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'do_copy_in_default_stream': '1', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'tunable_op_tuning_enable': '0', 'tunable_op_max_tuning_duration_ms': '0', 'enable_skip_layer_norm_strict_mode': '0'}, 'CPUExecutionProvider': {}}
find model: C:\Users\NeoX/.insightface\models\buffalo_l\1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CUDAExecutionProvider': {'cudnn_conv_algo_search': 'EXHAUSTIVE', 'device_id': '0', 'cudnn_conv1d_pad_to_nc1d': '0', 'has_user_compute_stream': '0', 'gpu_external_alloc': '0', 'enable_cuda_graph': '0', 'gpu_mem_limit': '18446744073709551615', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'do_copy_in_default_stream': '1', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'tunable_op_tuning_enable': '0', 'tunable_op_max_tuning_duration_ms': '0', 'enable_skip_layer_norm_strict_mode': '0'}, 'CPUExecutionProvider': {}}
find model: C:\Users\NeoX/.insightface\models\buffalo_l\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CUDAExecutionProvider': {'cudnn_conv_algo_search': 'EXHAUSTIVE', 'device_id': '0', 'cudnn_conv1d_pad_to_nc1d': '0', 'has_user_compute_stream': '0', 'gpu_external_alloc': '0', 'enable_cuda_graph': '0', 'gpu_mem_limit': '18446744073709551615', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'do_copy_in_default_stream': '1', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'tunable_op_tuning_enable': '0', 'tunable_op_max_tuning_duration_ms': '0', 'enable_skip_layer_norm_strict_mode': '0'}, 'CPUExecutionProvider': {}}
find model: C:\Users\NeoX/.insightface\models\buffalo_l\det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CUDAExecutionProvider': {'cudnn_conv_algo_search': 'EXHAUSTIVE', 'device_id': '0', 'cudnn_conv1d_pad_to_nc1d': '0', 'has_user_compute_stream': '0', 'gpu_external_alloc': '0', 'enable_cuda_graph': '0', 'gpu_mem_limit': '18446744073709551615', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'do_copy_in_default_stream': '1', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'tunable_op_tuning_enable': '0', 'tunable_op_max_tuning_duration_ms': '0', 'enable_skip_layer_norm_strict_mode': '0'}, 'CPUExecutionProvider': {}}
find model: C:\Users\NeoX/.insightface\models\buffalo_l\genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CUDAExecutionProvider': {'cudnn_conv_algo_search': 'EXHAUSTIVE', 'device_id': '0', 'cudnn_conv1d_pad_to_nc1d': '0', 'has_user_compute_stream': '0', 'gpu_external_alloc': '0', 'enable_cuda_graph': '0', 'gpu_mem_limit': '18446744073709551615', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'do_copy_in_default_stream': '1', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'tunable_op_tuning_enable': '0', 'tunable_op_max_tuning_duration_ms': '0', 'enable_skip_layer_norm_strict_mode': '0'}, 'CPUExecutionProvider': {}}
find model: C:\Users\NeoX/.insightface\models\buffalo_l\w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (640, 640)
2024-11-18 03:15:04.8553061 [E:onnxruntime:Default, cuda_call.cc:116 onnxruntime::CudaCall] CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=DESKTOP-TG7I910 ; file=D:\a\_work\1\s\onnxruntime\core\providers\cuda\nn\conv.cc ; line=335 ; expr=cudnnFindConvolutionForwardAlgorithmEx( GetCudnnHandle(context), s_.x_tensor, s_.x_data, s_.w_desc, s_.w_data, s_.conv_desc, s_.y_tensor, s_.y_data, 1, &algo_count, &perf, algo_search_workspace.get(), max_ws_size);
2024-11-18 03:15:04.8688189 [E:onnxruntime:, sequential_executor.cc:514 onnxruntime::ExecuteKernel] Non-zero status code returned while running FusedConv node. Name:'Conv_0' Status Message: CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=DESKTOP-TG7I910 ; file=D:\a\_work\1\s\onnxruntime\core\providers\cuda\nn\conv.cc ; line=335 ; expr=cudnnFindConvolutionForwardAlgorithmEx( GetCudnnHandle(context), s_.x_tensor, s_.x_data, s_.w_desc, s_.w_data, s_.conv_desc, s_.y_tensor, s_.y_data, 1, &algo_count, &perf, algo_search_workspace.get(), max_ws_size);
2024-11-18 03:15:04.8836061 [E:onnxruntime:Default, cuda_call.cc:116 onnxruntime::CudaCall] CUDA failure 700: an illegal memory access was encountered ; GPU=0 ; hostname=DESKTOP-TG7I910 ; file=D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_execution_provider.cc ; line=395 ; expr=cudaStreamSynchronize(static_cast<cudaStream_t>(stream_));
Exception in Tkinter callback
Traceback (most recent call last):
  File "C:\Users\NeoX\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 1948, in __call__
    return self.func(*args)
           ^^^^^^^^^^^^^^^^
  File "G:\Deep-Live-Cam-main\env\Lib\site-packages\customtkinter\windows\widgets\ctk_button.py", line 554, in _clicked
    self._command()
  File "G:\Deep-Live-Cam-main\modules\ui.py", line 336, in <lambda>
    command=lambda: webcam_preview(
                    ^^^^^^^^^^^^^^^
  File "G:\Deep-Live-Cam-main\modules\ui.py", line 750, in webcam_preview
    create_webcam_preview(camera_index)
  File "G:\Deep-Live-Cam-main\modules\ui.py", line 809, in create_webcam_preview
    source_image = get_one_face(cv2.imread(modules.globals.source_path))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Deep-Live-Cam-main\modules\face_analyser.py", line 28, in get_one_face
    face = get_face_analyser().get(frame)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Deep-Live-Cam-main\env\Lib\site-packages\insightface\app\face_analysis.py", line 59, in get
    bboxes, kpss = self.det_model.detect(img,
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Deep-Live-Cam-main\env\Lib\site-packages\insightface\model_zoo\retinaface.py", line 224, in detect
    scores_list, bboxes_list, kpss_list = self.forward(det_img, self.det_thresh)
                                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Deep-Live-Cam-main\env\Lib\site-packages\insightface\model_zoo\retinaface.py", line 152, in forward
    net_outs = self.session.run(self.output_names, {self.input_name : blob})
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\Deep-Live-Cam-main\env\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 220, in run
    return self._sess.run(output_names, input_feed, run_options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running FusedConv node. Name:'Conv_0' Status Message: CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=DESKTOP-TG7I910 ; file=D:\a\_work\1\s\onnxruntime\core\providers\cuda\nn\conv.cc ; line=335 ; expr=cudnnFindConvolutionForwardAlgorithmEx( GetCudnnHandle(context), s_.x_tensor, s_.x_data, s_.w_desc, s_.w_data, s_.conv_desc, s_.y_tensor, s_.y_data, 1, &algo_count, &perf, algo_search_workspace.get(), max_ws_size);
```
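To help narrow this down, the failing call chain in the traceback can be exercised outside Deep-Live-Cam with a few lines of insightface. This is only a minimal repro sketch, assuming the stock buffalo_l model pack and onnxruntime-gpu are installed; `face.jpg` is a placeholder path:

```python
# Minimal repro sketch (not Deep-Live-Cam code): runs the same insightface
# pipeline that fails in the traceback above, with CUDA first and CPU fallback.
import cv2
from insightface.app import FaceAnalysis

app = FaceAnalysis(
    name="buffalo_l",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
app.prepare(ctx_id=0, det_size=(640, 640))  # same det-size as in the log above

img = cv2.imread("face.jpg")  # placeholder: any image containing a face
faces = app.get(img)          # detection is the step that fails on 'Conv_0' in the log
print(f"detected {len(faces)} face(s)")
```

If this standalone script fails with the same CUDNN_STATUS_INTERNAL_ERROR, the problem sits in the onnxruntime-gpu / CUDA / cuDNN setup rather than in Deep-Live-Cam itself; if it only fails when CUDAExecutionProvider is listed, that narrows it further to the GPU provider.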
GPU: Gigabyte GTX 1060 6GB
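If version details are needed, they can be collected with a generic snippet like the one below (an illustration, not project code); it prints the onnxruntime build, the execution providers it exposes, and the driver/memory view from nvidia-smi:

```python
# Generic environment check (sketch, not Deep-Live-Cam code): dumps the
# onnxruntime build, its available execution providers, and the GPU/driver state.
import subprocess

import onnxruntime as ort

print("onnxruntime:", ort.__version__)
print("device:", ort.get_device())                  # "GPU" for the onnxruntime-gpu build
print("providers:", ort.get_available_providers())  # should include CUDAExecutionProvider
subprocess.run(["nvidia-smi"], check=False)         # driver version and free GPU memory
```

CUDNN failure 4 during cudnnFindConvolutionForwardAlgorithmEx is commonly tied to a cuDNN/CUDA version mismatch with the installed onnxruntime-gpu build, or to the card running out of workspace memory, so these numbers are the first thing to check.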