
Just importing paddle prints a wall of log output #439

Open · zzzzzz0407 opened this issue Jan 15, 2024 · 1 comment
@zzzzzz0407

As shown below. It doesn't affect usage, but it is strange, and I hope it can be fixed:
In [1]: import paddle
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0115 16:42:38.097568 1452196 dynamic_loader.cc:194] Set paddle lib path : /home/zhangrufeng/anaconda3/envs/LuckyPaddle/lib/python3.8/site-packages/paddle/libs
I0115 16:42:38.219532 1452196 init.cc:96] Before Parse: argc is 2, Init commandline: dummy --tryfromenv=use_mkldnn,enable_exit_when_partial_worker,einsum_opt,eager_delete_tensor_gb,graph_load_in_parallel,allocator_strategy,enable_all2all_use_fp16,cublaslt_exhaustive_search_times,gpu_allocator_retry_time,use_cuda_managed_memory,check_kernel_launch,static_executor_perfstat_filepath,cudnn_exhaustive_search,gpu_memory_limit_mb,add_dependency_for_communication_op,new_executor_use_cuda_graph,reader_queue_speed_test_mode,gpugraph_dedup_pull_push_mode,gpugraph_enable_segment_merge_grads,convert_all_blocks,low_precision_op_list,initial_cpu_memory_in_mb,eager_delete_scope,gpugraph_load_node_list_into_hbm,check_nan_inf,use_shm_cache,tracer_mkldnn_ops_off,embedding_deterministic,free_idle_chunk,gpugraph_enable_gpu_direct_access,gemm_use_half_precision_compute_type,cudnn_batchnorm_spatial_persistent,dygraph_debug,host_trace_level,gpugraph_storage_mode,fuse_parameter_groups_size,max_inplace_grad_add,use_autotune,communicator_max_merge_var_num,gpugraph_enable_hbm_table_collision_stat,conv_workspace_size_limit,new_executor_use_inplace,local_exe_sub_scope_limit,fuse_parameter_memory_size,selected_gpus,rpc_send_thread_num,tensor_operants_mode,graph_metapath_split_opt,init_allocated_mem,dist_threadpool_size,new_executor_static_build,graph_get_neighbor_id,enable_cublas_tensor_op_math,enable_gpu_memory_usage_log,multiple_of_cupti_buffer_size,reallocate_gpu_memory_in_mb,use_stream_safe_cuda_allocator,rocksdb_path,use_system_allocator,jit_engine_type,gpugraph_debug_gpu_memory,get_host_by_name_time,enable_opt_get_features,use_fast_math,gpugraph_hbm_table_load_factor,gpugraph_sparse_table_storage_mode,trt_ibuilder_cache,run_kp_kernel,executor_log_deps_every_microseconds,communicator_is_sgd_optimizer,enable_auto_detect_gpu_topo,tracer_profile_fname,search_cache_max_number,gpugraph_merge_grads_segment_size,paddle_num_threads,conv2d_disable_cudnn,fraction_of_cuda_pinned_memory_to_use,new_execu
tor_sequential_run,allreduce_record_one_event,communicator_send_queue_size,check_nan_inf_level,enable_unused_var_check,enable_rpc_profiler,enable_auto_rdma_trans,fleet_executor_with_standalone,npu_storage_format,cpu_deterministic,free_when_no_cache_hit,fraction_of_cpu_memory_to_use,initial_gpu_memory_in_mb,cudnn_deterministic,apply_pass_to_program,sort_sum_gradient,use_pinned_memory,call_stack_level,cudnn_exhaustive_search_times,new_executor_use_local_scope,use_virtual_memory_auto_growth,pe_profile_fname,enable_tracker_all2all,nccl_blocking_wait,enable_sparse_inner_gather,new_executor_serial_run,memory_fraction_of_eager_deletion,enable_api_kernel_fallback,enable_gpu_memory_usage_log_mb,set_to_1d,print_sub_graph_dir,prim_enabled,inner_op_parallelism,tracer_mkldnn_ops_on,cache_inference_while_scope,sync_nccl_allreduce,new_executor_log_memory_stats,fast_eager_deletion_mode,gpugraph_slot_feasign_max_num,auto_growth_chunk_size_in_mb,fraction_of_gpu_memory_to_use,benchmark
I0115 16:42:38.219650 1452196 init.cc:104] After Parse: argc is 1
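The `I0115 …` lines above are emitted by glog from Paddle's native code, before Python-level logging is involved. A common workaround for glog-based libraries is to raise glog's minimum log level through the standard `GLOG_minloglevel` environment variable before the first import. This is a hedged sketch of that workaround, not an official Paddle fix; whether it silences every line here is untested:

```python
import os

# glog reads GLOG_minloglevel when the native library is loaded, so it must
# be set BEFORE the first `import paddle` in this process.
# glog levels: 0 = INFO, 1 = WARNING, 2 = ERROR, 3 = FATAL.
os.environ["GLOG_minloglevel"] = "2"  # keep only ERROR and FATAL

# import paddle  # uncomment in an environment where paddle is installed
```

Setting the variable in the shell instead (`GLOG_minloglevel=2 python your_script.py`) achieves the same thing without touching the code.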

@warrentdrew
Collaborator

Hello, this looks like an environment/configuration issue with the Paddle framework itself. We suggest opening an issue in the Paddle framework repo:
https://github.com/PaddlePaddle/Paddle/issues
