Feature Description
It seems to me that there is no advantage to using a GPU with AutoKeras. For example, on a low-end PC the step duration during fit is 4 ms, while on a GPU costing thousands of dollars (an NVIDIA T4) the step duration is 3 ms and GPU utilization sits at only 20-25%. It would be helpful if there were a profiler that could tune parameters so the GPU is actually used.
I also tried the code below, but it did not improve performance.
Code Example
import tensorflow as tf
import autokeras as ak

###### My special code here ##############
# Let TensorFlow allocate GPU memory on demand instead of
# grabbing it all up front (TF1-style compat API).
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.compat.v1.Session(config=config)
##########################################
reg = ak.StructuredDataRegressor(project_name=modelName)
##########################################
# Pin the search and training to the first GPU.
with tf.device('/gpu:0'):
    reg.fit(X_train, y_train, validation_split=vSplit)
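Until such a profiler exists, the TensorFlow Profiler can at least show where the time goes. Below is a minimal sketch assuming the reg, X_train, y_train, and vSplit objects from the code above; the log directory, the profiled step range, and the batch size of 1024 are illustrative values, not recommendations. AutoKeras documents that extra fit arguments are forwarded to the underlying Keras fit, so callbacks and batch_size should pass through; a larger batch is a common first experiment when individual steps take only a few milliseconds.

import tensorflow as tf

# Capture a TensorFlow Profiler trace for training steps 10-20
# (profile_batch accepts a single step or a (start, stop) range).
tb_profiler = tf.keras.callbacks.TensorBoard(
    log_dir="logs/ak_profile",  # hypothetical directory
    profile_batch=(10, 20),
)

# Larger batches tend to raise GPU utilization when per-step
# time is this short; 1024 is just an illustrative value.
reg.fit(
    X_train,
    y_train,
    validation_split=vSplit,
    batch_size=1024,
    callbacks=[tb_profiler],
)

The trace can then be opened with tensorboard --logdir logs/ak_profile, where the Profile tab shows how much of each training step the GPU actually spends computing versus waiting on input.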
Reason
Solution