
Preserve node naming/namespacing #47

Open
matemijolovic opened this issue May 20, 2024 · 4 comments
Labels
enhancement New feature or request

Comments

@matemijolovic

Hello,

Thanks for developing this great tool! I have a question regarding layer/node naming in the converted Keras model.

For context, I have a use case where I post-process the converted graph and expect its nodes to conform to a custom naming schema (more specifically, to be "namespaced" or to contain certain specific keywords). In the current Nobuco setup, module names in the Keras graph are autogenerated and thus not connected to the PyTorch module names.

Do you think it would make sense to support some sort of naming/namespacing here?
For 1-to-1 mappings, the PyTorch module name could be used as the Keras layer name; for 1-to-N cases, I'd propose using the PyTorch module name as a prefix to the autogenerated name.

If you like this idea, I can offer some help with the implementation. Thanks in advance!
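To make the proposal concrete, here is a rough sketch in plain Python (the function name and parameters are hypothetical, not Nobuco API):

```python
def keras_layer_name(pt_module_name: str, autogen_name: str, num_keras_layers: int) -> str:
    """Derive a Keras layer name from the originating PyTorch module name.

    1-to-1 mapping: reuse the PyTorch module name as the Keras layer name.
    1-to-N mapping: use the PyTorch module name as a namespace prefix
    for each autogenerated Keras name.
    """
    if num_keras_layers == 1:
        return pt_module_name
    return f"{pt_module_name}/{autogen_name}"

print(keras_layer_name("encoder.fc1", "dense_3", num_keras_layers=1))   # encoder.fc1
print(keras_layer_name("encoder.attn", "dense_4", num_keras_layers=3))  # encoder.attn/dense_4
```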

@AlexanderLutsenko
Owner

AlexanderLutsenko commented May 24, 2024

Hi! I'm not sure how to implement that. tf.name_scope seems to have no effect.

```python
with tf.name_scope('block'):
    dense = tf.keras.layers.Dense(10, input_shape=(2,))

inputs = tf.keras.Input((1, 2))
outputs = dense(inputs)
model = tf.keras.Model(inputs, outputs)

for w in model.weights:
    print(w.name)
# dense/kernel:0
# dense/bias:0
```

Any ideas?

@AlexanderLutsenko AlexanderLutsenko added the enhancement New feature or request label May 24, 2024
@matemijolovic
Author

matemijolovic commented May 27, 2024

It's super weird how this works in TF. Usually, name_scope is not reflected in the in-memory node names, only in the serialized graph (which is exactly what I'm trying to achieve). But if you add tf.saved_model.save(model, 'test-model') at the end of your snippet and inspect the saved model in Netron, it turns out that doesn't work either. More specifically, it works only if you instantiate TensorFlow primitives, while Keras layers ignore it.

Then I stumbled upon this: keras-team/tf-keras#269

TL;DR: it works if you enable tf.keras.__internal__.apply_name_scope_on_model_declaration(True)

[Screenshot 2024-05-27 at 10:36:33: Netron view of the saved model with the name scope applied]

I'm still trying to find the proper place to apply the named scopes in Nobuco.

@matemijolovic
Author

I managed to hack it out but it's far from ideal: microblink@8b5a142

@AlexanderLutsenko
Owner

AlexanderLutsenko commented May 29, 2024

> I managed to hack it out but it's far from ideal: microblink@8b5a142

Hey, could you give me an example script? I tried the patch, and I see no effect on my machine.

My other concern is that disabling decorate_all() is really undesirable, as it damages Nobuco's tracing capabilities.
