
ModelWrapper.transform() mode returning "model_was_changed" flag #163

Open

Tobi-Alonso opened this issue Jun 23, 2020 · 2 comments

@Tobi-Alonso
Contributor

Processes like streamlining may require iterating over a sequence of transforms.
It would be useful to have a flag indicating whether the model was changed within a transform, e.g. to know that streamlining has converged (no more streamlining can be done).

For backward compatibility, this could be done using an extra optional argument for ModelWrapper.transform(), returning just the model by default.
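A minimal sketch of what this backward-compatible signature could look like. Note that `return_model_was_changed` is an invented parameter name, and the classes below are toy stand-ins for FINN's real `ModelWrapper` and transformation classes, used only to illustrate the idea:

```python
# Hypothetical sketch of the proposed extension to ModelWrapper.transform().
# `return_model_was_changed` and both classes are illustrative stand-ins,
# not FINN's actual API.

class ModelWrapper:
    def __init__(self, graph):
        self.graph = graph

    def transform(self, transformation, return_model_was_changed=False):
        model, any_change, changed = self, False, True
        # re-apply until the transformation reports a fixed point
        while changed:
            model, changed = transformation.apply(model)
            any_change = any_change or changed
        if return_model_was_changed:
            return model, any_change
        return model  # default: old behaviour, return just the model


class IncrementToThree:
    """Toy transformation: increments graph (an int here) until it is 3."""

    def apply(self, model):
        if model.graph < 3:
            return ModelWrapper(model.graph + 1), True
        return model, False
```

Existing callers keep working unchanged, while callers that opt in via the extra argument can tell whether another pass is worthwhile.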

@maltanar
Collaborator

maltanar commented Jun 23, 2020

From https://github.com/Xilinx/finn/blob/dev/src/finn/transformation/__init__.py#L30

* Your transformation's apply function should take in a ModelWrapper, and return
  a tuple with (transformed_model: ModelWrapper, model_was_changed: Bool)
* model_was_changed indicates whether your transformation made any changes to
  the model. If you know your transformation needs to be called only once and
  repeated calls have no further effect, you can return False even if the model
  was changed.
* You MUST return model_was_changed=False at some point when your transformation
  is called multiple times, otherwise apply_repeated() will loop infinitely.
* If you cannot guarantee that the transformation will reach a fixed point,
  you must declare this, return model_was_changed = False and let the user
  manually re-apply the transform.
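The contract above can be sketched with a small self-contained example. The classes here are simplified stand-ins (a node list instead of an ONNX graph), not FINN's real `Transformation` base class, but the `(model, model_was_changed)` protocol and the termination requirement are the same:

```python
# Simplified illustration of the (model, model_was_changed) contract and
# why apply_repeated() needs model_was_changed=False to terminate.
# These are toy stand-ins, not FINN's actual classes.

class FoldConstPairs:
    """Toy transformation: merges adjacent 'const' nodes in a node list."""

    def apply(self, model):
        for i in range(len(model) - 1):
            if model[i] == "const" and model[i + 1] == "const":
                # fold one pair and report a change, so we get called again
                return model[:i] + ["const"] + model[i + 2:], True
        # fixed point reached: MUST return False here, otherwise
        # apply_repeated() would loop forever
        return model, False


def apply_repeated(model, transformation):
    """Re-apply the transformation until it reports no further changes."""
    model_was_changed = True
    while model_was_changed:
        model, model_was_changed = transformation.apply(model)
    return model
```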

We do sometimes "cheat" by returning model_was_changed=False even though the model was changed, since we really use that variable as needs_to_run_again. Many other transformations actually keep track of whether they have modified the model and return that.

Do you have a specific use case that needs to distinguish between the two?

@Tobi-Alonso
Contributor Author

My goal was to build a general streamlining transformation.
The number of iterations over the sequence of transforms can vary from one net to another. We could set the iteration count to a large number so that most nets get fully streamlined, but that takes a while, particularly for large nets. The minimum number of iterations depends, among other things, on the graph topology.
To determine whether the sequence of transforms needs to be re-applied, it would be useful to know if they changed the model.
Alternatively, a model comparator could be used, but old.graph == new.graph was failing for me.
I have now found that the problem is that some transforms replace tensors without deleting the old ones. So, when calling GiveReadableTensorNames(), all tensors are first renamed by GiveRandomTensorNames(); the used tensors are then renamed according to graph topology, but the stale ones keep their random names.
Beyond making graphs comparable, it would be nice to keep the graph tidy for its own sake.
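The tidying step itself amounts to keeping only the tensors some node actually references. A minimal sketch of that idea, using plain dicts and lists as stand-ins for ONNX protobuf objects (this is not FINN code, just an illustration of the pruning logic):

```python
# Toy sketch of pruning stale tensors: keep only value_info entries that
# are referenced by the graph's inputs, outputs, or some node.
# Plain dicts/lists stand in for ONNX protobuf objects here.

def prune_unused_tensors(graph):
    used = set(graph["inputs"]) | set(graph["outputs"])
    for node in graph["nodes"]:
        used.update(node["inputs"])
        used.update(node["outputs"])
    # drop value_info entries that nothing references anymore
    graph["value_info"] = [t for t in graph["value_info"] if t in used]
    return graph
```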

Here is some quick-and-dirty code I used to test this. The input model has 1425 tensors, while new_model has 349. After this process, new_model was verified through execution comparison.

from finn.util.basic import remove_by_name

# rename twice: the second pass produces different names because stale
# tensors left behind by earlier transforms are still in the graph
model = model.transform(GiveReadableTensorNames())
new_model = model.transform(GiveReadableTensorNames())
t1_names = model.get_all_tensor_names()
t2_names = new_model.get_all_tensor_names()

print("Before:", model.graph == new_model.graph)
# remove the tensors whose names differ between the two passes
for t1, t2 in zip(t1_names, t2_names):
    if t1 != t2:
        remove_by_name(new_model.graph.value_info, t2)
        remove_by_name(new_model.graph.input, t2)
        remove_by_name(new_model.graph.output, t2)

new_model_v2 = new_model.transform(GiveReadableTensorNames())
print("After:", new_model_v2.graph == new_model.graph)
