Currently the analyzer just iterates over the raw data and applies the analysis in the local Python interpreter. Instead it should run in Spark. The thing I have to think about is what happens if a file is corrupt: data corruption should not raise an analyzer error; instead, the analyzer should notify the core that it is not happy with the data. (Also see mami-project/pto-core#20)
Example: download the IPFIX file from the measurement server without having stopped QoF beforehand. The IPFIX reader will raise an exception because there is no end signature(?) in the file.
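A minimal sketch of the intended behaviour (not the actual pto-core interface; the reader callable and the notify callback are hypothetical placeholders for whatever the core provides): reading a corrupt file is caught and reported to the core instead of propagating as an analyzer error.

```python
def analyze_file(path, read_records, run_analysis, notify_corrupt):
    """Run `run_analysis` over the records of one raw data file.

    If reading fails (e.g. a truncated IPFIX file missing its end
    signature), the corruption is reported via `notify_corrupt` and
    None is returned, so the analyzer never raises on bad input data.
    """
    try:
        records = list(read_records(path))   # may raise on a corrupt/truncated file
    except Exception as exc:
        notify_corrupt(path, str(exc))       # tell the core the data is bad
        return None
    return run_analysis(records)
```

The same wrapping would apply per file/partition when the analysis is moved into Spark, so one bad file only produces a corruption notification rather than failing the whole job.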