Updated parse_data to deal with more than 200 directional bins and zeros #21

Open
wants to merge 1 commit into master
Conversation


@ajsmale ajsmale commented Aug 11, 2018

Updated parse_data to deal with more than 200 directional bins as well as spectra with all zeros

Apparently SWAN writes a maximum of 200 directional bins to a single line (this is hardcoded somewhere and is not part of the SWAN input or user settings). With more than 200 bins, the data is spread over multiple lines. The code change allows for this.
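
For illustration, a minimal sketch of the line-wrapping arithmetic; MAX_BINS_PER_LINE and the bin count below are assumed example values, not read from any SWAN file:

import numpy as np

MAX_BINS_PER_LINE = 200   # per-line limit hardcoded in SWAN's block output (per the description above)
n_directions = 288        # hypothetical spectrum with more than 200 directional bins

# each frequency row is then wrapped over this many physical lines in the file
lines_per_row = int(np.ceil(n_directions / MAX_BINS_PER_LINE))   # -> 2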

Furthermore, if for some reason all data is zero (or, more specifically, all values are integers), np.asarray creates an integer array, which does not allow multiplication with the floating-point factor f. Changed np.asarray to also hardcode float64.
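
A minimal sketch of the dtype pitfall, assuming the factor is applied in place (the actual expression in parse_data may differ):

import numpy as np

f = 0.0005                                   # hypothetical scale factor from the file header
q_int = np.asarray([[0, 0, 0], [0, 0, 0]])   # all-zero block -> numpy infers an integer dtype

# q_int *= f raises a casting error, because the float result cannot be stored
# back into the integer array; hardcoding float64 avoids this for any input:
q = np.asarray([[0, 0, 0], [0, 0, 0]], dtype=np.float64)
q *= f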

@codecov-io

Codecov Report

Merging #21 into master will decrease coverage by 0.44%.
The diff coverage is 22.22%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master      #21      +/-   ##
==========================================
- Coverage   76.53%   76.08%   -0.45%     
==========================================
  Files           9        9              
  Lines        1189     1196       +7     
  Branches      248      250       +2     
==========================================
  Hits          910      910              
- Misses        168      174       +6     
- Partials      111      112       +1
Impacted Files Coverage Δ
oceanwaves/swan.py 72.7% <22.22%> (-1.06%) ⬇️

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d71a3da...cf1ee14. Read the comment docs.

@hoonhout
Contributor

Hi Alfons,

Thanks for the improvement. The fix seems appropriate, but it appears to work only for up to 400 bins. I think it is trivial to make the fix work for an unlimited number of bins. Untested example code:

n = len(self.frequencies)
m = int(np.ceil(len(self.directions) / MAX_BINS_PER_LINE))
q_splitted = self.lines.read_blockbody(n * m)
q_concatenated = []
for i in range(n):
    # rejoin the m partial lines that together form the row for frequency i
    q_concatenated.append(np.concatenate(q_splitted[m*i:m*(i+1)]))
q = np.asarray(q_concatenated, dtype=np.float64) * f

Further, I have some minor requests regarding readability and testing (see also the code snippet above).

Readability

  • Can you make a constant for the number 200, something like MAX_BINS_PER_LINE? Then we instantly know what this code snippet is doing.
  • Can you avoid non-descriptive names like temp? You can use suffixed names instead, for example q_splitted and q_concatenated.
  • Can you use i as the counter variable rather than ii? Where a second counter is needed I prefer a different character, for example j, but here there is no need for anything other than i.

Testing

  • Can you add an example file to data/swan that has more than 200 directional bins?
  • Can you add a test to tests/test_swan.py that tries to read this file? See the sketch below.
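
A rough, untested sketch of what such a test could look like; the example file name and the reader entry point (OceanWaves.from_swan) are assumptions and should be adapted to the conventions already used in tests/test_swan.py:

import os
import oceanwaves

def test_read_many_directional_bins():
    # hypothetical example file with more than 200 directional bins
    fname = os.path.join('data', 'swan', 'many_directional_bins.sp2')
    ow = oceanwaves.OceanWaves.from_swan(fname)   # reader entry point assumed; reading should not raise
    assert ow is not None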

Acknowledgment

Finally, can you add a line to docs/whatsnew.rst either under Bug fixes or Improvements under the unreleased section describing your fix and acknowledging yourself?

Bas
