Replies: 14 comments 21 replies
-
You probably need to add some details on the speed improvements so others can verify your findings.
-
I will. These are unit tests on large amounts of randomized data, using currently ugly optimizations and my absolutely awful implementations of memos in C. They aren't in my codebase yet, but it looks promising, provided the 0.05%-0.09% errors don't compound too much.
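To give a concrete picture of what I mean by "memos" here, this is a minimal sketch of the idea; the names and the grid size are made up for illustration and are not the actual code in my branch. The argument of an expensive routine is snapped onto a fixed grid and the result cached, so the 0.05%-0.09% error is quantization error:

```c
#include <math.h>
#include <stdio.h>

#ifndef PI
#define PI 3.141592653589793
#endif

#define MEMO_SIZE 1024                 /* grid points covering x in [0, 1] */

static double memoTable[MEMO_SIZE];    /* cached results                   */
static char   memoValid[MEMO_SIZE];    /* 1 once the slot has been filled  */

/* stand-in for an expensive routine: normalized area of a circular section */
static double slowFunc(double x)
{
    double theta = 2.0 * acos(1.0 - 2.0 * x);
    return (theta - sin(theta)) / (2.0 * PI);
}

/* memoized version: snap x to the nearest grid point and cache the result */
static double memoFunc(double x)
{
    int i = (int)(x * (MEMO_SIZE - 1) + 0.5);
    if (i < 0) i = 0;
    if (i > MEMO_SIZE - 1) i = MEMO_SIZE - 1;
    if (!memoValid[i])
    {
        memoTable[i] = slowFunc((double)i / (MEMO_SIZE - 1));
        memoValid[i] = 1;
    }
    return memoTable[i];
}

int main(void)
{
    double x = 0.3751;
    printf("exact = %.6f   memo = %.6f\n", slowFunc(x), memoFunc(x));
    return 0;
}
```

The speed win comes from trading repeated trig calls for an array read after the first hit at each grid point; the accuracy cost is bounded by the grid spacing.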
-
@swmm-js I moved this from an issue to a discussion. Feel free to keep us updated on your progress. Thanks!
-
I've stabilized the computations for USER5 to near identical results. I'll see what else I can do. From the report's continuity-error summaries:
Standard 5.23: Highest Continuity Errors: Node 3SB22 (61.19%)
Updated 5.23: Highest Continuity Errors: Node 3SB22 (61.20%)
-
@cbuahin I'm of the mindset that experimentation is good. If nothing else, it is constructive to see what @swmm-js can achieve using a memoization approach. It may prove useful elsewhere in the project or inspire other ideas.
-
@swmm-js, I always find that it's a good idea to wear a hard hat when working in open source. This mainly helps prevent me from knocking myself out on my own laptop when I bang my head against it after I push my bugged code. Writing code and sharing it with the public is an art form and exposes you to a lot of vulnerability. I don't think this audience is going to see that you get parking tickets. Some of us (here) have been working in open source for 10 years. It's a very respectable place to work and play, and it's way more fun than just watching other people do it. Barely anyone actually engages in any form of open source SWMM! But when you do, you'll be thankful that you have a hard hat on. Anyway, if you feel open to it, sharing code is something tangible that we can have a conversation around. :-)
-
@swmm-js Getting the same CE is good, but what about speed? If your code is faster than native SWMM5 then, assuming the inp files are the same, the difference has to be in the average time step and the average number of iterations. When you show a comparison you might also show the routing time step summary:
Routing Time Step Summary
Minimum Time Step : 0.50 sec
-
In your code, did this value decrease from 0.07, or vice versa? "% of Steps Not Converging : 0.06"
-
odesolve uses rkqs for integrating the runoff depth and groundwater depth. It is not a source of slowness in SWMM5, as the bulk of SWMM5's computational effort goes into the link hydraulics. Hydrology also uses longer computational time steps, whereas the links are often solved at one-second time steps or even smaller.
-
I think I'm going to move on to memoizing functions, but I just want to check some error results I'm seeing on my end. Do I have my math wrong? Is A_Circ in xsect.dat modified in some way? I know there isn't much of a difference, but I want to know what the acceptable error is on static tables and dynamic memo storage, and I just thought I'd pull up A_Circ to get a comparison.
in published code:
double A_Circ[51] =
// A/Afull v. Y/Yfull calcs:
double A_Circ[51] =
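For anyone who wants to reproduce the comparison, this is the closed-form relation I'm checking the table against. It is standard circular-conduit geometry, with names of my own choosing rather than SWMM's:

```c
#include <math.h>
#include <stdio.h>

#ifndef PI
#define PI 3.141592653589793
#endif

/* Normalized flow area of a circular section as a function of normalized
   depth: theta = 2*acos(1 - 2*y/yFull), A/Afull = (theta - sin(theta))/(2*pi).
   Printing 51 evenly spaced values gives a candidate table to compare
   entry-by-entry with the published A_Circ[51] in xsect.dat.                */
static double circArea(double yRatio)
{
    double theta = 2.0 * acos(1.0 - 2.0 * yRatio);
    return (theta - sin(theta)) / (2.0 * PI);
}

int main(void)
{
    int i, n = 51;
    for (i = 0; i < n; i++)
    {
        double y = (double)i / (n - 1);
        printf("%2d  y/yFull = %.2f  A/Afull = %.6f\n", i, y, circArea(y));
    }
    return 0;
}
```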
-
What I am doing right now is deciding whether my gap tolerances for static and dynamic table entries are acceptable. I just want to make clear: I am not error-checking the current tables, I am finding the currently acceptable gap error. I am not looking to correct any code here; I am trying to find a point where differences between the new results and the traditional results agree within acceptable error. Widening the table for A_Circ to 201 entries instead of 51 gives the following result changes for 849_1 of user5.inp:
published code results (51 entries): index ID179 849_1
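To make the "gap error" idea concrete, this is roughly how I'm measuring it; it is illustrative code against the circular-area relation, not the code in my branch. Build the table at a given resolution, read it back with linear interpolation, and take the worst difference from the closed-form value:

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef PI
#define PI 3.141592653589793
#endif

/* exact normalized area of a circular section (reference solution) */
static double circArea(double yRatio)
{
    double theta = 2.0 * acos(1.0 - 2.0 * yRatio);
    return (theta - sin(theta)) / (2.0 * PI);
}

/* worst-case linear-interpolation error for a table with n entries on [0,1] */
static double maxGapError(int n)
{
    double *tbl = malloc(n * sizeof(double));
    double worst = 0.0;
    int i;
    for (i = 0; i < n; i++) tbl[i] = circArea((double)i / (n - 1));

    for (i = 0; i < 10000; i++)
    {
        double y = (double)i / 9999.0;
        double p = y * (n - 1);               /* fractional table position */
        int    k = (int)p;
        if (k > n - 2) k = n - 2;
        double v = tbl[k] + (p - k) * (tbl[k + 1] - tbl[k]);
        double e = fabs(v - circArea(y));
        if (e > worst) worst = e;
    }
    free(tbl);
    return worst;
}

int main(void)
{
    printf("max gap error,  51 entries: %.6f\n", maxGapError(51));
    printf("max gap error, 201 entries: %.6f\n", maxGapError(201));
    return 0;
}
```

Going from 51 to 201 entries shrinks that worst-case gap; the 849_1 comparison above is meant to show the same effect in full-model terms.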
-
As a test, I'm showing one of the static table updates, which is probably a bad idea on my end - feel free to dismiss it entirely.
funcs.h:
double altfunc1(double val);
dwflow, findconduitsflow, line 208:
altfuncs.c (new):
//=============================================================================
double altfunc1(double val)
int N_W_altfunc1 = valcount;
So for valcount 801, which has much better error tolerance than A_Circ, that (static) table would start like this:
{0, 0.000855, 0.002154, 0.003699, 0.005429, 0.00731, 0.009322, 0.011449,
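To keep the conversation concrete, here is a sketch of the general shape such a static-table function could take, assuming an evenly spaced table over a normalized argument and linear interpolation on read-back. This is not the code from the actual update; only the first few table values quoted above are real, and a full table would carry all valcount (e.g. 801) entries generated offline.

```c
/* Sketch only: general shape of a static-table lookup like altfunc1(). */
static const double W_altfunc1[] = {
    0.0, 0.000855, 0.002154, 0.003699, 0.005429, 0.00731, 0.009322, 0.011449
    /* ..., remaining entries (up to valcount, e.g. 801) generated offline */
};
static const int N_W_altfunc1 =
    (int)(sizeof(W_altfunc1) / sizeof(W_altfunc1[0]));

/* look up val (assumed normalized to [0,1]) with linear interpolation */
double altfunc1(double val)
{
    double p;
    int    k;
    if (val <= 0.0) return W_altfunc1[0];
    if (val >= 1.0) return W_altfunc1[N_W_altfunc1 - 1];
    p = val * (N_W_altfunc1 - 1);          /* fractional position in table */
    k = (int)p;
    return W_altfunc1[k] + (p - k) * (W_altfunc1[k + 1] - W_altfunc1[k]);
}
```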
-
Others have suggested a power function for the xsect variables - which essentially gives you the 10,000 cross section points.
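For what it's worth, here is one quick way to gauge how far a single power function can go for one of the xsect relations. This is my own check of the idea, using the circular-area relation as the example; the fit and its error are computed by the code, not quoted from anyone:

```c
#include <math.h>
#include <stdio.h>

#ifndef PI
#define PI 3.141592653589793
#endif

/* exact normalized area of a circular section */
static double circArea(double y)
{
    double theta = 2.0 * acos(1.0 - 2.0 * y);
    return (theta - sin(theta)) / (2.0 * PI);
}

int main(void)
{
    /* least-squares fit of log(A) = log(c) + b*log(y) over 0 < y < 1 */
    double sx = 0, sy = 0, sxx = 0, sxy = 0, b, c, worst = 0;
    int i, n = 0;
    for (i = 1; i < 100; i++)
    {
        double y  = i / 100.0;
        double lx = log(y), ly = log(circArea(y));
        sx += lx; sy += ly; sxx += lx * lx; sxy += lx * ly; n++;
    }
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    c = exp((sy - b * sx) / n);

    /* worst-case error of the power-law approximation against the exact value */
    for (i = 1; i < 100; i++)
    {
        double y = i / 100.0;
        double e = fabs(c * pow(y, b) - circArea(y));
        if (e > worst) worst = e;
    }
    printf("fit: A/Afull ~= %.4f * (y/yFull)^%.4f, max abs error = %.4f\n",
           c, b, worst);
    return 0;
}
```

If the worst-case error of a single power law is too large, a piecewise fit or the denser tables discussed above remain the fallback.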
-
I have tidied up these changes and created a pull request. I have also created a similar update for xsections, but it is not in a pull request yet.
-
My internal testing shows some large speed improvements at a cost of about 0.1% error. I'm sure I'm doing something wrong; I'll finish up the basics of the full web interface and then do some more testing on EPA-SWMM. Thanks for pointing me in the right direction.