
Determining Risky expectations grid for Spark #213

Open
sbenthall opened this issue Apr 20, 2023 · 2 comments · May be fixed by #236
@sbenthall
Owner

See #162 ...

For Spark SHARK, we will need a good grid over risky expectations.
These will only come into play when the expectations regime is STRANGE.

One way we could do this is to set the STRANGE threshold low and cover the full range; this should be much like the WHITE SHARK case.

@alanlujan91
Collaborator

Is it possible to get a HARK-less run and cover that range? Then see if HARK's presence changes the range much.

@sbenthall
Owner Author

RiskyAvg range: from 1 to ... (what's implied by the usual dividend) ... maxRiskyAvg.

Get maxRiskyAvg by:

  • Generate returns based on the USUAL process. (How much?)
  • Get small chunks of that data. (How small?)
  • Compute the mean rate of return from those chunks
  • Use the max as the maxRiskyAvg.
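A minimal sketch of the steps above, assuming a lognormal "usual" return process. The process parameters, sample size, and chunk size are illustrative assumptions, not answers to the open "How much? / How small?" questions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "usual" process: assumed gross mean return and std
# (illustrative numbers, not the project's calibration).
usual_risky_avg = 1.08
usual_risky_std = 0.20

# Method-of-moments lognormal parameters matching that mean and std.
sigma2 = np.log(1 + (usual_risky_std / usual_risky_avg) ** 2)
mu = np.log(usual_risky_avg) - sigma2 / 2

# Step 1: generate returns from the usual process ("How much?" -- 100k draws here).
returns = rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=100_000)

# Step 2: split into small chunks ("How small?" -- 20 periods here) and
# Step 3: compute the mean rate of return within each chunk.
chunk_means = returns.reshape(-1, 20).mean(axis=1)

# Step 4: the max of the per-chunk means is the candidate maxRiskyAvg.
max_risky_avg = chunk_means.max()
```

Smaller chunks give noisier chunk means and hence a larger maxRiskyAvg, so the chunk size effectively controls how wide the grid ends up.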

Or:

  • Look at the usual distribution
  • Find the value of the tail at some high confidence level.
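The tail-quantile alternative has a closed form if the usual distribution is lognormal; the 99% confidence level and the process parameters below are assumptions:

```python
import numpy as np
from statistics import NormalDist

# Assumed lognormal "usual" process (placeholder parameters).
usual_risky_avg = 1.08
usual_risky_std = 0.20
sigma2 = np.log(1 + (usual_risky_std / usual_risky_avg) ** 2)
mu = np.log(usual_risky_avg) - sigma2 / 2

# Value of the right tail at a high confidence level (99% here):
# the lognormal quantile is exp(mu + sigma * z), with z the normal quantile.
z = NormalDist().inv_cdf(0.99)
max_risky_avg = np.exp(mu + np.sqrt(sigma2) * z)
```

This avoids the chunk-size choice entirely, at the cost of picking a confidence level instead.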

RiskyStd range: from 0 to ... (the usual dividend $\sigma$) ...

(something similar to set the max).
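Something similar for RiskyStd might look like this sketch: per-chunk standard deviations set the max, and the grid then runs from 0 to that max (all numbers are placeholder assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Same placeholder "usual" lognormal process as for RiskyAvg.
usual_risky_avg = 1.08
usual_risky_std = 0.20
sigma2 = np.log(1 + (usual_risky_std / usual_risky_avg) ** 2)
mu = np.log(usual_risky_avg) - sigma2 / 2
returns = rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=100_000)

# Per-chunk standard deviations; the max sets the top of the RiskyStd range.
chunk_stds = returns.reshape(-1, 20).std(axis=1)
max_risky_std = chunk_stds.max()

# RiskyStd grid as described: from 0 to the max (11 points is arbitrary).
risky_std_grid = np.linspace(0.0, max_risky_std, 11)
```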

There are many more clever ways to do the calibration but we probably don't have time to implement them in SPARK.

@alanlujan91 alanlujan91 linked a pull request May 26, 2023 that will close this issue