personal feature question. #363

Open
turnerrocks1 opened this issue Sep 2, 2022 · 0 comments
Comments


turnerrocks1 commented Sep 2, 2022

I have a proposal for adding a new feature. It isn't anything great or mind-blowing, just another aspect to push fuzzing a bit further. I implemented this feature in two ways. The first was adding it as a template to the code generator, which had a very poor success rate: after 5 million executions, the reported correctness rate with the ProgramTemplate implementation was around 7-8%. After manually opening the corpus JS files in Sublime Text to inspect them and see why the correctness rate was so low, I saw it wasn't generating what I wanted it to.

Here's an example below of how the "deoptimization fuzzer" I came up with would ideally behave; refer to the random sample I typed up below. Side note: this sample depends on the fuzzer settings. It only works if Fuzzilli's defaults match the following JSC options; if the settings are changed or don't match, it won't work:

--thresholdForJITSoon=10
--thresholdForJITAfterWarmUp=10
--thresholdForOptimizeAfterWarmUp=100
--thresholdForOptimizeAfterLongWarmUp=100
--thresholdForOptimizeSoon=100
--thresholdForFTLOptimizeAfterWarmUp=1000
--thresholdForFTLOptimizeSoon=1000

dummy.js file:
function forcejitfunction() { return 0; }

for(var i = 0; i < 10; i++) { forcejitfunction(); } // this should force LLInt or the Baseline JIT? I forget whether those are the same or different tier levels, but running this with --logJIT=true shows a Baseline JIT compilation with (DidTryToEnter).
for(var i = 0; i < 100; i++) { forcejitfunction(); } // this triggers DFG JIT optimization
for(var i = 0; i < 1000; i++) { forcejitfunction(); } // this triggers FTL JIT optimization

My point in bringing this up is that everyone is probably already aware of JIT optimization bugs, how the different tiers work, and what they are.

From my standpoint, after some deep Google searching, I don't think I've run into a case of a JIT "deoptimization" bug; it's usually the complete opposite. For example...

This is the shape of a normal, typical JIT bug (a hand-written sketch of this ordering follows below):

function...
do something or declare a variable, do something...
end function...

change the variable or variable types, etc...
pass the function a type or arguments...

force JIT compilation of the function and call it with different args...

boom, the bug happens here, resulting in a type confusion because the JIT tiers made wrong assumptions, or because of graph construction or optimizations...
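
To make that ordering concrete, here is a minimal hand-written sketch of the typical pattern. The names victim, good, and evil are purely illustrative, and this snippet doesn't trigger any real bug; it only shows where the confusion would occur:

// victim() speculates on the shape/type of its argument once it gets JIT compiled
function victim(o) { return o.x + 1; }

let good = {x: 1};
for(var i = 0; i < 1000; i++) { victim(good); } // tier up to FTL with "o.x is an int" baked into the speculation

let evil = {x: 1.5, y: 2}; // different property type / different shape
victim(evil); // in a buggy engine, a stale speculation surviving here is what would turn into a type confusion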

I haven't run into a PoC/exploit/writeup where it happened in the following order:

function...
do something or declare a variable, do something...
end function...

change the variable or variable types, etc...
pass the function a type or arguments...

force JIT compilation of the function and call it with different args...

make new variables or change the types of the previous args...

deoptimize/lower the JIT tier of the function with a for loop...

call the function at the lower tier level with different args, or the same args with changes...

boom, the bug happens, resulting in a type confusion because the JIT tiers made wrong assumptions, or because of graph construction or optimizations...
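
And a matching hand-written sketch of that second ordering (same illustrative names as above, no real bug; whether the smaller loop counts actually demote the tier is exactly the open question here):

function victim(o) { return o.x + 1; }

let good = {x: 1};
for(var i = 0; i < 1000; i++) { victim(good); } // tier up to FTL
victim({x: 1.5, y: 2}); // feed it a new shape/type

for(var i = 0; i < 100; i++) { victim(good); } // re-warm at the DFG threshold after the bailout
victim({x: "str"}); // call again at the (hopefully) lower tier with yet another type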

But the idea is to go up the JIT tiers and then force a deoptimization back down, like so:

for(var i = 0; i < 1000; i++) { forcejitfunction(); } // force FTL JIT
for(var i = 0; i < 100; i++) { forcejitfunction(); } // force DFG JIT
for(var i = 0; i < 10; i++) { forcejitfunction(); } // force Baseline JIT

Now, as stated, I had this implemented in two ways. The first was as a ProgramTemplate, which failed miserably because it would produce cases like this:

for(var i = 0; i < 1000; i++) { } // supposed to force FTL JIT but does nothing, since it doesn't actually call or do anything
for(var i = 0; i < 100; i++) { } // supposed to force DFG JIT but does nothing, since it doesn't actually call or do anything
for(var i = 0; i < 10; i++) { } // supposed to force Baseline JIT but does nothing, since it doesn't actually call or do anything
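
For contrast, the shape I actually wanted the template to generate looks roughly like this (with a made-up generated function v0): the loops have to actually call the function they are supposed to warm up.

function v0(a, b) { return a + b; }

for(var i = 0; i < 1000; i++) { v0(1, 2); } // actually warms v0 up towards FTL
for(var i = 0; i < 100; i++) { v0(1, 2); } // re-warm at the DFG threshold
for(var i = 0; i < 10; i++) { v0(1, 2); } // re-warm at the Baseline threshold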

It was built like so in ProgramTemplates.swift:

"ProgramTemplate("JITdeoptimizer") { b in
let genSize = 6

    // Generate random function signatures as our helpers
    var functionSignatures = ProgramTemplate.generateRandomFunctionSignatures(forFuzzer: b.fuzzer, n: 10)

    // Generate random property types
    ProgramTemplate.generateRandomPropertyTypes(forBuilder: b)

    // Generate random method types
    ProgramTemplate.generateRandomMethodTypes(forBuilder: b, n: 5)

    b.generate(n: genSize)

    // Generate some small functions
    for signature in functionSignatures {
        // Here generate a random function type, e.g. arrow/generator etc
        b.buildPlainFunction(withSignature: signature) { args in
            b.generate(n: genSize)
        }
    }

    // Generate a larger function
    let signature = ProgramTemplate.generateSignature(forFuzzer: b.fuzzer, n: 4)
    let f = b.buildPlainFunction(withSignature: signature) { args in
        // Generate (larger) function body
        b.generate(n: 40)
    }

    // Generate some random instructions now
    b.generate(n: genSize)

    // trigger JIT
    b.buildForLoop(b.loadInt(0), .lessThan, b.loadInt(100), .Add, b.loadInt(1)) { args in
        b.callFunction(f, withArgs: b.generateCallArguments(for: signature))
    }

    // more random instructions
    b.generate(n: genSize)
    b.callFunction(f, withArgs: b.generateCallArguments(for: signature))

    // maybe trigger recompilation
    b.buildForLoop(b.loadInt(0), .lessThan, b.loadInt(1000), .Add, b.loadInt(1)) { args in
        b.callFunction(f, withArgs: b.generateCallArguments(for: signature))
    }

    // deoptimize back down from here...

    // trigger JIT and deoptimize
    b.buildForLoop(b.loadInt(0), .lessThan, b.loadInt(100), .Add, b.loadInt(1)) { args in
        b.callFunction(f, withArgs: b.generateCallArguments(for: signature))
        // deoptimize down to DFG JIT after we tiered up to FTL!
    }

    // maybe trigger recompilation and deoptimize
    b.buildForLoop(b.loadInt(0), .lessThan, b.loadInt(10), .Add, b.loadInt(1)) { args in
        b.callFunction(f, withArgs: b.generateCallArguments(for: signature))
        // should this deoptimize here down to LLInt/Baseline?
    }

    // more random instructions
    b.generate(n: genSize)

    b.callFunction(f, withArgs: b.generateCallArguments(for: signature))

}"

I then moved on to simply adding some of this to the JIT1Function template. After 24,716,630 total executions with 22.72% coverage on the JSC engine, the correctness rate for JIT1Function was 34.44% across all four Fuzzilli jobs I had running.

Any input would be great. Should I have kept the template separate? Also, is this a waste of effort, or is there no point in doing it? Is there even such a thing as a deoptimization bug? I only made this template on the assumption that if there have been JIT optimization bugs in the past, then there could potentially be deoptimization bugs as well.

Thanks in advance for any answers or takes on the matter.

turnerrocks1 changed the title from "Dropping a new *small* feature" to "personal feature question." on Sep 2, 2022