carlini

Organizations

@evaluating-adversarial-robustness

Pinned

  1. nn_robust_attacks

    Robust evasion attacks against neural networks to find adversarial examples

    Python · 803 stars · 231 forks

  2. anishathalye/obfuscated-gradients

    Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

    Jupyter Notebook · 885 stars · 171 forks

  3. js13k2019-yet-another-doom-clone

    Yet another doom clone (in 13kb of JavaScript)

    JavaScript · 292 stars · 67 forks

  4. printf-tac-toe

    tic-tac-toe in a single call to printf (see the %n sketch after this list)

    C · 2.2k stars · 53 forks

  5. google-research/deduplicate-text-datasets

    Rust · 1.1k stars · 112 forks

  6. yet-another-applied-llm-benchmark

    A benchmark to evaluate language models on questions I've previously asked them to solve.

    Python · 934 stars · 67 forks
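
How can tic-tac-toe fit in a single printf call? The printf-tac-toe repository documents its actual construction; the snippet below is only a minimal sketch of the core printf feature such a program presumably leans on: the %n conversion, which writes the number of characters printed so far through a pointer argument, so one format string can both produce output and mutate program state. This is an illustration of %n, not code from the repo.

    #include <stdio.h>

    int main(void) {
        int count = 0;
        /* %n stores the number of characters printed so far into count. */
        printf("X|O|X%n\n", &count);
        /* count is now 5: the five characters of "X|O|X" printed before %n. */
        printf("that row was %d characters wide\n", count);
        return 0;
    }

Combined with tricks like conditional field widths and re-reading the same argument, this kind of in-format state mutation is what makes a printf-only game loop conceivable.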