Showing 1 changed file with 57 additions and 0 deletions.
@@ -0,0 +1,57 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Expanding on Modeling with Random Variables\n", | ||
"\n", | ||
"When modeling the Guesser, I made the decision to represent the keywords as random variables. This decision may seem a bit contrived because the Guesser only has one keyword. It is in the case of the Intercepter where this decision becomes more clear. The Intercepter does not know what their opponent's keywords are, but they know each of the opponent's keywords is *a* word. Therefore, the Intercepter may find utility in thinking of each keyword as having a certain probability of being any possible word.\n", | ||
"\n", | ||
"Furthermore, when an Intercepter learns about random variables through clues, it's impression of each opponent keyword (the distribution it has for each random variable) may change. We may realize that if our Encryptor is naive and always gives revealing clues, the Interceptor's representation approaches the Guesser's in the limit as rounds continue!\n", | ||
"\n", | ||
"Therefore, if we can appropriately modify the Interceptor's distribution for each random variable, we may use the same guessing algorithm for it to make guesses!\n", | ||
"\n", | ||
"Interestingly, this evokes notions of entropy and information. We may be able to quantify how many bits of information a clue gave the Intercepter by comparing the entropy of its random variables before and after the round. The difference in entropy should correspond to the gained information.\n", | ||
"\n", | ||
"As an Encryptor, we might recognize that our clues are giving the opponent Intercepter information, and give a bad clue to increase the entropy of the opponent Intercepter's random variables. A good Encryption strategy might be to keep track of how much entropy you expect the Intercepter has to deal with, and to present a misleading or revealing clue accordingly." | ||
]
},
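{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of that measurement, assuming each keyword distribution is stored as a dict mapping candidate words to probabilities (the helper names `shannon_entropy` and `information_gained` are illustrative, not from an existing module):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import math\n",
"\n",
"def shannon_entropy(distribution):\n",
"    \"\"\"Entropy in bits of a dict mapping words to probabilities.\"\"\"\n",
"    return -sum(p * math.log2(p) for p in distribution.values() if p > 0)\n",
"\n",
"def information_gained(before, after):\n",
"    \"\"\"Bits of information a clue provided: the drop in entropy.\"\"\"\n",
"    return shannon_entropy(before) - shannon_entropy(after)\n",
"\n",
"# A clue that rules out half of a uniform 4-word distribution gives 1 bit.\n",
"before = {\"river\": 0.25, \"bank\": 0.25, \"money\": 0.25, \"water\": 0.25}\n",
"after = {\"river\": 0.5, \"bank\": 0.5, \"money\": 0.0, \"water\": 0.0}\n",
"print(information_gained(before, after))  # 2.0 - 1.0 = 1.0"
]
},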
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initial Distributions\n",
"\n",
"If our Intercepter is well-versed in Decrypto, it may recognize that there is an almost equal probability of the opponent's keywords being each of the official keywords. That feels a bit like cheating, though; I usually don't know the official keywords when I play. So, our Intercepter may guess less omnipotently that it may be any word. As we have seen, it may make sense to only include nouns, but we may consider that later.\n", | ||
"\n", | ||
"It may also make sense to scale according to frequency. That is, extremely rare words are probably less likely to be a keyword than more common words." | ||
]
},
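{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sketch of that initialization, assuming we have a vocabulary with raw frequency counts (the `vocabulary` and `frequency_counts` values here are toy stand-ins):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def initial_distribution(vocabulary, frequency_counts):\n",
"    \"\"\"Give each word initial probability proportional to its frequency.\"\"\"\n",
"    total = sum(frequency_counts[word] for word in vocabulary)\n",
"    return {word: frequency_counts[word] / total for word in vocabulary}\n",
"\n",
"# Toy counts: common words get more initial mass than rare ones.\n",
"vocabulary = [\"water\", \"river\", \"bank\", \"sesquipedalian\"]\n",
"frequency_counts = {\"water\": 900, \"river\": 500, \"bank\": 580, \"sesquipedalian\": 2}\n",
"print(initial_distribution(vocabulary, frequency_counts))"
]
},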
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Updating Distributions\n",
"\n",
"How should our Intercepter update it's internal representation of each distribution upon learning new clues? This is key to how it will improve its guesses over time.\n", | ||
"\n", | ||
"An astute Intercepter may recognize that an astute opposing Encryptor may be misleading at times, and change its distribution-updating strategy accordingly (man this is getting meta). We will ignore this for now, and come up with strategies that assume clues are \"good\".\n", | ||
"\n", | ||
"A general update strategy may look like multiplying probabilities elementwise, and renormalizing. That is, an Intercepter will multiply each current keyword probability by the likelihood that the clue was for the keyword. Then, it will scale the distribution by a constant factor so that it still adds to 1. The difference in strategies will reside in how the likelihood that the clue was for each keyword is derived.\n", | ||
"\n", | ||
"One strategy the Intercepter might try is comparing similarity. That is, it might use a heuristic like the square_cosine_similarity or normalized_cosine_similarity to derive these probabilities, where more similar words are more likely to be chosen.\n", | ||
"\n", | ||
"One strategy the Intercepter might try is comparing ranked similarity. That is, it might use a word's ranked similarity to the clue to derive these probabilities, where better ranks correspond to higher probabilities." | ||
]
},
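{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sketch of the multiply-and-renormalize update with both likelihood heuristics, assuming word embeddings are available as a dict of numpy vectors (the toy `vectors` and the helper functions here are illustrative stand-ins for the similarity heuristics named above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def cosine_similarity(u, v):\n",
"    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))\n",
"\n",
"def similarity_likelihoods(clue, words, vectors):\n",
"    \"\"\"Squared cosine similarity as likelihood (stand-in for square_cosine_similarity).\"\"\"\n",
"    return {w: cosine_similarity(vectors[clue], vectors[w]) ** 2 for w in words}\n",
"\n",
"def rank_likelihoods(clue, words, vectors):\n",
"    \"\"\"Ranked similarity as likelihood: better ranks get higher weight.\"\"\"\n",
"    ranked = sorted(words, key=lambda w: cosine_similarity(vectors[clue], vectors[w]), reverse=True)\n",
"    return {w: 1.0 / (rank + 1) for rank, w in enumerate(ranked)}\n",
"\n",
"def update_distribution(distribution, likelihoods):\n",
"    \"\"\"Multiply each probability by its likelihood, then renormalize to sum to 1.\"\"\"\n",
"    unnormalized = {w: p * likelihoods[w] for w, p in distribution.items()}\n",
"    total = sum(unnormalized.values())\n",
"    return {w: p / total for w, p in unnormalized.items()}\n",
"\n",
"# Toy vectors: \"river\" is most similar to the clue \"stream\".\n",
"vectors = {\n",
"    \"stream\": np.array([1.0, 0.1]),\n",
"    \"river\": np.array([0.9, 0.2]),\n",
"    \"bank\": np.array([0.5, 0.8]),\n",
"    \"money\": np.array([0.1, 1.0]),\n",
"}\n",
"prior = {\"river\": 1 / 3, \"bank\": 1 / 3, \"money\": 1 / 3}\n",
"print(update_distribution(prior, similarity_likelihoods(\"stream\", prior, vectors)))\n",
"print(update_distribution(prior, rank_likelihoods(\"stream\", prior, vectors)))"
]
}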
],
"metadata": {
"language_info": {
"name": "python"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}