Home
- A paper on TensorLog.
- About the TensorLog Database
- About the TensorLog Program
- About TensorLog Dataset
To issue queries interactively to a TensorLog program, put the tensorlog source directory on your PYTHONPATH and then use the command:
python -i -m tensorlog --prog foo.ppr [--proppr] --db bar.cfacts
For example, in the src directory you can say:
python -i -m tensorlog --prog test/textcat.ppr --proppr --db test/textcattoy.cfacts
This is just a Python shell, but a variable ti has been bound to a tensorlog.Interp instance, with ti.prog and ti.db pointing to the loaded program and database. You can get help with the method ti.help(), and you can inspect the source code for a predicate, or the function that it's compiled to, e.g.:
>>> ti.list("predict/2")
predict(X,Pos) :- assign(Pos,pos), hasWord(X,W), posPair(W,F), weighted(F).
predict(X,Neg) :- assign(Neg,neg), hasWord(X,W), negPair(W,F), weighted(F).
>>> ti.list("predict/io")
SoftmaxFunction
| SumFunction
| | w_Pos = OpSeqFunction(['X']) # predict(X,Pos) :- assign(Pos,pos), hasWord(X,W), posPair(W,F), weighted(F).
| | | f_1_Pos = U_[pos] # assign(Pos,pos) -> Pos
| | | f_2_W = X * M_[hasWord(i,o)] # hasWord(X,W) -> W
| | | f_3_F = f_2_W * M_[posPair(i,o)] # posPair(W,F) -> F
| | | b_4_F = V_[weighted(i)] # weighted(F) -> F
| | | fb_F = f_3_F o b_4_F # F -> PSEUDO
| | | w_Pos = f_1_Pos * fb_F.sum() # fb_F -> PSEUDO
| | w_Neg = OpSeqFunction(['X']) # predict(X,Neg) :- assign(Neg,neg), hasWord(X,W), negPair(W,F), weighted(F).
| | | f_1_Neg = U_[neg] # assign(Neg,neg) -> Neg
| | | f_2_W = X * M_[hasWord(i,o)] # hasWord(X,W) -> W
| | | f_3_F = f_2_W * M_[negPair(i,o)] # negPair(W,F) -> F
| | | b_4_F = V_[weighted(i)] # weighted(F) -> F
| | | fb_F = f_3_F o b_4_F # F -> PSEUDO
| | | w_Neg = f_1_Neg * fb_F.sum() # fb_F -> PSEUDO
You can't evaluate the program yet - in this case we have an untrained program with undefined rule weights, which you need to initialize, e.g. with
ti.prog.setWeights(ti.db.ones())
You can then run a function with the eval command:
>>> ti.eval("predict/io","dh") {'neg': 0.49999979211790663, 'pos': 0.49999979211790663, '__NULL__': 4.1576418669185313e-07}
You can also use the debug method, which pops up a Tkinter window that lets you inspect the compiled function, the messages passed around for this input, and, if they exist, the deltas (the errors that are backpropagated in training).
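For example, continuing the interpreter session above, you can open the debugger on the same input that was evaluated earlier:
>>> ti.debug("predict/io","dh")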
You can run a very simple experiment from the command line, e.g.:
python -m expt --prog test/textcat.ppr --proppr --db test/textcattoy.cfacts --trainData test/toytrain.exam --testData test/toytest.exam
This does a train-and-test cycle and saves the result to a new database, expt-model.db. The new database includes the trained parameters (as well as all the fixed ones!), so you can try it out with commands like:
python -i -m tensorlog --prog test/textcat.ppr --proppr --db expt-model.db
ti.debug("predict/io","dh")
More generally, you'll want to configure the parameters for an experiment and run it from Python. Here's one example, from the datasets/wordnet directory. First, we'll need to import what we need:
import sys
import expt
import declare
import tensorlog
import learn
Then in the main of the program, we'll set some options:
pred = 'hypernym' if len(sys.argv)<=1 else sys.argv[1]
epochs = 30 if len(sys.argv)<=2 else int(sys.argv[2])
To load the right database and so on, we'll use the tensorlog.parseCommandLine static function, which is what the interpreter uses to parse the sys.argv[1:] options when you invoke it from the command line.
optdict,args = tensorlog.parseCommandLine("--db wnet.db|wnet.cfacts".split())
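Judging from the names, optdict holds the parsed options and args holds any remaining arguments. As a minimal sketch of how the result might be used, assume the loaded database is available under the key 'db' (that key name is an assumption, not something documented on this page); the .ones() call is the same one used with ti.db above:
# sketch only: the 'db' key is an assumption about optdict's contents
db = optdict['db']
initWeights = db.ones()   # all-ones parameter vector, as in ti.prog.setWeights(ti.db.ones())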