wwcohen edited this page Jul 1, 2016 · 49 revisions

Background

Using TensorLog interactively

To issue queries interactively to a TensorLog program, put the TensorLog source directory on your PYTHONPATH and then use the command:

 python -i -m tensorlog --prog foo.ppr [--proppr] --db bar.cfacts

For example, from the src directory you can say:

 python -i -m tensorlog --prog test/textcat.ppr --proppr --db test/textcattoy.cfacts

This is just a Python shell, but a variable ti has been bound to a tensorlog.Interp instance, with ti.prog and ti.db bound to the compiled program and the database. You can get help with the method ti.help(), and you can inspect the source code for a predicate, or the function that it's compiled to, e.g.:

 >>> ti.list("predict/2")
 predict(X,Pos) :- assign(Pos,pos), hasWord(X,W), posPair(W,F), weighted(F).
 predict(X,Neg) :- assign(Neg,neg), hasWord(X,W), negPair(W,F), weighted(F).
 >>> ti.list("predict/io")
 SoftmaxFunction
 | SumFunction
 | | w_Pos = OpSeqFunction(['X']) # predict(X,Pos) :- assign(Pos,pos), hasWord(X,W), posPair(W,F), weighted(F).
 | | | f_1_Pos = U_[pos] # assign(Pos,pos) -> Pos
 | | | f_2_W = X * M_[hasWord(i,o)] # hasWord(X,W) -> W
 | | | f_3_F = f_2_W * M_[posPair(i,o)] # posPair(W,F) -> F
 | | | b_4_F = V_[weighted(i)] # weighted(F) -> F
 | | | fb_F = f_3_F o b_4_F # F -> PSEUDO
 | | | w_Pos = f_1_Pos * fb_F.sum() # fb_F -> PSEUDO
 | | w_Neg = OpSeqFunction(['X']) # predict(X,Neg) :- assign(Neg,neg), hasWord(X,W), negPair(W,F), weighted(F).
 | | | f_1_Neg = U_[neg] # assign(Neg,neg) -> Neg
 | | | f_2_W = X * M_[hasWord(i,o)] # hasWord(X,W) -> W
 | | | f_3_F = f_2_W * M_[negPair(i,o)] # negPair(W,F) -> F
 | | | b_4_F = V_[weighted(i)] # weighted(F) -> F
 | | | fb_F = f_3_F o b_4_F # F -> PSEUDO
 | | | w_Neg = f_1_Neg * fb_F.sum() # fb_F -> PSEUDO

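The forward message-passing in the listing above can be sketched in plain Python. This is a minimal illustration with toy dense lists standing in for TensorLog's internal matrix representations; the dimensions and matrix contents are made up, but the sequence of operations mirrors the f_2_W, f_3_F, b_4_F, fb_F steps in the compiled function:

```python
# Sketch of the forward message-passing steps from the listing above,
# using plain Python lists in place of TensorLog's internal matrices.

def vec_mat(v, M):
    """Multiply a row vector v by a matrix M (written v * M in the listing)."""
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

def hadamard(a, b):
    """Elementwise (Hadamard) product, written 'o' in the listing."""
    return [x * y for x, y in zip(a, b)]

# Toy dimensions: 2 documents, 3 words, 2 features (all contents illustrative).
X = [1, 0]                        # one-hot vector for the input document
M_hasWord = [[1, 1, 0],           # hasWord(i,o): document -> words
             [0, 1, 1]]
M_posPair = [[1, 0],              # posPair(i,o): word -> features
             [0, 1],
             [1, 1]]
V_weighted = [1.0, 1.0]           # weighted(i): the rule-weight parameter vector

f_2_W = vec_mat(X, M_hasWord)     # hasWord(X,W) -> W
f_3_F = vec_mat(f_2_W, M_posPair) # posPair(W,F) -> F
b_4_F = V_weighted                # weighted(F) -> F
fb_F = hadamard(f_3_F, b_4_F)     # f_3_F o b_4_F
w_Pos_score = sum(fb_F)           # fb_F.sum(); the final w_Pos scales U_[pos] by this
print(w_Pos_score)                # -> 2.0
```

The final step in the listing multiplies this scalar score by the unit vector U_[pos], which is what puts all the mass for this rule on the constant pos.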
You can't evaluate the program yet: in this case we have an untrained program with undefined rule weights, which you need to initialize, e.g. with

 ti.prog.setWeights(ti.db.ones())

You can then run a function with the eval command:

 >>> ti.eval("predict/io","dh")
 {'neg': 0.49999979211790663, 'pos': 0.49999979211790663, '__NULL__': 4.1576418669185313e-07}

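The near-50/50 split makes sense: with all rule weights set to one, the pos and neg rules are structurally identical, so both branches produce the same score and the final softmax divides the mass evenly between them (the tiny remainder goes to the __NULL__ entry). A quick check of that arithmetic in plain Python, not TensorLog code:

```python
import math

def softmax(scores):
    """Standard softmax: exponentiate, then normalize to sum to 1."""
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Equal scores for the pos and neg branches -> equal probabilities.
probs = softmax([2.0, 2.0])
print(probs)  # [0.5, 0.5]
```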
You can also use the debug method, which pops up a Tkinter window that lets you inspect the compiled function and the messages passed around for this input, and, if they exist, the deltas (the errors that are backpropagated in training).

Running a learning experiment

You can run a very simple experiment from the command line, e.g.:

 python -m expt --prog test/textcat.ppr --proppr --db test/textcattoy.cfacts --trainData test/toytrain.exam --testData test/toytest.exam

This does a train and test cycle and saves the result to a new database, expt-model.db. The new database includes the trained parameters (as well as all the fixed ones!) so you can try it out with commands like:

 python -i -m tensorlog --prog test/textcat.ppr --proppr --db expt-model.db
 ti.debug("predict/io","dh")

More generally, you'll want to configure the parameters for an experiment and run it from Python. Here's one example, from the datasets/wordnet directory. First, import the modules we'll need:

 import sys
 import expt
 import declare
 import tensorlog
 import learn

Then in the main of the program, we'll set some options:

 pred = 'hypernym' if len(sys.argv)<=1 else sys.argv[1]
 epochs = 30 if len(sys.argv)<=2 else int(sys.argv[2])

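This is the usual positional-defaulting idiom: each option falls back to a default unless a corresponding command-line argument was supplied. A self-contained sketch, with a hypothetical argv list substituted for the real sys.argv:

```python
# Positional-argument defaulting, as in the snippet above. A made-up argv
# stands in for sys.argv so the sketch runs on its own.
argv = ['expt.py', 'hyponym']   # pretend the script was run as: python expt.py hyponym

pred = 'hypernym' if len(argv) <= 1 else argv[1]      # default predicate
epochs = 30 if len(argv) <= 2 else int(argv[2])       # default epoch count
print(pred, epochs)  # hyponym 30
```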
To load the right database and so on, we'll use the tensorlog.parseCommandLine static function, which is the same function the interpreter uses to parse the sys.argv[1:] options when you invoke it from the command line.

 optdict,args = tensorlog.parseCommandLine([
  '--db', 'wnet.db|wnet.cfacts',
  '--prog','wnet-learned.ppr', '--proppr',
  '--train','%s-train.dset|%s-train.examples' % (pred,pred),
  '--test', '%s-test.dset|%s-test.examples' % (pred,pred)])