Experiment
You can run a very simple experiment from the command line, e.g.:
python -m expt --prog test-data/textcat.ppr --proppr --db test-data/textcattoy.cfacts --trainData test-data/toytrain.exam --testData test-data/toytest.exam
In version 1.2.3, you can add some additional arguments like this:
python -m expt --prog test-data/textcat.ppr --proppr --db test-data/textcattoy.cfacts --trainData test-data/toytrain.exam --testData test-data/toytest.exam +++ --savedModel xxx --learner yyyy --learnerOpts '{zzz:...}'
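The `+++` marker separates expt's own options from the extra learner-specific options that follow it. The sketch below is not the actual expt implementation, just a minimal illustration of how a `+++`-style separator can split a command line into a base part and an extra part:

```python
import sys

def split_args(argv):
    """Split argv at the first '+++' marker into (base, extra) lists."""
    if '+++' in argv:
        i = argv.index('+++')
        return argv[:i], argv[i + 1:]
    return argv, []

# example: base experiment options, then learner options after '+++'
base, extra = split_args(
    ['--prog', 'test-data/textcat.ppr', '--proppr',
     '+++', '--savedModel', 'xxx', '--learner', 'yyyy'])
print(base)   # ['--prog', 'test-data/textcat.ppr', '--proppr']
print(extra)  # ['--savedModel', 'xxx', '--learner', 'yyyy']
```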
This does a train-and-test cycle and saves the result to a new database, expt-model.db. The new database includes the trained parameters (as well as all the fixed ones!), so you can try it out with commands like:
python -i -m tensorlog --prog test-data/textcat.ppr --proppr --db expt-model.db ti.debug("predict/io","dh")
More generally, you'll want to configure the parameters for an experiment and run it from Python. Here's one example,
from the datasets/wordnet directory.
I'm using the tensorlog.parseCommandLine method, which is used to parse the command line when you invoke the interpreter, to initialize the program and so on. It returns not the string options from the command line but their interpretations as data structures: e.g., optdict, under the key 'prog', contains a Python object which is an instance of the class tensorlog.Program.
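The exact signature of tensorlog.parseCommandLine isn't shown here; the following is a hypothetical sketch of the pattern it follows, returning interpreted objects rather than raw strings. The `Program` class below is a stand-in for illustration, not tensorlog.Program itself:

```python
# Hypothetical sketch: options come back as interpreted data structures.
# 'Program' is a stand-in class, not the real tensorlog.Program.
class Program:
    def __init__(self, filename):
        # in TensorLog this step would load and compile the rule file
        self.filename = filename

def parse_command_line(argv):
    """Return a dict mapping option names to interpreted values."""
    optdict = {}
    it = iter(argv)
    for arg in it:
        if arg == '--prog':
            optdict['prog'] = Program(next(it))  # an object, not a string
        elif arg == '--proppr':
            optdict['proppr'] = True             # a flag becomes a boolean
    return optdict

opts = parse_command_line(['--prog', 'test-data/textcat.ppr', '--proppr'])
print(type(opts['prog']).__name__)  # Program
print(opts['prog'].filename)        # test-data/textcat.ppr
```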
In version 1.1, there is a parallel version of TensorLog. To use it, configure one of the parallel learners in plearn.py. These use Python's multiprocessing module (i.e., process-level rather than thread-level parallelism) to get around the global interpreter lock.
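The idea behind process-level parallelism can be illustrated with a minimal, self-contained sketch (not plearn's actual code): each worker process has its own interpreter, so CPU-bound work is not serialized by the GIL the way it would be with threads.

```python
# Minimal illustration of process-level parallelism with multiprocessing.
# partial_gradient is a stand-in for computing a gradient on one minibatch.
from multiprocessing import Pool

def partial_gradient(batch):
    # stand-in for a CPU-bound per-minibatch computation
    return sum(x * x for x in batch)

if __name__ == '__main__':
    batches = [[1, 2], [3, 4], [5, 6]]
    with Pool(processes=3) as pool:
        # each batch is processed in a separate worker process
        grads = pool.map(partial_gradient, batches)
    print(sum(grads))  # 91
```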