modify .gitignore and readme
Zardinality committed Jun 10, 2017
1 parent 35ec75b commit 9b44620
Showing 2 changed files with 12 additions and 6 deletions.
6 changes: 6 additions & 0 deletions .gitignore
@@ -99,3 +99,9 @@ ENV/

# mypy
.mypy_cache/

# apple file
.DS_Store

# vscode
.vscode
12 changes: 6 additions & 6 deletions README.md
@@ -6,12 +6,10 @@ This is a repository for a [Deformable Convolution](https://arxiv.org/abs/1703.0

TensorFlow (with GPU configured)

cuda 8.0
Cuda 8.0

g++ 4.9.2



*Note*: Only tested on a platform with the corresponding versions of g++ and CUDA installed. Other versions will generally be fine, but you may need to modify the compile script.

## Usage
@@ -21,8 +19,6 @@ g++ 4.9.2
3. `import lib.deform_conv_op as deform_conv_op` in your Python script (make sure PYTHON_PATH was set correctly). A minimal sketch follows below.




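For step 3, here is a minimal sketch of making the module importable and setting up inputs. The clone path and tensor shapes are assumptions for illustration only, and the deformable op's actual name and argument order are defined by the op registration in `lib/`, so the call itself is only indicated as a placeholder rather than presented as the real API.

```python
import sys
sys.path.append("/path/to/this/repo")  # hypothetical clone location; equivalently set PYTHON_PATH

import tensorflow as tf
import lib.deform_conv_op as deform_conv_op  # compiled custom op from the build step

# Illustrative NCHW shapes only: a deformable conv consumes the input feature map
# plus an offset field carrying 2 values (dy, dx) per kernel tap per output position.
data = tf.placeholder(tf.float32, [1, 3, 224, 224])
offset = tf.placeholder(tf.float32, [1, 2 * 3 * 3, 224, 224])  # for a 3x3 kernel

# The exact op name and argument order live in the op registration under lib/,
# so the call is left as a placeholder here:
# out = deform_conv_op.<registered_op>(data, kernel, offset, ...)
```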
## TODO

- [x] Basic test with original implementation.
@@ -32,4 +28,8 @@ g++ 4.9.2

- [ ] Some demo and visualization.
- [ ] Backward time costs too much.
- [ ] Other ops.

## Benchmark

The benchmark script was borrowed from [here](https://github.com/soumith/convnet-benchmarks/blob/master/tensorflow/benchmark_alexnet.py). The forward time is fine: for 100x3x224x224 data it runs in about 0.077s. The backward time is generally undesirable, though; it costs 0.558s to run a batch of the same data. Note that I write the backward pass for all three inputs (data, offset, kernels) together, rather than splitting input_backwards and kernel_backwards into two ops as many TensorFlow conv ops do, so this might be one reason. In addition, because I sometimes find it hard to manipulate `tensorflow::Tensor`, I wrote a simple CUDA kernel that does nothing but add one tensor to another, used to accumulate gradients along the batch in the kernel-gradient implementation; I don't know whether this affects performance.
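For reference, here is a sketch of the kind of timing loop described above, written against TF 1.x with a plain `tf.nn.conv2d` standing in for the deformable op (whose Python call signature is not shown in this README); the shapes match the 100x3x224x224 batch mentioned above, and the measured numbers will of course differ.

```python
import time
import numpy as np
import tensorflow as tf

# Same batch as the benchmark above, NCHW layout (GPU required for conv2d in this format).
data = tf.constant(np.random.rand(100, 3, 224, 224).astype(np.float32))
kernel = tf.get_variable("kernel", [3, 3, 3, 64], dtype=tf.float32)

# Stand-in op: swap in the deformable conv from lib.deform_conv_op to benchmark it.
out = tf.nn.conv2d(data, kernel, strides=[1, 1, 1, 1], padding="SAME", data_format="NCHW")
grads = tf.gradients(tf.reduce_sum(out), [data, kernel])  # backward w.r.t. all inputs

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(out)  # warm-up run to exclude graph/CUDA initialization
    start = time.time()
    sess.run(out)
    print("forward: %.3fs" % (time.time() - start))
    start = time.time()
    sess.run(grads)  # runs forward + backward
    print("forward+backward: %.3fs" % (time.time() - start))
```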
