Future Works
This section contains the work that has been planned for the future / post-GSoC period. Some of this work is here because I couldn't complete it during the Google Summer of Code period, while some is here because of technical difficulties in YAP. The complete work can be monitored in the kanban board here.
Assignee coder3101 , bassoy , amitsingh19975
Priority High
Description
All Boost libraries keep their detail implementation in a separate directory called detail. This keeps things organized and also lets users know that headers under the detail directory should not be included directly. Our current tensor source does not have any such directory; everything is kept in one place, and our detail code lives only in a separate nested namespace. Because of this, it becomes very difficult to maintain the library and add features to it.
Resulting Directory Tree
include/
    boost/
        numeric/
            ublas/
                ..
                tensor/
                    detail/
                        ..
                    ..
                ..
            ..
Link to the Issue is here
Priority Low
Description
The new tensor expression that uses YAP is copyable and movable, which allows us to reliably move and copy the expression and therefore use it as a function argument. Thanks to these properties we were able to add utilities such as ublas::apply and the expression_optimizer. These utilities cannot be used with ublas::expressions because they can't be copied or moved. In order to achieve full interoperability between the tensor and ublas::expression, we need to make the latter copyable and movable as well.
Example
auto m1 = ublas::matrix<int>(5, 5, 2);
auto m2 = ublas::matrix<int>(5, 5, 1);
auto expr = 2 * m1 + 3 * m2; // At this moment expr holds garbage:
                             // there is no way to copy-construct ublas expressions.
ublas::matrix<int> ans = expr;
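To make the motivation concrete, here is a minimal sketch, using toy types rather than the real ublas expression classes, of why copy and move support matters: any generic utility (such as ublas::apply or the expression_optimizer) that takes an expression by value or returns one needs the expression type to be copyable or movable.

#include <cstddef>
#include <iostream>

// A minimal sketch (not the actual ublas types) of why copy/move support
// matters for expression templates.
struct ones {                       // trivially copyable "terminal"
    double operator()(std::size_t) const { return 1.0; }
};

template <class L, class R>
struct add_expr {                   // stores its operands by value
    L lhs;
    R rhs;
    double operator()(std::size_t i) const { return lhs(i) + rhs(i); }
};

template <class Expr>
Expr optimize(Expr expr) {          // pass-by-value requires copy, return requires move
    return expr;
}

int main() {
    add_expr<ones, ones> expr{ones{}, ones{}};
    auto opt = optimize(expr);      // fine here; fails for non-copyable expressions
    std::cout << opt(0) << "\n";    // prints 2
}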
Link to the Issue is here
Assignee coder3101, amitsingh19975
Priority High
Description
Current tensor evaluation is carried out using an OpenMP parallel for loop. In our proposal we presented the idea of adding a device-based execution policy, just like Eigen. Due to some issues with the tensor-expression optimizer, we failed to achieve this goal, so this feature has now been moved to the future work section. We will both work on it together; it also involves adding documentation and tests.
Example
For demonstration purposes only; interfaces may change.
auto some_expr = m1 + t2 * 3;
tensor_type ans = some_expr.via(ublas::device::gpu{});    // run on the GPU
tensor_type res = some_expr.via(ublas::device::cpu<4>{}); // run on the CPU with 4 threads
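A speculative sketch of how such device tags could be dispatched is shown below. The names device::gpu and device::cpu<N> and the via() member mirror the demonstration above but are not a final interface; the bodies are serial stand-ins for the eventual OpenMP loop and GPU kernel.

#include <cstddef>
#include <vector>

namespace device {
struct gpu {};
template <int NumThreads> struct cpu {};
}

// A toy lazy expression: element-wise sum of two vectors.
struct add_expr {
    const std::vector<double>& a;
    const std::vector<double>& b;

    // CPU policy: would run an OpenMP parallel loop with N threads.
    template <int N>
    std::vector<double> via(device::cpu<N>) const {
        std::vector<double> out(a.size());
        // #pragma omp parallel for num_threads(N)
        for (std::size_t i = 0; i < a.size(); ++i) out[i] = a[i] + b[i];
        return out;
    }

    // GPU policy: would launch a kernel; stubbed as a serial loop here.
    std::vector<double> via(device::gpu) const {
        std::vector<double> out(a.size());
        for (std::size_t i = 0; i < a.size(); ++i) out[i] = a[i] + b[i];
        return out;
    }
};

int main() {
    std::vector<double> x(8, 1.0), y(8, 2.0);
    add_expr expr{x, y};
    auto on_cpu = expr.via(device::cpu<4>{}); // pick 4 CPU threads
    auto on_gpu = expr.via(device::gpu{});    // pick GPU execution
    (void)on_cpu; (void)on_gpu;
}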
Link to the Issue is here
Assignee coder3101
Priority High
Description
Expression optimization could not be done at compile time for dynamic tensors. We now use std::variant to optimize the expression at runtime. This is an experimental feature, and more information can be found on a dedicated page (link to be added). Once we find that this implementation is beneficial, we will add it to the main branch.
Optimization Properties
// Scalars
a + a = 2*a
5*a + 3*a = 8*a
8*a - 2*a + 6*b = 6*a + 6*b
// Distribution
a*b + a*c = a*(b+c)
b*f - g*f = f*(b-g)
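To illustrate the first group of rules, here is a small, self-contained toy that folds scalar coefficients at runtime. It is only a sketch: the experimental implementation operates on std::variant-based expression nodes, whereas this toy works on a flat map from symbols to coefficients.

#include <iostream>
#include <map>
#include <string>

// Simplified runtime folding of linear terms such as 5*a + 3*a -> 8*a.
struct linear_expr {
    std::map<std::string, double> coeff; // symbol -> accumulated coefficient

    linear_expr& add(const std::string& sym, double c) {
        coeff[sym] += c;                 // a + a -> 2*a, 8*a - 2*a -> 6*a
        return *this;
    }
    void print() const {
        bool first = true;
        for (const auto& [sym, c] : coeff) {
            if (c == 0) continue;
            std::cout << (first ? "" : " + ") << c << "*" << sym;
            first = false;
        }
        std::cout << "\n";
    }
};

int main() {
    linear_expr e;
    e.add("a", 8).add("a", -2).add("b", 6); // 8*a - 2*a + 6*b
    e.print();                              // prints 6*a + 6*b
}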
Assignee coder3101
Priority Normal
Description
Currently we are able to build up expressions representing Einstein notation, but we cannot evaluate those lazy expressions. We plan to make these expressions evaluable once we understand how such operations are performed for contractions involving multiple tensors.
Example
tensor_type ans = t1(_i, _j, _k) * t2(_i, _x, _y) * t3(_x, _k);
// This should evaluate into a tensor.
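As a rough illustration of the intended semantics (not the planned implementation), the expression above should reduce to a nested summation over the repeated indices i, x and k, leaving j and y as free indices: ans(j, y) = sum over i, x, k of t1(i, j, k) * t2(i, x, y) * t3(x, k). The toy code below spells this out with plain nested vectors and made-up dimensions instead of ublas tensors.

#include <iostream>
#include <vector>

using vec  = std::vector<double>;
using mat  = std::vector<vec>;
using cube = std::vector<mat>;

int main() {
    const int I = 2, J = 3, K = 4, X = 2, Y = 3;  // made-up dimensions
    cube t1(I, mat(J, vec(K, 1.0)));
    cube t2(I, mat(X, vec(Y, 1.0)));
    mat  t3(X, vec(K, 1.0));

    mat ans(J, vec(Y, 0.0));
    for (int j = 0; j < J; ++j)
        for (int y = 0; y < Y; ++y)
            for (int i = 0; i < I; ++i)           // repeated indices are summed
                for (int x = 0; x < X; ++x)
                    for (int k = 0; k < K; ++k)
                        ans[j][y] += t1[i][j][k] * t2[i][x][y] * t3[x][k];

    std::cout << ans[0][0] << "\n";               // I*X*K = 16 for all-ones input
}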
Priority Low
Description
We are able to build lazy tensor contractions involving multiple tensors; now we wish to evaluate the contraction so that the minimum possible number of operations is performed during evaluation. This can be done using dynamic programming, and we wish to add this subroutine to the main branch later this year.
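For reference, here is a minimal sketch of the standard interval dynamic program for the matrix-chain case, which is a special case of contraction ordering; the tensor version generalizes the cost term but keeps the same structure.

#include <iostream>
#include <limits>
#include <vector>

// Classic matrix-chain-ordering DP: dims has n+1 entries and matrix i has
// shape dims[i] x dims[i+1]. Returns the minimum number of scalar multiplications.
long long min_scalar_ops(const std::vector<long long>& dims) {
    const std::size_t n = dims.size() - 1;            // number of matrices
    std::vector<std::vector<long long>> cost(n, std::vector<long long>(n, 0));
    for (std::size_t len = 2; len <= n; ++len) {       // chain length
        for (std::size_t i = 0; i + len - 1 < n; ++i) {
            std::size_t j = i + len - 1;
            cost[i][j] = std::numeric_limits<long long>::max();
            for (std::size_t k = i; k < j; ++k) {       // split point
                long long c = cost[i][k] + cost[k + 1][j]
                            + dims[i] * dims[k + 1] * dims[j + 1];
                if (c < cost[i][j]) cost[i][j] = c;
            }
        }
    }
    return cost[0][n - 1];
}

int main() {
    // (10x30) * (30x5) * (5x60): the best order costs 4500 multiplications.
    std::cout << min_scalar_ops({10, 30, 5, 60}) << "\n";
}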
Assignee amitsingh19975
Priority Low
Description
Right now slices are provided using the structs boost::numeric::ublas::span::basic_slice<...> and boost::numeric::ublas::span::slice<...>, which are tedious to write. A string-based slice would reduce verbosity and complexity, and it is also friendlier and easier to use.
auto t = tensor{dynamic_extents<>{10,10},1.f};
auto s = t(" :5; : : 4 ");
Assignee amitsingh19975
Priority High
Description
The current implementation of subtensor can only be used to create a subtensor from a tensor; it still needs to support the features of the tensor library, such as tensor contraction. Due to time constraints, I was unable to do this.
Assignee amitsingh19975
Priority High
Description
Due to the current tensor's expression templates, I was unable to fully integrate these features, which deprives them of overloaded operators between different tensor types. For example, you cannot add two tensors that have the same extents but different extents types.
Assignee amitsingh19975
Priority Medium
Description
The current implementation is not fully integrated into the tensor because of the structure of the tensor library, the way tensor contraction is done, and the same problem that affects static extents and static strides.
Assignee amitsingh19975
Priority Medium
Description
Slices in Python and other languages support a negative step, which makes reverse iteration possible. Due to time constraints, I was unable to implement this feature for the tensor; I hope the tensor will support it soon.
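For reference, here is a small stand-alone illustration of the negative-step semantics we are aiming for, written against a plain std::vector rather than the tensor.

#include <iostream>
#include <vector>

int main() {
    std::vector<int> a{0, 1, 2, 3, 4};
    long first = 4, last = -1, step = -1;          // like Python's a[4::-1]
    for (long i = first; i > last; i += step)
        std::cout << a[i] << ' ';
    std::cout << '\n';                             // prints: 4 3 2 1 0
}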
Assignee coder3101
Priority Medium
Description
As per our mentor's request, the tensor expression should be hidden from the user. This is possible if we make the tensor expression's copy constructor private, so that expressions are treated as tensors (i.e., evaluated) instead of remaining unevaluated.
Example
auto result = t1 + t2; // t1 and t2 are tensors
static_assert(ublas::is_tensor<decltype(result)>::value);            // at this moment it fails
static_assert(ublas::is_tensor_expression<decltype(result)>::value); // at this moment it passes
Link to the issue is here
We would both like to thank our mentor Cem for his constant support and help in achieving our goals. We always found him helpful, and he was always easy to reach for help or discussion regarding the work. We would also like to thank Google for the Google Summer of Code programme, without which none of this would have been possible. Lastly, we express our gratitude to our parents for providing us, directly and indirectly, with everything we needed to carry out our work from home.