Introduction to static strides and static extents for static Tensor
Introducing static strides and static extents is one step closer to a static Tensor, which will make Tensor more efficient.

First, Cem Bassoy and I decided to integrate Kokkos mdspan, but the problem with that implementation is that it was designed with arrays in mind: you cannot specify at compile time how many dimensions you require; instead you have to hard-code the extents, just as you do with arrays. It also contained a few functions meant for array manipulation which I had to remove, so it was very difficult to integrate with Tensor. To get rid of these problems I implemented the whole of static strides and static extents from the ground up with tensors in mind, and refactored some parts of the existing code.
Problems with Kokkos mdspan
- You cannot create extents of a given rank without listing every extent, which makes constructing extents with static rank but dynamic values impossible to express concisely, e.g. `extents<2,3,4,5>`, `extents<2,dynamic_extent,4,5>`. `extents<5>` does not mean "extents of rank 5"; the only way to get rank-5 extents in mdspan is `extents<dynamic_extent, dynamic_extent, dynamic_extent, dynamic_extent, dynamic_extent>`, which is very tedious: if you want extents of rank 100, you have to repeat `dynamic_extent` a hundred times.
- A few unnecessary functions and data members that are related to arrays or unrelated to tensors, e.g. `extent<...>(T*)`, `struct mdspan_prop`, etc.
- Unnecessary code unrelated to tensors, such as the property machinery, e.g. `struct slices_impl`, `struct mdspan_prop`, etc.
How I mitigated the Problems
- For the problem of static rank with dynamic extents, I created a `shape_helper` similar to `std::index_sequence`, which generates the sequence and fills `basic_extents_impl` with a repeated sequence of `-1` of the given size, so it is now possible to specify the rank at compile time:

```cpp
auto s1 = make_basic_shape_t<5>{};       // basic_shape<-1,-1,-1,-1,-1>
auto s2 = make_basic_shape_t<3,1,2,3>{}; // basic_shape<1,2,3>
```
You can still use mdspan's way of creating extents, for example:

```cpp
auto e1 = basic_static_extents<size_t,4, dynamic_extent, dynamic_extent, dynamic_extent, dynamic_extent>{};
auto e2 = basic_static_extents<size_t,4>{};
auto e3 = dynamic_extents<4>{}; // all three are equivalent
```

An advantage of this implementation is that there is size and rank checking, as well as checking when initialising extents.
- The second problem was easily mitigated: I removed the unused methods and data members.
- For the third problem, I kept only `extents_impl`, `layout_right`, and `layout_left`, but modified them according to the needs of tensor.
- Removed the following member functions from the current extents and made them free functions: `valid`, `is_scalar`, `is_vector`, `is_matrix`, `is_tensor`, `squeeze`, and `product`. So if you want to use them now, you write `your_function(extents)`, e.g. `is_scalar(/*extents*/)`.
- Added support for `static_extents` and `static_strides` in `functions.hpp` and a few other headers.
- A few new headers:
  - `static_extents.hpp`
  - `static_strides.hpp`
  - `dynamic_extents.hpp`: contains `basic_extents`
  - `dynamic_strides.hpp`: contains `basic_strides`
  - `extents_functions.hpp`
  - `extents_helper.hpp`
  - `shape_helper.hpp`

If you want to use both static and dynamic extents, use the header `extents.hpp`; similarly, if you want to use both static and dynamic strides, use the header `strides.hpp`.
`basic_extents_impl` lives in the `detail` namespace of ublas; it is the engine for static extents, as it contains all the logic and decides how to create the extents from a given parameter pack:

```cpp
template <ptrdiff_t R, ptrdiff_t... E>
using extents = boost::numeric::ublas::detail::basic_extents_impl<0, boost::numeric::ublas::detail::make_basic_shape_t<R, E...>>;

auto e1 = extents<2,1,2>{};
auto e2 = extents<3,1,3,1>{};
auto e3 = extents<4,1,4,1,1>{};
auto e4 = extents<5,5,1,1,1,1>{};
auto e5 = extents<6>{6,1,1,1,1,1};
```
Extents can be created at compile time, at run time, or a mix of both, but always with a compile-time rank, so you cannot change the rank afterwards; if you want a dynamic rank, choose `basic_extents`.

```cpp
auto e1 = static_extents <1,2>{};
auto e2 = dynamic_extents<3>{1,3,1};
auto e3 = dynamic_extents<>{1,4,1,1};
auto e4 = static_extents <5,1,1,1,1>{};
auto e5 = static_extents<6>{6,1,1,1,1,1}; // equivalent to dynamic_extents<6>{6,1,1,1,1,1}
```
Strides can be created at compile time using `static_extents`:

```cpp
using e1 = static_extents<1,2>;
using e2 = dynamic_extents<3>;
using e3 = dynamic_extents<>;
using e4 = static_extents <5,1,1,1,1>;
using e5 = static_extents<6>; // dynamic_extents<6> is equivalent

auto s1 = stride_t<e1,first_order>{};                          // static_strides
auto s2 = stride_t<e2,first_order>{1,3,1};                     // static_strides
auto s3 = stride_t<e3,last_order>{dynamic_extents<>{1,4,1,1}}; // dynamic strides of type basic_strides
auto s4 = stride_t<e4,last_order>{};                           // static_strides
auto s5 = stride_t<e5,last_order>{6,1,1,1,1,1};                // static_strides
```
The free functions work uniformly on both static and dynamic extents:

```cpp
using namespace boost::numeric::ublas;

auto s_e = static_extents<1,2,3,4,5>{};
auto d_e = dynamic_extents<>{1,2,3,4,5};

auto v1 = valid(s_e);
auto v2 = valid(d_e);
auto s1 = to_string(s_e);
auto s2 = to_string(d_e);
auto sc1 = is_scalar(s_e);
auto sc2 = is_scalar(d_e);
auto vc1 = is_vector(s_e);
auto vc2 = is_vector(d_e);
auto m1 = is_matrix(s_e);
auto m2 = is_matrix(d_e);
```
```cpp
/**
 * ....
 * @tparam E type of basic_extents or static_extents, defaults to shape<dynamic_rank>
 * ....
 */
template<class T, class E, class F, class A>
class tensor;

// tensor with dynamic extents and dynamic strides
auto t1 = tensor<int>{};
// tensor with static rank but dynamic extents and static_strides
auto t2 = tensor<int,dynamic_extents<4>>{dynamic_extents<4>{1,2,3,4}};
// same as above but with a deduction guide
auto t3 = tensor{dynamic_extents<4>{1,2,3,4},4};
// tensor with static rank, static extents and static strides
auto t4 = tensor<int,static_extents<1,2,3,4>>{};
// if you don't pass a second argument, the value type will be float, again using a deduction guide
auto t5 = tensor{static_extents<1,2,3,4>{}};
```
We would both like to thank our mentor Cem for his constant support and help in achieving our goals. He was always helpful and easy to reach for discussion regarding the work. We would also like to thank Google for the Google Summer of Code programme, without which none of this would have been possible. Lastly, we express our gratitude to our parents for directly and indirectly providing everything we needed to carry out our work from home.