Support sparse output for sandwich products #17

Open
ElizabethSantorellaQC opened this issue Jul 13, 2020 · 4 comments
Labels
enhancement (New feature or request), on hold

Comments

@ElizabethSantorellaQC
Contributor

There are situations where we can expect ahead of time that the output of a sandwich product will be sparse:

  • If the data is dominated by one very high-dimensional categorical. For example, say there are D dense columns and one categorical expanded into M indicator columns. The fraction of nonzeros in the resulting sandwich product is at most (D^2 + 2DM + M) / (D + M)^2, so if M >> D the result will be quite sparse (see the sketch after this list). This could happen in an e-commerce pricing context, where the features are a very high-dimensional categorical product ID plus a small number of scalar features.
  • Perhaps we have computed x.sandwich(d1) and now we want to know x.sandwich(d2). The latter will have the same sparsity pattern as the former.
  • We can analytically upper-bound the number of nonzero elements in a sandwich product of categoricals using the number of rows. If the data is made up mainly of categoricals and is very wide relative to its length, its sandwich product will be fairly sparse. This would be true in, for example, the German employer-employee data set used in the AKM papers: since a typical worker will not have worked for a randomly chosen firm, the sandwich product would be very sparse.
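
A quick back-of-the-envelope check of the nonzero bound in the first bullet (a sketch with made-up numbers, not part of the library):

# Upper bound on the fraction of nonzeros when sandwiching D dense
# columns plus one categorical expanded into M indicator columns:
# D^2 (dense block) + 2*D*M (two mixed blocks) + M (diagonal
# categorical-categorical block) nonzeros out of (D + M)^2 entries.
def nnz_fraction_upper_bound(D: int, M: int) -> float:
    return (D**2 + 2 * D * M + M) / (D + M) ** 2

# e.g. 5 scalar features and a 10,000-level product ID:
print(nnz_fraction_upper_bound(5, 10_000))  # ~0.0011, i.e. at most ~0.1% of entries nonzero
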
@zmbc

zmbc commented Nov 1, 2024

From @stanmart in Quantco/glum#543 (comment):

as a first step I would not worry about using heuristics to try to guess whether the hessian will be sparse, but just rely on the user telling glum whether it should be

Actually, what might make even more sense is to have a class like SplitMatrix that allows the matrix to be split along both rows and columns, and continue to store parts of the sandwich product as sparse and other parts as dense. I don't know enough of the math to know what downstream effects this would have, however.

In any case, it seems like the very first step would be to make the categorical-categorical operation here use a sparse matrix. This would involve the C++ implementation here:

<%def name="sandwich_cat_cat(type)">
template <typename Int, typename F>
void _sandwich_cat_cat_${type}(
    F* d,
    const Int* i_indices,
    const Int* j_indices,
    Int* rows,
    Int len_rows,
    F* res,
    Int res_n_col,
    Int res_size
    % if type == 'complex':
    , bool i_drop_first
    , bool j_drop_first
    % endif
)
{
    #pragma omp parallel
    {
        std::vector<F> restemp(res_size, 0.0);
        #pragma omp for
        for (Py_ssize_t k_idx = 0; k_idx < len_rows; k_idx++) {
            Int k = rows[k_idx];
            % if type == 'complex':
            Int i = i_indices[k] - i_drop_first;
            if (i < 0) {
                continue;
            }
            % else:
            Int i = i_indices[k];
            % endif
            % if type == 'complex':
            Int j = j_indices[k] - j_drop_first;
            if (j < 0) {
                continue;
            }
            % else:
            Int j = j_indices[k];
            % endif
            restemp[(Py_ssize_t) i * res_n_col + j] += d[k];
        }
        for (Py_ssize_t i = 0; i < res_size; i++) {
            #pragma omp atomic
            res[i] += restemp[i];
        }
    }
}
</%def>
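
For illustration only (this is not part of the tabmat code base), a rough Python/SciPy sketch of what a sparse-output version of this accumulation could look like:

import numpy as np
import scipy.sparse as sps

# Hypothetical sketch: the same accumulation as the C++ loop above, but
# collecting COO triplets instead of writing into a dense res buffer.
# i_indices/j_indices are the level codes of the two categoricals, d is
# the diagonal weight vector, rows is the active row subset
# (drop_first handling omitted for brevity).
def sandwich_cat_cat_sparse(d, i_indices, j_indices, rows, n_cat_i, n_cat_j):
    i = np.asarray(i_indices)[rows]
    j = np.asarray(j_indices)[rows]
    vals = np.asarray(d)[rows]
    # coo_matrix sums duplicate (i, j) entries on conversion, which
    # plays the role of restemp[i * res_n_col + j] += d[k] above.
    return sps.coo_matrix((vals, (i, j)), shape=(n_cat_i, n_cat_j)).tocsr()
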

Unfortunately I'm not very familiar with C++. Does SciPy have a C++ API for sparse matrices?

@stanmart
Collaborator

stanmart commented Nov 5, 2024

Actually, what might make even more sense is to have a class like SplitMatrix that allows the matrix to be split along both rows and columns

Creating a matrix class that can be split along both dimensions is a great end goal, but seems very complex. Having the option to return a fully sparse matrix would already be super helpful, and a great first step IMO.

Does SciPy have a C++ API for sparse matrices?

I'm not sure. What should work, though, is the fact that in SciPy, sparse matrices are basically a collection of 1-D vectors representing the data. We could construct those vectors in C++ using the NumPy API and then assemble them into a sparse matrix in Python using regular SciPy functions. I think it should be quite performant.

First we should figure out which kind of sparse matrix format would be the best fit for a sandwich product. After that, you could look into how those matrices are represented, and try to construct the necessary vectors. Do you think this makes sense?
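
As a concrete (and purely illustrative) example of the "build 1-D vectors in C++, wrap them in Python" idea: a SciPy CSR matrix is fully described by three 1-D arrays, so a C++ kernel could return those and Python would only need to wrap them.

import numpy as np
import scipy.sparse as sps

# Pretend these arrays came back from a C++ routine via the NumPy C API.
data = np.array([1.5, 2.0, 0.5])   # nonzero values
indices = np.array([0, 2, 1])      # column index of each value
indptr = np.array([0, 2, 2, 3])    # row k spans data[indptr[k]:indptr[k+1]]

# Wrapping them in a CSR matrix is a cheap operation.
res = sps.csr_matrix((data, indices, indptr), shape=(3, 3))
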

@zmbc

zmbc commented Nov 6, 2024

We could construct these vectors in C++ using the numpy API

Isn't this difficult, because the length of the vectors is not known upfront? What size would we make the numpy arrays?

@stanmart
Collaborator

stanmart commented Nov 8, 2024

Yeah that's a good point. So we will need to make a guess at the density of the sandwich product at some point.
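
One way that guess could look for the categorical-categorical case (just a sketch; the actual sizing rule is still open): each active row touches exactly one (i, j) cell, so the number of nonzeros is bounded by the number of rows and by the full block size, and a user-supplied density guess only needs to shrink that bound.

# Sketch of a possible sizing rule (an assumption, not an agreed design).
def guess_coo_buffer_size(len_rows, n_cat_i, n_cat_j, density_guess=1.0):
    hard_bound = min(len_rows, n_cat_i * n_cat_j)
    return max(1, int(hard_bound * density_guess))

# The C++ kernel could allocate index/value buffers of this length,
# report how many slots it actually filled, and Python would trim the
# arrays before building the scipy.sparse matrix.
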
