The pykernsformer module extends the torch.nn.TransformerEncoderLayer class to support custom attention formulas.
You can install the pykernsformer package using pip:

```
pip install pykernsformer
```
pykernsformer comes with the following built-in attention kernels:
| pykernsformer | Attention | Formula | Citation |
|---|---|---|---|
| `attention` | Regular | $\mathrm{softmax}\left(\frac{QK^\top}{\sqrt{d_k}}\right)V$ | Vaswani et al. |
| `attention_linear` | Linear | | |
| `attention_periodic` | Periodic | | |
| `attention_LP` | Locally Periodic | | |
| `attention_RQ` | Rational Quadratic | | |
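As a usage sketch only (not confirmed by this README): assuming the package exposes these kernels as module-level functions and that its extended `TransformerEncoderLayer` mirrors the `torch.nn` constructor while accepting the kernel as an extra `attention` argument, wiring one in might look like this. The `attention` keyword and the export paths are assumptions; check the package documentation for the actual signature.

```python
import torch
import pykernsformer

# Hypothetical wiring: the `attention` keyword argument and the
# `pykernsformer.attention_linear` export path are assumptions based on
# the table above, not documented signatures.
encoder_layer = pykernsformer.TransformerEncoderLayer(
    d_model=512,
    nhead=8,
    attention=pykernsformer.attention_linear,
)

src = torch.randn(10, 32, 512)  # (seq_len, batch, d_model), as in torch.nn
out = encoder_layer(src)
```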
You can also implement your own attention function with the following signature:
```python
def attention_custom(query, key, value, mask=None, dropout=None):

    [...]

    p_attn = [...]  # the attention matrix

    [...]
    return torch.matmul(p_attn, value), p_attn
```
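For concreteness, here is a minimal sketch that fills in this template with the regular scaled dot-product attention from the table above. The function name `attention_scaled_dot` is illustrative, not part of the package:

```python
import math

import torch
import torch.nn.functional as F

def attention_scaled_dot(query, key, value, mask=None, dropout=None):
    # Scaled dot-product scores: QK^T / sqrt(d_k)
    d_k = query.size(-1)
    scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(d_k)
    if mask is not None:
        # Block masked positions before normalizing
        scores = scores.masked_fill(mask == 0, float("-inf"))
    p_attn = F.softmax(scores, dim=-1)  # the attention matrix
    if dropout is not None:
        p_attn = dropout(p_attn)  # dropout is an nn.Dropout module, per the signature
    return torch.matmul(p_attn, value), p_attn

# Quick shape check on toy tensors
q = k = v = torch.randn(2, 8, 16)          # (batch, seq_len, d_k)
out, attn = attention_scaled_dot(q, k, v)
print(out.shape, attn.shape)               # torch.Size([2, 8, 16]) torch.Size([2, 8, 8])
```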