Fix Advanced Activation Layers Bugs and Create New Test #16

Open

lucaskdc wants to merge 1 commit into master
Conversation

@lucaskdc commented Feb 3, 2023

The advanced activation tests did not cover the case where these activation layers are used as hidden layers. As a result, the behavior of handling other layers' weights and of taking output tensors as their input was never tested. This commit adds such a test and fixes the bugs it uncovered.

@lucaskdc (author) commented Feb 3, 2023

Bug 1, detected by the test and fixed:

_write_weights_ThresholdedReLU was overwriting stack_vars instead of concatenating its new vars onto it.
As a result, leaky_re_lu_alpha doesn't exist when this new test is run without the weights2c.py changes, as the generated code below shows.
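To make the failure mode concrete, here is a minimal standalone C sketch (the actual fix is a one-line change in the Python file weights2c.py; the buffer name mirrors stack_vars from the description above, and the declaration strings and sizes are invented for illustration). Each layer writer is supposed to append its parameter declarations to a shared buffer, so overwriting it drops everything the earlier layers wrote:

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    /* stack_vars collects the parameter declarations each layer writer
     * emits into the generated test function. */
    char stack_vars[512] = "";

    strcat(stack_vars, "float leaky_re_lu_alpha = 0.3f;\n");   /* LeakyReLU writer appends */
    strcat(stack_vars, "float elu_alpha = 1.0f;\n");           /* ELU writer appends */

    /* Bug: the ThresholdedReLU writer overwrites the buffer instead of
     * appending, so leaky_re_lu_alpha and elu_alpha never reach the
     * generated file -- exactly the undeclared variables seen below. */
    strcpy(stack_vars, "float thresholded_re_lu_theta = 0.3f;\n");
    printf("buggy stack_vars:\n%s\n", stack_vars);

    /* Fix: concatenate, like every other writer does. */
    stack_vars[0] = '\0';
    strcat(stack_vars, "float leaky_re_lu_alpha = 0.3f;\n");
    strcat(stack_vars, "float elu_alpha = 1.0f;\n");
    strcat(stack_vars, "float thresholded_re_lu_theta = 0.3f;\n");
    printf("fixed stack_vars:\n%s", stack_vars);
    return 0;
}
```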

Bug 2, detected by the test and fixed: set the correct input.

Because a reference to the k2c_tensor is usually what gets passed, an ampersand prefix was added to those variables unconditionally; the advanced activations instead take the raw .array and .numel members, so the extra ampersand fails to pass the correct array.
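Here is a self-contained C sketch of why the stray ampersand breaks the call. The struct and activation below are simplified stand-ins with signatures inferred from the generated code further down, not copied from keras2c's headers:

```c
#include <stddef.h>
#include <stdio.h>

/* Simplified stand-in for k2c_tensor: just the members the calls use. */
typedef struct { float *array; size_t numel; } tensor_t;

/* The advanced activations act in place on the raw array, so they take
 * a float* and an element count, not a tensor struct. */
static void leaky_relu(float *x, const size_t size, const float alpha) {
    for (size_t i = 0; i < size; ++i)
        if (x[i] < 0.0f) x[i] = alpha * x[i];
}

int main(void) {
    float data[4] = {-2.0f, -1.0f, 1.0f, 2.0f};
    tensor_t t = {data, 4};

    /* Buggy generated call: '&t.array' is a float** and '&t.numel' is a
     * size_t*, so the compiler rejects it and the correct array is never
     * handed over:
     *
     *     leaky_relu(&t.array, &t.numel, 0.3f);
     *
     * Correct call: pass the members themselves. */
    leaky_relu(t.array, t.numel, 0.3f);

    for (size_t i = 0; i < t.numel; ++i)
        printf("%g ", t.array[i]);
    printf("\n");
    return 0;
}
```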

This is the code generated by the test before the changes:

```c
#include <math.h>
#include <string.h>
#include "./include/k2c_include.h"
#include "./include/k2c_tensor_include.h"

void test___AdvancedActivationLayers_NonInputLayers1675441953(k2c_tensor* input_1_input, k2c_tensor* re_lu_output) {

float thresholded_re_lu_theta = 0.30000001192092896;

float re_lu_max_value = 1.0;
float re_lu_negative_slope = 1.0;
float re_lu_threshold = 0.3;

k2c_LeakyReLU(input_1_input->array,input_1_input->numel,leaky_re_lu_alpha);
k2c_tensor leaky_re_lu_output;
leaky_re_lu_output.ndim = input_1_input->ndim; // copy data into output struct
leaky_re_lu_output.numel = input_1_input->numel;
memcpy(leaky_re_lu_output.shape,input_1_input->shape,K2C_MAX_NDIM*sizeof(size_t));
leaky_re_lu_output.array = &input_1_input->array[0]; // rename for clarity
k2c_LeakyReLU(&leaky_re_lu_output.array,&leaky_re_lu_output.numel,leaky_re_lu_1_alpha);
k2c_tensor leaky_re_lu_1_output;
leaky_re_lu_1_output.ndim = leaky_re_lu_output.ndim; // copy data into output struct
leaky_re_lu_1_output.numel = leaky_re_lu_output.numel;
memcpy(leaky_re_lu_1_output.shape,leaky_re_lu_output.shape,K2C_MAX_NDIM*sizeof(size_t));
leaky_re_lu_1_output.array = &leaky_re_lu_output.array[0]; // rename for clarity
k2c_PReLU(&leaky_re_lu_1_output.array,&leaky_re_lu_1_output.numel,p_re_lu_alpha.array);
k2c_tensor p_re_lu_output;
p_re_lu_output.ndim = leaky_re_lu_1_output.ndim; // copy data into output struct
p_re_lu_output.numel = leaky_re_lu_1_output.numel;
memcpy(p_re_lu_output.shape,leaky_re_lu_1_output.shape,K2C_MAX_NDIM*sizeof(size_t));
p_re_lu_output.array = &leaky_re_lu_1_output.array[0]; // rename for clarity
k2c_ELU(&p_re_lu_output.array,&p_re_lu_output.numel,elu_alpha);
k2c_tensor elu_output;
elu_output.ndim = p_re_lu_output.ndim; // copy data into output struct
elu_output.numel = p_re_lu_output.numel;
memcpy(elu_output.shape,p_re_lu_output.shape,K2C_MAX_NDIM*sizeof(size_t));
elu_output.array = &p_re_lu_output.array[0]; // rename for clarity
k2c_ThresholdedReLU(&elu_output.array,&elu_output.numel,thresholded_re_lu_theta);
k2c_tensor thresholded_re_lu_output;
thresholded_re_lu_output.ndim = elu_output.ndim; // copy data into output struct
thresholded_re_lu_output.numel = elu_output.numel;
memcpy(thresholded_re_lu_output.shape,elu_output.shape,K2C_MAX_NDIM*sizeof(size_t));
thresholded_re_lu_output.array = &elu_output.array[0]; // rename for clarity
k2c_ReLU(&thresholded_re_lu_output.array,&thresholded_re_lu_output.numel,re_lu_max_value,
re_lu_negative_slope,re_lu_threshold);
re_lu_output->ndim = thresholded_re_lu_output.ndim; // copy data into output struct
re_lu_output->numel = thresholded_re_lu_output.numel;
memcpy(re_lu_output->shape,thresholded_re_lu_output.shape,K2C_MAX_NDIM*sizeof(size_t));
memcpy(re_lu_output->array,thresholded_re_lu_output.array,re_lu_output->numel*sizeof(re_lu_output->array[0]));

}

void test___AdvancedActivationLayers_NonInputLayers1675441953_initialize() {

}

void test___AdvancedActivationLayers_NonInputLayers1675441953_terminate() {

}
```
