OHE doc, Activation functions+doc #102

Open

Wants to merge 30 commits into base: main

Changes from 5 commits

Commits (30):
7053fa7
OHE doc, Activation functions+doc
NandiniGera Feb 11, 2023
56af84e
updated
NandiniGera Feb 12, 2023
5eebc29
updated
NandiniGera Feb 12, 2023
8184ff7
Formatted code using clang-format
NandiniGera Feb 12, 2023
6a0fd9d
resolved conflicts
NandiniGera Feb 12, 2023
4111d84
updated
NandiniGera Feb 12, 2023
5fb3790
Merge branch 'main' of https://github.com/NandiniGera/slowmokit
NandiniGera Feb 12, 2023
5015e46
Update src/slowmokit/methods/activation_functions.hpp
NandiniGera Feb 12, 2023
357ee7b
Update src/slowmokit/methods/activation_functions.hpp
NandiniGera Feb 12, 2023
c26aba7
Update src/slowmokit/methods/activation_functions.cpp
NandiniGera Feb 12, 2023
c7438e6
updated
NandiniGera Feb 12, 2023
9459616
Merge branch 'main' of https://github.com/NandiniGera/slowmokit
NandiniGera Feb 12, 2023
41a1f6e
Formatted code using clang-format
NandiniGera Feb 12, 2023
8f287f1
Formatted code using clang-format
NandiniGera Feb 12, 2023
110d2f6
updated all changes
NandiniGera Feb 12, 2023
b1a9bba
updated all changes
NandiniGera Feb 12, 2023
643a0d6
Merge branch 'main' of https://github.com/NandiniGera/slowmokit
NandiniGera Feb 12, 2023
7c81c58
Formatted code using clang-format
NandiniGera Feb 12, 2023
a5ae5f4
Merge branch 'main' into main
Ishwarendra Feb 14, 2023
3e8af8e
fixed bracket issue
Ishwarendra Feb 14, 2023
eaa4e3a
updated
NandiniGera Feb 15, 2023
a411fdb
Merge remote-tracking branch 'upstream/main'
NandiniGera Feb 15, 2023
2b09f57
Formatted code using clang-format
NandiniGera Feb 15, 2023
ef42bac
updated
NandiniGera Feb 15, 2023
b4d5efd
Merge branch 'main' of https://github.com/NandiniGera/slowmokit
NandiniGera Feb 15, 2023
3f00e98
Formatted code using clang-format
NandiniGera Feb 15, 2023
ee101ce
minor changes
uttammittal02 Feb 15, 2023
4618b53
updated
NandiniGera Feb 16, 2023
46d1d8c
Merge branch 'main' of https://github.com/NandiniGera/slowmokit
NandiniGera Feb 16, 2023
194be70
Merge remote-tracking branch 'upstream/main'
NandiniGera Feb 16, 2023
4 changes: 3 additions & 1 deletion CMakeLists.txt
@@ -60,4 +60,6 @@ add_library(slowmokit
src/slowmokit/methods/metrics/recall.hpp
src/slowmokit/methods/metrics/recall.cpp
src/slowmokit/methods/metrics/mean_squared_error.hpp
src/slowmokit/methods/metrics/mean_squared_error.cpp)
src/slowmokit/methods/metrics/mean_squared_error.cpp
src/slowmokit/methods/activation_functions.cpp
src/slowmokit/methods/activation_functions.hpp)
76 changes: 76 additions & 0 deletions docs/methods/activation_functions.md
@@ -0,0 +1,76 @@
# Activation Functions

sigmoid - Maps any real input to the range (0, 1). It is computationally expensive, suffers from the vanishing-gradient problem, and is not zero-centred. It is generally used for binary classification problems.

tanh - The hyperbolic tangent function, a sigmoid-shaped curve with a range of -1 to 1. It is zero-centred and is often used in deep learning models for its ability to model nonlinear boundaries.

arctan - The inverse of the ordinary trigonometric tan(x) function (not the inverse of tanh). It is a sigmoid-shaped function useful for modelling accelerating and decelerating outputs, with a bounded output range of (-π/2, π/2).
Collaborator: Arctan is not the inverse of tanh; it is the inverse of the ordinary trigonometric tan(x) function.


ReLU - The ReLU activation function returns 0 for any input less than or equal to 0; for any positive input, the output equals the input. It is continuous but non-differentiable at 0, and its derivative is 0 for any negative input.

leakyReLU - Instead of outputting exactly 0 for negative inputs, leaky ReLU applies a small negative-side slope (alpha), so neurons still produce some output and gradient for negative inputs, which keeps more of the layer useful during optimisation.

softmax - A generalisation of the sigmoid to multiple classes, used in multi-class classification problems. Like the sigmoid it produces values in the range 0-1 (and the outputs sum to 1), so it is used as the final layer in classification models.

binaryStep - The step activation function used in the perceptron network. It is usually used in single-layer networks to convert the output to a binary value (0 or 1); hence the name binary step function.
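
For reference, these functions compute the following (x is a scalar, except for softmax, which operates on a vector):

```math
\begin{aligned}
\mathrm{sigmoid}(x) &= \frac{1}{1 + e^{-x}} \\
\tanh(x) &= \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}} \\
\arctan(x) &= \tan^{-1}(x) \\
\mathrm{ReLU}(x) &= \max(0,\, x) \\
\mathrm{leakyReLU}(x, \alpha) &= \begin{cases} x, & x \ge 0 \\ \alpha x, & x < 0 \end{cases} \\
\mathrm{binaryStep}(x) &= \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases} \\
\mathrm{softmax}(x)_i &= \frac{e^{x_i}}{\sum_{j} e^{x_j}}
\end{aligned}
```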



## Parameters

| Name | Definition | Type |
|--------------|--------------------------------------------|--------------|
| x | The double value the activation function is applied to. | `double` |
| x (for `softmax`) | The vector of double values whose softmax is computed. | `vector<double>` |


## Methods

| Name | Definition | Return value |
|----------------------------------------|-----------------------------------------------|---------------|
| y | The double value returned after applying the chosen function to x (`softmax` returns a `vector<double>`). | `double` |
Ishwarendra marked this conversation as resolved.

## Example

```cpp
#include <iostream>
#include <vector>
// Library header added in this PR (adjust the include path to your build setup).
#include "slowmokit/methods/activation_functions.hpp"

int main()
{
  // sigmoid example
  double x = 1.0;
  double y = sigmoid(x);
  std::cout << "sigmoid(" << x << ") = " << y << std::endl;

  // tanh example
  x = -1.0;
  y = tanh(x);
  std::cout << "tanh(" << x << ") = " << y << std::endl;

  // tan inverse example
  x = 0.0;
  y = arctan(x);
  std::cout << "arctan(" << x << ") = " << y << std::endl;

  // ReLU example
  x = 1.0;
  y = ReLU(x);
  std::cout << "ReLU(" << x << ") = " << y << std::endl;

  // leakyReLU example
  x = -1.0;
  double alpha = 0.01;
  y = leakyReLU(x, alpha);
  std::cout << "leakyReLU(" << x << ", " << alpha << ") = " << y << std::endl;

  // binaryStep example
  x = 1.0;
  y = binaryStep(x);
  std::cout << "binaryStep(" << x << ") = " << y << std::endl;

  // softmax example
  std::vector<double> values = {1, 2, 3};
  std::vector<double> result = softmax(values);
  for (double value : result)
    std::cout << value << " ";
  std::cout << std::endl;

  return 0;
}
```
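
Compiled and linked against the library, this prints approximately the following (exact formatting depends on the stream's default precision):

```
sigmoid(1) = 0.731059
tanh(-1) = -0.761594
arctan(0) = 0
ReLU(1) = 1
leakyReLU(-1, 0.01) = -0.01
binaryStep(1) = 1
0.0900306 0.244728 0.665241
```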
36 changes: 36 additions & 0 deletions docs/methods/preprocessing/one_hot_encoder.md
@@ -0,0 +1,36 @@
# One Hot Encoder

One-hot encoding is a technique for representing categorical variables as numerical values. Each unique value of a categorical variable is assigned a binary code, where a "1" in the code represents the presence of that value and a "0" represents its absence.

One-hot encoding makes training data more useful and expressive, and the encoded features can be rescaled easily.
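
For intuition, take the four fruit categories used in the example below. Assuming each category is assigned an index in the order it first appears (apples → 0, banana → 1, mango → 2, pear → 3; the exact index assignment depends on the encoder), the codes are:

| Category | One-hot code |
|----------|--------------|
| apples   | 1 0 0 0      |
| banana   | 0 1 0 0      |
| mango    | 0 0 1 0      |
| pear     | 0 0 0 1      |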


## Parameters

| Name | Definition | Type |
|--------------|--------------------------------------------|--------------|
| data | The categorical data to be encoded, passed as the data parameter to the oneHotEncoder function. | `vector<string>` |
Collaborator: Data would be vector<string> and not vector<int>.

| nClasses | This parameter is an integer that specifies the number of classes or categories in the input data. | `int` |

## Methods

| Name | Definition | Return value |
|----------------------------------------|-----------------------------------------------|---------------|
| `oneHotEncoder(vector<T> data, int nClasses)` | Encodes the categorical data into one-hot numerical vectors. | `vector<vector<int>>` |
Collaborator: type of nClasses?

Collaborator: Return value of this method is vector<vector<int>>.


## Example

```cpp
#include <iostream>
#include <string>
#include <vector>
// Assumed header path for the library's one-hot encoder, mirroring the docs layout.
#include "slowmokit/methods/preprocessing/one_hot_encoder.hpp"

int main() {
  std::vector<std::string> data = {"apples", "banana", "mango", "pear",
                                   "mango",  "apples", "pear"};
  int nClasses = 4;
  std::vector<std::vector<int>> oneHotEncodedData = oneHotEncoder(data, nClasses);
  for (const auto &row : oneHotEncodedData) {
    for (const auto &column : row) {
      std::cout << column << " ";
    }
    std::cout << std::endl;
  }
  return 0;
}
```
41 changes: 41 additions & 0 deletions examples/activation_functions_eg.cpp
@@ -0,0 +1,41 @@
// Example usage of the slowmokit activation functions.
// Include path assumed relative to this examples/ directory; adjust to your build setup.
#include "../src/slowmokit/methods/activation_functions.hpp"
#include <iostream>
#include <vector>

int main()
{
  // sigmoid example
  double x = 1.0;
  double y = sigmoid(x);
  std::cout << "sigmoid(" << x << ") = " << y << std::endl;

  // tanh example
  x = -1.0;
  y = tanh(x);
  std::cout << "tanh(" << x << ") = " << y << std::endl;

  // tan inverse example
  x = 0.0;
  y = arctan(x);
  std::cout << "arctan(" << x << ") = " << y << std::endl;

  // ReLU example
  x = 1.0;
  y = ReLU(x);
  std::cout << "ReLU(" << x << ") = " << y << std::endl;

  // leakyReLU example
  x = -1.0;
  double alpha = 0.01;
  y = leakyReLU(x, alpha);
  std::cout << "leakyReLU(" << x << ", " << alpha << ") = " << y << std::endl;

  // binaryStep example
  x = 1.0;
  y = binaryStep(x);
  std::cout << "binaryStep(" << x << ") = " << y << std::endl;

  // softmax example
  std::vector<double> values = {1, 2, 3};
  std::vector<double> result = softmax(values);
  for (double value : result)
    std::cout << value << " ";
  std::cout << std::endl;

  return 0;
}
83 changes: 83 additions & 0 deletions src/slowmokit/methods/activation_functions.cpp
@@ -0,0 +1,83 @@
/**
* @file methods/activation_functions.cpp
*
* Implementation of activation functions
*/
#include "activation_functions.hpp"
Collaborator (suggested change): remove the `template<class T>` line; we do not require templates for these functions.
// sigmoid
double sigmoid(double x)
Collaborator: You need to write these functions for a vector and not a single value.

Contributor Author: I'm changing all the functions.

Contributor Author: All changes updated.
{
return 1 / (1 + std::exp(-x));
}
// ReLU
double ReLU(double x)
{
if (x > 0)
{
return x;
}
else
{
return 0;
}
}
// tanh
double tanh(double x)
{
double result = (std::exp(x) - std::exp(-x)) / (std::exp(x) + std::exp(-x));
return result;
}
// tan inverse
double arctan(double x) { return std::atan(x); }

// softmax
std::vector<double> softmax(const std::vector<double> &x)
{
std::vector<double> result(x.size());
double sum = 0;
for (double value : x)
{
sum += std::exp(value);
}
for (int i = 0; i < x.size(); i++)
{
result[i] = std::exp(x[i]) / sum;
}
return result;
}
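// Note: for large inputs, std::exp(value) can overflow. A common, mathematically
// equivalent variant (a sketch, not part of this file's current implementation)
// subtracts the maximum element before exponentiating, e.g.
//   double mx = *std::max_element(x.begin(), x.end()); // needs <algorithm>
//   sum += std::exp(value - mx);                        // in the first loop
//   result[i] = std::exp(x[i] - mx) / sum;              // in the second loop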
// binary step
double binaryStep(double x)
{
if (x >= 0)
{
return 1; // assuming threshold value to be 0 here
}
else
{
return 0;
}
}
NandiniGera marked this conversation as resolved.
// leaky ReLU
double leakyReLU(double x, double alpha)
Collaborator: Set the default value of alpha to 0.1.

{
if (x >= 0)
{
return x;
}
else
{
return alpha * x;
}
}
Collaborator (suggested change): remove the duplicated leakyReLU definition and the leftover merge-conflict markers.
Collaborator: Add a function to convert binary to bipolar and vice versa.
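
For context, binary activations take values in {0, 1} while bipolar activations take values in {-1, 1}; a minimal sketch of the requested conversion (function names are illustrative, not part of the library) could be:

```cpp
// Sketch only: illustrative names, not part of slowmokit.
double binaryToBipolar(double x) { return 2 * x - 1; }   // maps 0 -> -1, 1 -> 1
double bipolarToBinary(double x) { return (x + 1) / 2; } // maps -1 -> 0, 1 -> 1
```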


77 changes: 77 additions & 0 deletions src/slowmokit/methods/activation_functions.hpp
@@ -0,0 +1,77 @@
/**
* @file methods/activation_functions.hpp
*
* Easy include to add non-linearity into a neural network.
*/

#ifndef ACTIVATION_FUNCTIONS_HPP
#define ACTIVATION_FUNCTIONS_HPP
#include "../core.hpp"
/**
* @brief To calculate sigmoid(x)
* @param x: Number whose sigmoid value is to be calculated
* @return a double value representing sigmoid(x)
*/
double sigmoid(double);

/**
 * @brief To calculate tanh(x)
 * @param x: Number whose tanh value is to be calculated
 * @return a double value representing tanh(x)
 */

Collaborator (suggested change): remove the duplicated parameter documentation and the leftover merge-conflict markers.

Contributor Author: @Ishwarendra sir, the changes you've asked me to make are not visible in my VS Code; those unwanted lines of code are already not there.

Contributor Author: I formatted all the source files using Git Bash since there was a clang-format error, but after committing and pushing those changes both checks are failing. How do I fix it?

NandiniGera marked this conversation as resolved.

double tanh(double);

/**
* @brief To calculate ReLU(x)
* @param x: Number whose ReLU value is to be calculated
* @return a double value representing ReLU(x)
*/

double ReLU(double);

/**
* @brief To calculate leakyReLU(x)
* @param x: Number whose leakyReLU value is to be calculated
* @return a double value representing leakyReLU(x)
*/

double leakyReLU(double, double);

/**
* @brief To calculate softmax(x)
* @param x {vector<double>} - vector containing 'double' values of x whose softmax values have to be calculated.
*
* @return vector containing 'double' values representing softmax(x)
*/

std::vector<double> softmax(const std::vector<double> &);

/**
* @brief To calculate arctan(x)
* @param x: Number whose tan inverse value is to be calculated
* @return a double value representing arctan(x)
*/

double arctan(double);

/**
* @brief To calculate binaryStep(x)
* @param x: Number whose binaryStep value is to be calculated
* @return a double value representing binaryStep(x)
*/

double binaryStep(double);

#endif // ACTIVATION_FUNCTIONS_HPP