- Grad engine -> new tasks: matmul/div autograd, pow-tensor/pow-scalar -> the scalar part is still remaining.
- `randn` generator -> with seed. More operations are still missing on Tensors; add them.
- Make the Tensor fast: check the `tensor.c` and `tensor.pyx` files again and try to optimize them -> still not done.
- Stop using numpy -> add `reshape` and the other stuff.
- Build a Tensor for Int, Double, Long, etc.
- Use a fast matrix multiplication algorithm to reduce the time complexity (see the sketch after this list).
- Make a `loss` dir and add functions like Tanh, ReLU, sigmoid, softmax in a more optimized way -> make the `loss` folder, but the backward pass for it is also needed.
- Make Optimizers: start with SGD in C, not in pyx (aka Cython) -> after SGD -> Adam ...
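One classic candidate for the fast-matmul item above is Strassen's algorithm, which replaces eight recursive block multiplications with seven and lowers the complexity from O(n^3) to about O(n^2.81). Below is a minimal NumPy sketch for square matrices whose size is a power of two, purely for illustration; it is not CGrad code:

```python
import numpy as np

def strassen(A, B):
    """Strassen multiply for square matrices whose size is a power of two."""
    n = A.shape[0]
    if n == 1:
        return A * B  # 1x1 base case
    h = n // 2
    # Split both matrices into four blocks
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven block products instead of eight
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    # Recombine the blocks into the result
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

# Sanity check against the naive product
A = np.random.rand(4, 4)
B = np.random.rand(4, 4)
assert np.allclose(strassen(A, B), A @ B)
```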
CGrad is a lightweight library for performing tensor operations. It is a module designed to handle all gradient computations, plus most of the matrix manipulation and numerical work generally required for machine learning and deep learning tasks.
- New methods: `.ones_like`, `.zeros_like`, `.ones`, `.zeros`, `.sum`, `.mean`, `.median` (a quick tour follows this list).
- You can now do backprop from a scalar with `.sum().backward()`, and you can also change the backward pass and supply your own custom-sized gradient with `.backward(custom_grad=...)`.
- Tried to optimize the `Tensor` and `AutoGrad`.
- Added `AutoGrad.no_grad()` to stop the gradient calculation.
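A quick tour of the new helpers listed above. `cgrad.ones_like` is used later in this README; the module-level `zeros_like` and the exact return types of `sum`/`mean` are assumptions here, so treat this as a sketch rather than confirmed API:

```python
import cgrad

t = cgrad.Tensor([1.0, 2.0, 3.0])

print(t.sum())   # reduction to a scalar: 6.0
print(t.mean())  # 2.0

ones = cgrad.ones_like(t)    # tensor of ones with t's shape (confirmed later in this README)
zeros = cgrad.zeros_like(t)  # tensor of zeros with t's shape (assumed to mirror ones_like)
```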
Install the dependencies, then CGrad itself, with pip:

```bash
pip install numpy
pip install cython
pip install cgrad
```
- Install MinGW: Windows users should install the latest MinGW.
- Install gcc: Mac and Linux users should install the latest gcc.
- Or clone the repository and install manually:

```bash
git clone https://github.com/Ruhaan838/CGrad
cd CGrad  # enter the cloned repository
python setup.py build_ext --inplace
pip install .
```
Here's a simple guide to get you started with CGrad:
```python
import cgrad
```
You can create a tensor from a Python list or NumPy array:
```python
# Creating a tensor from a list
tensor = cgrad.Tensor([1.0, 2.0, 3.0])

# Creating a 2-D tensor from a nested list
tensor = cgrad.Tensor([[1.0, 2.0], [3.0, 4.0]])
```
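The text above also mentions NumPy arrays; assuming `cgrad.Tensor` accepts an `np.ndarray` directly, as stated, that would look like:

```python
import numpy as np

# Creating a tensor from a NumPy array (assumes ndarray input is accepted as stated above)
arr = np.array([1.0, 2.0, 3.0])
tensor = cgrad.Tensor(arr)
```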
CGrad supports basic operations like addition, multiplication, etc.:
```python
# Tensor addition
a = cgrad.Tensor([1.0, 2.0, 3.0])
b = cgrad.Tensor([4.0, 5.0, 6.0])
result = a + b  # element-wise addition

# Tensor multiplication
c = cgrad.Tensor([[1.0, 2.0], [3.0, 4.0]])
d = cgrad.Tensor([[5.0, 6.0], [7.0, 8.0]])
result = c * d  # element-wise multiplication
```
CGrad also supports advanced operations such as matrix multiplication:

```python
a = cgrad.rand((1, 2, 3))
b = cgrad.rand((5, 3, 2))
result = a @ b  # batched matrix multiplication
```

Note: `cgrad.matmul` with `axis` is still under development.
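Assuming the leading batch dimensions broadcast NumPy-style (not confirmed in this README), the `(1, 2, 3) @ (5, 3, 2)` product above would yield a result of shape `(5, 2, 2)`:

```python
print(result.shape)  # expected (5, 2, 2); the .shape attribute is assumed here
```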
CGrad automatically tracks operations and computes gradients for backpropagation:
```python
# Defining tensors with gradient tracking
x = cgrad.Tensor([2.0, 3.0], requires_grad=True)
y = cgrad.Tensor([1.0, 4.0], requires_grad=True)

# Performing operations
z = x * y

# Backpropagation to compute gradients
z.sum().backward()

# Accessing gradients
print(x.grad)  # gradients of x
print(y.grad)  # gradients of y
```
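Since `z = x * y`, the gradient of `z.sum()` with respect to `x` is simply `y`'s values and vice versa, so the prints above should show `[1.0, 4.0]` for `x.grad` and `[2.0, 3.0]` for `y.grad` (up to print formatting).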
You can also seed the backward pass with your own gradient:

```python
x = cgrad.Tensor([2.0, 3.0], requires_grad=True)
y = cgrad.Tensor([1.0, 4.0], requires_grad=True)

# Performing operations
z = x + y

# Backpropagation with a custom gradient
z.backward(custom_grad=cgrad.ones_like(x))

print(x.grad)
print(y.grad)
```
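`custom_grad` plays the role of the upstream gradient in the chain rule, so passing a tensor of ones, as above, should match what `z.sum().backward()` computes; this is standard vector-Jacobian-product behavior and is assumed to hold for CGrad as well.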
```python
from cgrad import AutoGrad

x = cgrad.Tensor([2.0, 3.0], requires_grad=True)
y = cgrad.Tensor([1.0, 4.0], requires_grad=True)

with AutoGrad.no_grad():
    z = x + y  # only calculates the value, not the grad
    print(z.requires_grad)

w = x * y
print(w.requires_grad)
```
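Inside the `no_grad()` block only values are computed, so `z.requires_grad` should print `False`, while `w`, computed outside the block, keeps gradient tracking and should print `True` (assuming semantics similar to PyTorch's `torch.no_grad()`; the placement of `w = x * y` outside the block is reconstructed from context).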
For more detailed information, please visit our documentation website.
I ❤️ contributions! If you'd like to contribute to CGrad, please:

- Contribute to code improvements and documentation editing.
- If you find an issue, report it on the GitHub issue tracker.
- 🍴 Fork or clone the repository.
- 🌱 Create a new branch for your feature or bugfix.
- ✍️ Submit a pull request.
📄 See LICENSE for more details.