
Commit 3e73861: "Added more documentation"
Parent: 815a19e
30 files changed, +11126 −1338 lines

src/Enums/PredictionType.cs (+45 lines)

namespace AiDotNet.Enums;

/// <summary>
/// Specifies the type of prediction task that a machine learning model performs.
/// </summary>
/// <remarks>
/// For Beginners: This enum helps you tell the library what kind of prediction you're trying to make.
/// Think of it as telling the AI system what type of question you're asking:
///
/// - Are you asking a yes/no question? Use Binary.
/// - Are you asking "how much" or "what value"? Use Regression.
///
/// Choosing the right prediction type helps the AI model understand what you're trying to accomplish
/// and use the appropriate techniques for your specific problem.
/// </remarks>
public enum PredictionType
{
    /// <summary>
    /// Represents a binary classification task where the output is one of two possible classes.
    /// </summary>
    /// <remarks>
    /// For Beginners: Use this when your prediction has only two possible outcomes, like:
    /// - Yes or No
    /// - True or False
    /// - Spam or Not Spam
    /// - Positive or Negative
    ///
    /// Binary predictions typically output a probability between 0 and 1, where:
    /// - Values closer to 0 indicate the first class (e.g., "No")
    /// - Values closer to 1 indicate the second class (e.g., "Yes")
    ///
    /// Examples: Email spam detection, disease diagnosis, fraud detection
    /// </remarks>
    Binary,

    /// <summary>
    /// Represents a regression task where the output is a continuous numerical value.
    /// </summary>
    /// <remarks>
    /// For Beginners: Use this when your prediction is a number that can take any value within a range, like:
    /// - Price of a house
    /// - Temperature tomorrow
    /// - Number of sales next month
    /// - Age of a person from their photo
    ///
    /// Unlike Binary prediction, Regression doesn't have fixed categories - it predicts
    /// actual numerical values that can be any number (like 42.5, 1000, or -3.14).
    ///
    /// Examples: Price prediction, weather forecasting, age estimation, stock market prediction
    /// </remarks>
    Regression
}
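To make the distinction concrete, here is a minimal usage sketch. The enum and namespace come from this commit; the Interpret helper and the 0.5 cutoff are illustrative assumptions, not part of AiDotNet's API.

using System;
using AiDotNet.Enums;

// Hypothetical post-processing helper (not an AiDotNet API): shows how a raw
// model output might be read differently depending on the prediction type.
static string Interpret(double rawOutput, PredictionType type) => type switch
{
    // Binary: treat the raw output as a probability and round it to a class.
    PredictionType.Binary => rawOutput >= 0.5 ? "Yes (class 1)" : "No (class 0)",
    // Regression: the raw output already is the predicted value.
    PredictionType.Regression => $"Predicted value: {rawOutput}",
    _ => throw new ArgumentOutOfRangeException(nameof(type))
};

Console.WriteLine(Interpret(0.83, PredictionType.Binary));     // Yes (class 1)
Console.WriteLine(Interpret(0.83, PredictionType.Regression)); // Predicted value: 0.83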

src/Enums/RegularizationType.cs (+95 lines)

namespace AiDotNet.Enums;

/// <summary>
/// Specifies the type of regularization to apply to a machine learning model.
/// </summary>
/// <remarks>
/// For Beginners: Regularization is like adding training wheels to your AI model.
///
/// When models learn too much from their training data, they might become too specialized
/// (this is called "overfitting"). Regularization helps prevent this by encouraging the model
/// to keep things simple.
///
/// Think of it like this:
/// - Without regularization: The model might create very complex rules that work perfectly for
///   training data but fail on new data.
/// - With regularization: The model is encouraged to create simpler rules that work well enough
///   for training data and are more likely to work on new data too.
///
/// Different regularization types use different approaches to encourage simplicity.
/// </remarks>
public enum RegularizationType
{
    /// <summary>
    /// No regularization is applied to the model.
    /// </summary>
    /// <remarks>
    /// For Beginners: This option turns off regularization completely.
    ///
    /// Use this when:
    /// - You have lots of training data compared to model complexity
    /// - Your model is already simple and unlikely to overfit
    /// - You want to see how the model performs without any restrictions
    ///
    /// It's like removing the training wheels - sometimes it works fine,
    /// but there's a higher risk the model might become too specialized to your training data.
    /// </remarks>
    None,

    /// <summary>
    /// L1 regularization (also known as Lasso regularization), which encourages sparsity in the model parameters.
    /// </summary>
    /// <remarks>
    /// For Beginners: L1 regularization encourages the model to completely ignore less important features.
    ///
    /// It works by penalizing the absolute size of the model's parameters, which often results in
    /// many parameters becoming exactly zero.
    ///
    /// Think of it like a strict teacher who says: "If a feature isn't clearly helpful, don't use it at all."
    ///
    /// Benefits:
    /// - Automatically selects the most important features
    /// - Creates simpler models that are easier to interpret
    /// - Works well when you suspect many features aren't relevant
    ///
    /// Example: If you're predicting house prices with 100 features, L1 might decide that only 20 features
    /// (like size, location, and age) actually matter and ignore the rest.
    /// </remarks>
    L1,

    /// <summary>
    /// L2 regularization (also known as Ridge regularization), which discourages large parameter values.
    /// </summary>
    /// <remarks>
    /// For Beginners: L2 regularization encourages the model to use all features, but keep their influence small.
    ///
    /// It works by penalizing the squared size of the model's parameters, which results in
    /// all parameters becoming smaller but rarely exactly zero.
    ///
    /// Think of it like a balanced teacher who says: "Use all the information available, but don't rely too much on any single piece."
    ///
    /// Benefits:
    /// - Handles correlated features well
    /// - Generally prevents overfitting without eliminating features
    /// - Usually the safest default choice for regularization
    ///
    /// Example: For house price prediction, L2 might keep all 100 features but ensure that no single feature
    /// (like having a swimming pool) has an excessively large impact on the prediction.
    /// </remarks>
    L2,

    /// <summary>
    /// A combination of L1 and L2 regularization that balances their properties.
    /// </summary>
    /// <remarks>
    /// For Beginners: ElasticNet combines the best of both L1 and L2 regularization.
    ///
    /// It works by applying both types of penalties at the same time, with adjustable weights
    /// to control how much of each to use.
    ///
    /// Think of it like a flexible teacher who says: "Let's mostly keep all features but with limited influence,
    /// while still completely removing the least useful ones."
    ///
    /// Benefits:
    /// - Can eliminate irrelevant features (like L1)
    /// - Handles groups of correlated features well (like L2)
    /// - Provides more flexibility through an adjustable balance between L1 and L2
    ///
    /// Example: For house price prediction, ElasticNet might eliminate 30 truly irrelevant features
    /// while keeping the remaining 70 with appropriately controlled influence.
    ///
    /// This is often the best choice when you're not sure which regularization to use.
    /// </remarks>
    ElasticNet
}
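The standard formulas behind these options are easy to state: L1 adds λ·Σ|w| to the loss, L2 adds λ·Σw², and ElasticNet blends the two. The sketch below computes each penalty for a small weight vector; it is a minimal illustration with assumed lambda and alpha parameter names, not AiDotNet's internal implementation.

using System;
using System.Linq;
using AiDotNet.Enums;

// Illustrative penalty computation (assumed parameter names, not AiDotNet's):
// lambda scales the overall penalty, alpha mixes L1 vs. L2 for ElasticNet.
static double Penalty(double[] weights, RegularizationType type,
                      double lambda = 0.1, double alpha = 0.5) => type switch
{
    RegularizationType.None       => 0.0,
    RegularizationType.L1         => lambda * weights.Sum(w => Math.Abs(w)),
    RegularizationType.L2         => lambda * weights.Sum(w => w * w),
    RegularizationType.ElasticNet => alpha       * lambda * weights.Sum(w => Math.Abs(w))
                                   + (1 - alpha) * lambda * weights.Sum(w => w * w),
    _ => throw new ArgumentOutOfRangeException(nameof(type))
};

double[] w = { 0.5, -1.2, 0.0, 3.0 };
Console.WriteLine(Penalty(w, RegularizationType.L1)); // 0.1 * (0.5+1.2+0+3.0) ≈ 0.47
Console.WriteLine(Penalty(w, RegularizationType.L2)); // 0.1 * (0.25+1.44+0+9.0) ≈ 1.069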

src/Enums/SamplingType.cs (+72 lines)

namespace AiDotNet.Enums;

/// <summary>
/// Specifies the method used to sample or combine values when reducing data dimensions.
/// </summary>
/// <remarks>
/// For Beginners: Sampling is how we summarize a group of numbers into a single value.
///
/// In AI, we often need to take a collection of values (like a grid of pixels in an image)
/// and represent them with fewer values. This process is called "downsampling" or "pooling".
///
/// Think of it like summarizing a neighborhood on a map:
/// - You could pick the tallest building (Max)
/// - You could calculate the average building height (Average)
/// - You could use a special mathematical formula (L2Norm)
///
/// Different sampling types give different results and are useful in different situations.
/// </remarks>
public enum SamplingType
{
    /// <summary>
    /// Takes the maximum value from the input region.
    /// </summary>
    /// <remarks>
    /// For Beginners: Max sampling simply picks the largest number from a group of values.
    ///
    /// For example, if you have these numbers: [2, 5, 1, 3], Max sampling would give you 5.
    ///
    /// This is commonly used in neural networks for:
    /// - Detecting if a feature is present anywhere in the region
    /// - Reducing the size of images while preserving important details
    /// - Making the model less sensitive to the exact position of features
    ///
    /// Think of it like looking at a group of mountains and recording only the height of the tallest one.
    /// It's good at preserving strong signals and ignoring weaker ones.
    /// </remarks>
    Max,

    /// <summary>
    /// Takes the average (mean) value from the input region.
    /// </summary>
    /// <remarks>
    /// For Beginners: Average sampling calculates the mean of all values in a group.
    ///
    /// For example, if you have these numbers: [2, 5, 1, 3], Average sampling would give you 2.75.
    ///
    /// This is useful for:
    /// - Smoothing out noise in the data
    /// - Capturing the general trend of all values in the region
    /// - Reducing the impact of outliers or extreme values
    ///
    /// Think of it like measuring the average temperature across a city instead of just the hottest spot.
    /// It gives you a more balanced representation of the entire region.
    /// </remarks>
    Average,

    /// <summary>
    /// Calculates the L2 norm (Euclidean norm) of the values in the input region.
    /// </summary>
    /// <remarks>
    /// For Beginners: L2Norm sampling uses a special mathematical formula to combine values.
    ///
    /// It works by:
    /// 1. Squaring each number
    /// 2. Adding up all the squared values
    /// 3. Taking the square root of the sum
    ///
    /// For example, if you have these numbers: [2, 5, 1, 3], L2Norm sampling would give you:
    /// √(2² + 5² + 1² + 3²) = √(4 + 25 + 1 + 9) = √39 ≈ 6.24
    ///
    /// This is useful for:
    /// - Measuring the overall "energy" or "strength" of a signal
    /// - Giving more weight to larger values without ignoring smaller ones
    /// - Certain specialized neural network architectures
    ///
    /// Think of it like measuring how "impactful" a group of values is collectively,
    /// with larger values having more influence than smaller ones.
    /// </remarks>
    L2Norm
}
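Here is a small sketch that applies all three sampling types to the example region [2, 5, 1, 3] used in the comments above. The Sample helper is illustrative only; AiDotNet's pooling layers will have their own internals.

using System;
using System.Linq;
using AiDotNet.Enums;

// Illustrative pooling over a single region (not AiDotNet's internal code).
static double Sample(double[] region, SamplingType type) => type switch
{
    SamplingType.Max     => region.Max(),
    SamplingType.Average => region.Average(),
    SamplingType.L2Norm  => Math.Sqrt(region.Sum(v => v * v)),
    _ => throw new ArgumentOutOfRangeException(nameof(type))
};

double[] region = { 2, 5, 1, 3 };
Console.WriteLine(Sample(region, SamplingType.Max));     // 5
Console.WriteLine(Sample(region, SamplingType.Average)); // 2.75
Console.WriteLine(Sample(region, SamplingType.L2Norm));  // ≈ 6.245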

src/Enums/SpikingNeuronType.cs (+121 lines)

namespace AiDotNet.Enums;

/// <summary>
/// Specifies the type of spiking neuron model to use in neuromorphic computing simulations.
/// </summary>
/// <remarks>
/// For Beginners: Spiking neurons are AI components that work more like real brain cells.
///
/// Traditional AI neurons output continuous values (like 0.7), but spiking neurons work with
/// discrete "spikes" or pulses of activity (like a real neuron firing). This makes them more
/// biologically realistic and potentially more efficient for certain tasks.
///
/// Think of regular AI neurons as light bulbs with dimmers that can be set to any brightness,
/// while spiking neurons are more like light bulbs that either flash brightly or stay off.
///
/// Different spiking neuron types represent different mathematical models of how real neurons work,
/// with varying levels of biological accuracy and computational complexity.
/// </remarks>
public enum SpikingNeuronType
{
    /// <summary>
    /// A simplified neuron model that accumulates input and "leaks" voltage over time.
    /// </summary>
    /// <remarks>
    /// For Beginners: This is like a leaky bucket collecting water (electrical charge).
    ///
    /// How it works:
    /// 1. The neuron collects incoming signals (like water filling a bucket)
    /// 2. The bucket slowly leaks over time (the "leaky" part)
    /// 3. When the water level reaches a certain height, the bucket tips over (neuron fires)
    /// 4. After firing, the bucket is emptied and starts collecting again
    ///
    /// This model is:
    /// - Computationally efficient (fast to simulate)
    /// - Simple to understand and implement
    /// - Good for large-scale neural networks
    ///
    /// It captures the basic behavior of real neurons while being much simpler than more detailed models.
    /// </remarks>
    LeakyIntegrateAndFire,

    /// <summary>
    /// A basic neuron model that accumulates input until reaching a threshold, then fires.
    /// </summary>
    /// <remarks>
    /// For Beginners: This is like a bucket collecting water without any leaks.
    ///
    /// How it works:
    /// 1. The neuron collects incoming signals (like water filling a bucket)
    /// 2. When the water level reaches a certain height, the bucket tips over (neuron fires)
    /// 3. After firing, the bucket is emptied and starts collecting again
    ///
    /// The key difference from LeakyIntegrateAndFire is that this model doesn't have any "leak" -
    /// once charge is added, it stays there until the neuron fires.
    ///
    /// This model is:
    /// - The simplest spiking neuron model
    /// - Very computationally efficient
    /// - Less biologically accurate than other models
    ///
    /// It's good for educational purposes and very basic simulations.
    /// </remarks>
    IntegrateAndFire,

    /// <summary>
    /// A computationally efficient model that can reproduce many behaviors of biological neurons.
    /// </summary>
    /// <remarks>
    /// For Beginners: This model strikes a balance between biological realism and computational efficiency.
    ///
    /// Named after Eugene Izhikevich, who developed it, this model can simulate many different
    /// firing patterns seen in real neurons (like bursting, chattering, or regular spiking)
    /// while being much faster to compute than fully detailed models.
    ///
    /// Think of it like a sophisticated light switch that can be programmed to blink in
    /// different patterns that closely resemble real brain activity.
    ///
    /// This model is:
    /// - More biologically realistic than the simpler models
    /// - Still computationally efficient
    /// - Able to reproduce many different neural firing patterns
    ///
    /// It's popular for large-scale brain simulations where both biological realism and
    /// computational efficiency are important.
    /// </remarks>
    Izhikevich,

    /// <summary>
    /// A detailed biophysical model that accurately represents ion channel dynamics in neurons.
    /// </summary>
    /// <remarks>
    /// For Beginners: This is the most biologically accurate model, but also the most complex.
    ///
    /// Named after Alan Hodgkin and Andrew Huxley, who won a Nobel Prize for this work,
    /// this model precisely describes how ions flow through channels in the neuron's membrane.
    ///
    /// Think of it like having a detailed engineering blueprint of a neuron that models
    /// all the important chemical and electrical processes happening inside.
    ///
    /// This model is:
    /// - Extremely biologically accurate
    /// - Computationally intensive (slow to simulate)
    /// - Able to capture subtle details of neural behavior
    ///
    /// It's primarily used in neuroscience research when biological accuracy is more important
    /// than computational efficiency.
    /// </remarks>
    HodgkinHuxley,

    /// <summary>
    /// A model that combines exponential spike generation with adaptive threshold mechanisms.
    /// </summary>
    /// <remarks>
    /// For Beginners: This model adds adaptability to neuron behavior.
    ///
    /// The "Adaptive" part means the neuron can change its sensitivity based on recent activity.
    /// The "Exponential" part refers to how quickly the neuron responds when close to firing.
    ///
    /// Think of it like a smart thermostat that becomes less sensitive after detecting several
    /// temperature changes, preventing it from overreacting to small fluctuations.
    ///
    /// This model is:
    /// - More biologically realistic than basic models
    /// - Able to capture adaptation behaviors (neurons getting "tired" after firing a lot)
    /// - Moderately computationally efficient
    ///
    /// It's useful for simulations where you need more realistic neural behavior than simple
    /// models provide, but can't afford the computational cost of the most detailed models.
    /// </remarks>
    AdaptiveExponential
}
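To ground the "leaky bucket" picture, here is a minimal LeakyIntegrateAndFire simulation. All constants (leak factor, threshold, input current) are arbitrary demo values chosen for illustration; this is a sketch of the model's dynamics, not AiDotNet's simulation code.

using System;

// Minimal leaky integrate-and-fire dynamics (demo constants, not AiDotNet defaults).
double v = 0.0;               // membrane potential: the "water level" in the bucket
const double Leak = 0.9;      // fraction of charge kept each step (the leak)
const double Threshold = 1.0; // firing threshold: the bucket "tips over" here
const double Input = 0.3;     // constant input current added each step

for (int step = 0; step < 8; step++)
{
    v = Leak * v + Input;     // leak a little, then accumulate input
    bool fired = v >= Threshold;
    Console.WriteLine($"step {step}: v = {v:F2}, fired = {fired}");
    if (fired) v = 0.0;       // empty the bucket after a spike
}
// Setting Leak = 1.0 removes the leak, recovering the plain IntegrateAndFire
// model described above.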
