Showing 15 changed files with 903 additions and 2 deletions.
```
@@ -134,8 +134,7 @@ dmypy.json
#Output
TP1/*.csv
TP2/*.csv
TP3/*.csv
TP3/resources/*.csv
TP4/*txt
.vscode/launch.json
```
```
@@ -0,0 +1,7 @@
/resources/classifications.csv
/resources/u_matrix.csv
/resources/components.csv
/resources/loadings.csv
/resources/energies*
/resources/hopfield*
/resources/oja_errors.csv
```
@@ -0,0 +1,113 @@

# TP4

## 72.27 - Sistemas de Inteligencia Artificial - 2nd semester 2022

### Instituto Tecnológico de Buenos Aires (ITBA)

## Authors

- [Sicardi, Julián Nicolas](https://github.com/Jsicardi) - Student ID 60347
- [Quintairos, Juan Ignacio](https://github.com/juaniq99) - Student ID 59715
- [Zavalia Pángaro, Salustiano Jose](https://github.com/szavalia) - Student ID 60312
## Index
- [Authors](#authors)
- [Index](#index)
- [Description](#description)
- [Requirements](#requirements)
- [Execution](#execution)
  - [Initial configuration](#initial-configuration)
  - [Parameters](#parameters)
  - [Running the project](#running-the-project)
  - [Example configurations](#example-configurations)

## Description

The project was developed in Python and centers on implementing unsupervised learning algorithms to solve several kinds of problems. Concretely, it implements a Kohonen network (clustering problems), a discrete Hopfield network (association problems), and a network that uses Oja's learning rule. The problems to solve are:
- Kohonen network: classify European countries by their socioeconomic features, using the `europe.csv` dataset in the `resources` folder
- Hopfield network: given patterns representing 5x5-pixel letters of the alphabet, associate noisy variants of those patterns with the original pattern
- Network with Oja's rule: using the same dataset as the Kohonen network, obtain the first principal component for analysis
## Requirements

- Python 3
## Execution

### Initial configuration

Once the project has been cloned into your folder of choice, the initial parameters are configured through the `config.json` file. This file has a number of parameters, discussed below.

#### Parameters
- "method": method used to solve a specific problem. Its value can be:
  - "kohonen": uses the Kohonen network
  - "hopfield": uses the Hopfield network
  - "oja": uses the network with Oja's rule
- "kohonen_props": object with the properties the Kohonen network uses to solve the given problem. If the "method" field does not have the matching value, this field is ignored. Its fields are:
  - "dataset_path": path to the `europe.csv` file
  - "eta": initial learning rate
  - "k": dimension of the network (a rectangular grid of k x k neurons)
  - "r": initial radius
  - "epochs": maximum number of epochs to iterate
- "hopfield_props": object with the properties the Hopfield network uses to solve the given problem. If the "method" field does not have the matching value, this field is ignored. Its fields are:
  - "patterns": vector of letters used as the stored patterns
  - "noise_prob": noise probability
- "oja_props": object with the properties the network with Oja's rule uses to solve the given problem. If the "method" field does not have the matching value, this field is ignored. Its fields are:
  - "dataset_path": path to the `europe.csv` file
  - "eta": learning rate
  - "epochs": maximum number of epochs to iterate
### Running the project

To run the project, once positioned at its base directory and having configured the initial parameters, simply execute:

```bash
$ python3 main.py
```

When it finishes, the program prints the initial execution parameters and creates a set of files depending on the requested problem:

- Kohonen network: `classifications.csv` (the neuron, as a grid position given by row and column, in which each input value ends up), `u_matrix.csv` (values of the associated U-matrix) and `weights_matrix.csv` (the weights of the network's neurons).
- Hopfield network: `hopfield_n.txt` (where n is one of the pattern letters; shows the steps the network takes from the initial noisy pattern to the convergence pattern), `energies.csv` (energy at each iteration for each noisy pattern) and `hopfield_results.csv` (for each noisy pattern, the percentage of pixels of the convergence pattern that match the original pattern).
- Network with Oja's rule: `loadings.csv` (final network weights), `components.csv` (value of the first component for each entry of the input dataset) and `oja_errors.csv` (evolution of the error per epoch with its standard deviation).
Additionally, for the Hopfield network, a classifier of 4-letter pattern combinations based on their "orthogonality" is included. It receives two parameters: the number of combinations to list and the order of the listing (false for ascending order, true for descending). It is run as:

```bash
$ python3 get_patterns.py 15 true
```
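`get_patterns.py` itself is not part of this diff, so the following is only a plausible sketch of ranking 4-pattern combinations by pairwise orthogonality; the function name `rank_combinations` and the scoring (mean absolute dot product between pattern pairs, lower meaning "more orthogonal") are assumptions, not the repository's actual implementation:

```python
import numpy as np
from itertools import combinations

def rank_combinations(patterns, top_n, descending=True):
    # Hypothetical scoring: for each 4-letter combination, average the
    # absolute dot products of all pattern pairs. Orthogonal pairs
    # contribute 0, so a lower score means a "more orthogonal" combination.
    # `patterns` maps letter -> flat vector of +/-1 pixels.
    scored = []
    for combo in combinations(sorted(patterns), 4):
        dots = [abs(np.dot(patterns[a], patterns[b]))
                for a, b in combinations(combo, 2)]
        scored.append((float(np.mean(dots)), combo))
    scored.sort(reverse=descending)
    return scored[:top_n]
```

The boolean second argument of `get_patterns.py` would then map directly to `descending`.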
### Example configurations

If, for example, I want to use the Hopfield network with the patterns "K", "N", "S", "V" to associate noisy variants of them with noise probability 0.2, the configuration file is:

```json
{
    "method" : "hopfield",
    "hopfield_props": {
        "patterns": ["K", "N", "S", "V"],
        "noise_prob": 0.2
    }
}
```

If instead I want a Kohonen network of 4x4 neurons, with learning rate 0.1, initial radius 4, iterating for 1001 epochs, the configuration file is:

```json
{
    "method" : "kohonen",
    "kohonen_props": {
        "dataset_path" : "resources/europe.csv",
        "eta" : 0.1,
        "k" : 4,
        "r" : 4,
        "epochs" : 1001
    }
}
```

Finally, to use the network with Oja's rule with learning rate 0.01, iterating for 1000 epochs, the configuration file is:

```json
{
    "method" : "oja",
    "oja_props": {
        "dataset_path" : "resources/europe.csv",
        "eta" : 0.01,
        "epochs": 1000
    }
}
```
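Since `main.py` is not shown in this diff, here is a minimal, hypothetical sketch of how the documented `config.json` fields could be loaded and dispatched; `load_config` is an illustrative helper, not the repository's actual entry point:

```python
import json

def load_config(path="config.json"):
    # Hypothetical helper: read the config and pick the *_props object
    # that matches "method" ("kohonen", "hopfield" or "oja"); the other
    # *_props objects are ignored, as the parameter list above states.
    with open(path) as f:
        config = json.load(f)
    method = config["method"]
    props = config[f"{method}_props"]
    return method, props
```

With the last example above, `load_config` would return `("oja", {...})` with the contents of `oja_props`.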
@@ -0,0 +1,54 @@
```python
from models import HopfieldObservables, HopfieldProperties
import numpy as np

def execute(properties: HopfieldProperties):
    # Build starting W
    # patterns is K transposed
    patterns = np.array(properties.patterns)
    kt = patterns.transpose()
    k = patterns
    w = np.dot(kt, k)
    w = np.multiply(w, 1 / len(patterns[0]))  # N is the number of elements per pattern
    np.fill_diagonal(w, 0)  # zeros in the diagonal

    # Execute the algorithm for each letter with noise
    pattern_states = []
    pattern_energies = []
    for pattern in properties.noise_patterns:
        (states, energies) = execute_single(w, pattern)
        pattern_states.append(states)
        pattern_energies.append(energies)
    return HopfieldObservables(pattern_states, pattern_energies)

def get_energy(w, state):
    energy = 0
    for (row_index, row) in enumerate(w):
        energy += np.dot(row, state) * state[row_index]
    return -0.5 * energy

def execute_single(w, pattern):
    pattern = np.array(pattern)
    state = pattern.copy()
    stop = False
    states = [state.copy()]
    energies = [get_energy(w, pattern)]
    while not stop:
        # Calculate h
        h = np.dot(w, state.transpose())

        # Update state
        for (index, hi) in enumerate(h):
            if hi == 0:
                continue
            state[index] = np.sign(hi)

        energies.append(get_energy(w, state))
        # Check end condition
        if (state == states[-1]).all():
            stop = True
            continue

        states.append(state.copy())
    return (states, energies)
```
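The weight construction and recall loop above can be condensed into a self-contained sketch that does not depend on the repo's `models` module; the names `train_hopfield` and `recall` are illustrative, not from the repository:

```python
import numpy as np

def train_hopfield(patterns):
    # Hebbian weights: W = (1/N) * K^T K with a zeroed diagonal,
    # matching the construction in execute() above.
    K = np.array(patterns, dtype=float)
    n = K.shape[1]
    W = (K.T @ K) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, noisy_pattern, max_steps=100):
    # Synchronous sign updates until the state stops changing.
    state = np.array(noisy_pattern, dtype=float)
    for _ in range(max_steps):
        h = W @ state
        new_state = state.copy()
        nonzero = h != 0
        new_state[nonzero] = np.sign(h[nonzero])  # keep the old value where h == 0
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state
```

For two orthogonal 8-pixel patterns, flipping one pixel of a stored pattern and running `recall` recovers the original in a single sweep.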
@@ -0,0 +1,122 @@
```python
from models import KohonenObservables, KohonenProperties, KohonenNeuron
import numpy as np
import random as rn
import sys
import math

def standardize_input(input_set):
    # Calculate the mean and standard deviation for each field
    field_set = np.array(input_set).transpose()
    field_aggregations = []
    for field in field_set:
        field_aggregations.append([np.mean(field), np.std(field)])

    # Build the standardized set
    output_set = []
    for entry in input_set:
        aux_row = []
        for index, field in enumerate(entry):
            aux_row.append((field - field_aggregations[index][0]) / field_aggregations[index][1])
        output_set.append(aux_row.copy())
    return output_set

def execute(properties: KohonenProperties):
    # Initialize input values
    input_set = standardize_input(properties.input_set)

    # Create a lattice of k x k neurons and initialize weights with the values of an entry chosen at random
    neurons = []
    for i in range(properties.k):
        for j in range(properties.k):
            w = rn.choice(input_set)
            neurons.append(KohonenNeuron(w.copy(), i, j))

    # Initialize epochs, eta and radius
    total_epochs = properties.epochs
    starting_eta = properties.eta
    eta = properties.eta
    starting_r = properties.r
    r = properties.r

    # Loop through inputs
    curr_epochs = 0
    random_ordered_inputs = input_set.copy()
    while curr_epochs < total_epochs:
        rn.shuffle(random_ordered_inputs)
        for entry in random_ordered_inputs:
            # Find the best match among the neurons
            winner_neuron = find_winner_neuron(entry, neurons)

            # Update weights using Kohonen's rule
            update_neighbours(neurons, winner_neuron, eta, r, entry)

        # Update epochs, eta and r
        # r updates 4 times: [0 ; epochs/5] => r, [epochs/5 ; 2*epochs/5] => r - (r-1)/4, ..., [4*epochs/5 ; epochs] => 1
        if curr_epochs != 0 and curr_epochs % (total_epochs / 5) == 0:
            r = starting_r - (int(curr_epochs / (total_epochs / 5)) * (starting_r - 1) / 4)
        if curr_epochs != 0 and curr_epochs % (total_epochs / 100) == 0:
            eta = starting_eta - (int(curr_epochs / (total_epochs / 100)) * starting_eta / 100)
        curr_epochs += 1

    return get_observables(neurons, input_set, properties)

def find_winner_neuron(entry, neurons):
    winner_neuron = None
    winner_diff = sys.maxsize

    # Find the best match among the neurons
    for neuron in neurons:
        difference = np.sum(np.abs(np.subtract(entry, neuron.w)))
        if difference < winner_diff:
            winner_diff = difference
            winner_neuron = neuron

    return winner_neuron

def find_neighbours(neurons, central_neuron, r, k):
    neighbourhood = []
    # Find neighbours
    for i in range(math.floor(central_neuron.i - r), math.ceil(central_neuron.i + r + 1)):
        for j in range(math.floor(central_neuron.j - r), math.ceil(central_neuron.j + r + 1)):
            # Check bounds
            if i >= 0 and i < k and j >= 0 and j < k:
                # Check distance
                if (abs(i - central_neuron.i) + abs(j - central_neuron.j)) <= r:
                    neighbourhood.append(neurons[i * k + j])
    return neighbourhood

def update_neighbours(neurons, central_neuron, eta, r, input_value):
    k = int(math.sqrt(len(neurons)))
    neighbourhood = find_neighbours(neurons, central_neuron, r, k)

    # Update neighbour weights
    for i in range(0, len(neighbourhood)):
        neighbourhood[i].update_w(input_value, eta)

# Calculates the U-matrix and associates an input with each neuron
def get_observables(neurons, standardized_input, properties: KohonenProperties):
    input_map = {}
    # Find the associated neuron for each input
    for i, entry in enumerate(standardized_input):
        input_map[properties.input_names[i][0]] = find_winner_neuron(entry, neurons)

    u_matrix = {}
    weights_matrix = {}
    for neuron in neurons:
        neighbourhood = find_neighbours(neurons, neuron, 1, properties.k)
        avg_distance = 0
        for neighbour in neighbourhood:
            avg_distance += np.sum(np.abs(np.subtract(neuron.w, neighbour.w)))
        # The neighbourhood includes the neuron itself, whose distance is 0
        avg_distance /= len(neighbourhood) - 1
        u_matrix[(neuron.i, neuron.j)] = avg_distance

        for weight in neuron.w:
            if (neuron.i, neuron.j) in weights_matrix:
                weights_matrix[(neuron.i, neuron.j)].append(weight)
            else:
                weights_matrix[(neuron.i, neuron.j)] = [weight]

    return KohonenObservables(input_map, u_matrix, weights_matrix)
```
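The training loop above (best-matching unit by Manhattan distance, then updating a Manhattan neighbourhood on the grid) can be sketched in a self-contained form without the `models` module; `train_som` is an illustrative name, and the fixed `eta`/`r` here stand in for the decaying schedules of the real code:

```python
import numpy as np

def train_som(data, k=3, eta=0.5, r=1.0, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    # k x k grid of weight vectors, initialized from randomly chosen input rows
    weights = data[rng.integers(0, len(data), size=(k, k))].astype(float)
    # Grid coordinates of each neuron, shape (k, k, 2)
    coords = np.dstack(np.meshgrid(np.arange(k), np.arange(k), indexing="ij"))
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Best matching unit by Manhattan distance in weight space
            dists = np.abs(weights - x).sum(axis=2)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Manhattan neighbourhood of radius r on the grid
            grid_dist = np.abs(coords - np.array(bmu)).sum(axis=2)
            mask = grid_dist <= r
            # Kohonen's rule: pull neighbourhood weights toward the input
            weights[mask] += eta * (x - weights[mask])
    return weights
```

Because each update is a convex combination of an old weight and an input row, trained weights always stay inside the bounding box of the data.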
@@ -0,0 +1,51 @@
```python
from models import OjaProperties, OjaObservables
import numpy as np

def standardize_input(input_set):
    # Calculate the mean and standard deviation for each field
    field_set = np.array(input_set).transpose()
    field_aggregations = []
    for field in field_set:
        field_aggregations.append([np.mean(field), np.std(field)])

    # Build the standardized set
    output_set = []
    for entry in input_set:
        aux_row = []
        for index, field in enumerate(entry):
            aux_row.append((field - field_aggregations[index][0]) / field_aggregations[index][1])
        output_set.append(aux_row.copy())
    return output_set

def execute(properties: OjaProperties):
    # Initialize input values
    input_set = standardize_input(properties.input_set)

    w = np.random.uniform(-1, 1, len(input_set[0]))

    error_values = []
    for i in range(properties.epochs):
        errors = []
        avg_error = 0

        for entry in input_set:
            s = np.dot(entry, w)
            w += properties.eta * s * (entry - s * w)

        # Compare against the reference components from a library PCA
        for (index, entry) in enumerate(input_set):
            errors.append(abs(np.dot(entry, w) - properties.lib_components[index][0]))
            avg_error += errors[-1]

        avg_error /= len(errors)
        error_values.append([avg_error, np.std(errors)])

    principal_component = []
    for entry in input_set:
        s = np.dot(entry, w)
        principal_component.append(s)

    return OjaObservables(principal_component, w, error_values)
```
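The update rule above, w += eta * s * (x - s * w) with s = x . w, is Oja's rule; on zero-mean data it converges to a unit-norm vector along the first principal direction. A self-contained sketch without the `models` module (the name `oja_first_component` is illustrative):

```python
import numpy as np

def oja_first_component(data, eta=0.01, epochs=500, seed=0):
    # data is assumed zero-mean (standardized); Oja's rule drives w
    # toward the unit-norm first principal direction.
    rng = np.random.default_rng(seed)
    w = rng.uniform(-1, 1, data.shape[1])
    for _ in range(epochs):
        for x in data:
            s = np.dot(x, w)               # projection onto current w
            w += eta * s * (x - s * w)     # Hebbian term with normalization
    return w
```

On data lying exactly along one direction, the learned vector ends up (anti-)parallel to that direction with norm close to 1.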
@@ -0,0 +1,19 @@
```json
{
    "method" : "oja",
    "hopfield_props": {
        "patterns": ["A", "V", "F", "L"],
        "noise_prob": 0.1
    },
    "kohonen_props": {
        "dataset_path" : "resources/europe.csv",
        "eta" : 0.1,
        "k" : 4,
        "r" : 4,
        "epochs" : 1000
    },
    "oja_props": {
        "dataset_path" : "resources/europe.csv",
        "eta" : 0.0001,
        "epochs": 5000
    }
}
```