
[Unittest] FSU Unittest with Simple FC Model #2835

Merged (1 commit) on Jan 2, 2025
110 changes: 110 additions & 0 deletions test/unittest/integration_tests/integration_test_fsu.cpp
@@ -0,0 +1,110 @@
// SPDX-License-Identifier: Apache-2.0
/**
 * Copyright (C) 2024 Donghak Park <[email protected]>
 *
 * @file integration_test_fsu.cpp
 * @date 20 Dec 2024
 * @brief Unit Test for Asynch FSU
 * @see https://github.com/nnstreamer/nntrainer
 * @author Donghak Park <[email protected]>
 * @bug No known bugs except for NYI items
 */

#include <app_context.h>
#include <array>
#include <chrono>
#include <ctime>
#include <gtest/gtest.h>
#include <iostream>
#include <layer.h>
#include <memory>
#include <model.h>
#include <optimizer.h>
#include <sstream>
#include <vector>

using LayerHandle = std::shared_ptr<ml::train::Layer>;
using ModelHandle = std::unique_ptr<ml::train::Model>;

template <typename T>
static std::string withKey(const std::string &key, const T &value) {
  std::stringstream ss;
  ss << key << "=" << value;
  return ss.str();
}

template <typename T>
static std::string withKey(const std::string &key,
                           std::initializer_list<T> value) {
  if (std::empty(value)) {
    throw std::invalid_argument("empty data cannot be converted");
  }

  std::stringstream ss;
  ss << key << "=";

  auto iter = value.begin();
  for (; iter != value.end() - 1; ++iter) {
    ss << *iter << ',';
  }
  ss << *iter;

  return ss.str();
}

TEST(fsu, simple_fc) {

  std::unique_ptr<ml::train::Model> model = ml::train::createModel(
    ml::train::ModelType::NEURAL_NET, {withKey("loss", "mse")});

  model->addLayer(ml::train::createLayer(
    "input", {withKey("name", "input0"), withKey("input_shape", "1:1:320")}));
  for (int i = 0; i < 6; i++) {
    model->addLayer(ml::train::createLayer(
      "fully_connected",
      {withKey("unit", 1000), withKey("weight_initializer", "xavier_uniform"),
       withKey("bias_initializer", "zeros")}));
  }
  model->addLayer(ml::train::createLayer(
    "fully_connected",
    {withKey("unit", 100), withKey("weight_initializer", "xavier_uniform"),
     withKey("bias_initializer", "zeros")}));

  model->setProperty({withKey("batch_size", 1), withKey("epochs", 1),
                      withKey("memory_swap", "true"),
                      withKey("memory_swap_lookahead", "1"),
                      withKey("model_tensor_type", "FP16-FP16")});

  auto optimizer = ml::train::createOptimizer("sgd", {"learning_rate=0.001"});
  model->setOptimizer(std::move(optimizer));

  int status = model->compile(ml::train::ExecutionMode::INFERENCE);
  EXPECT_EQ(status, ML_ERROR_NONE);

  status = model->initialize(ml::train::ExecutionMode::INFERENCE);
  EXPECT_EQ(status, ML_ERROR_NONE);

  model->save("simplefc_weight_fp16_fp16_100.bin",
              ml::train::ModelFormat::MODEL_FORMAT_BIN);
  model->load("./simplefc_weight_fp16_fp16_100.bin");
Comment on lines +87 to +89
Contributor:
Simple question. Why do we need this save and load code? Is it because it only supports inference mode now?

Member Author:
To test FSU, we need a weight file stored on disk, so the test saves the model and then loads the file.
In a real inference case, the bin files already exist.

Contributor:
Oh yes, that's what the FSU is! 🤣 Thank you for the kind reply :)


  unsigned int feature_size = 320;

  float input[320];
  float label[1];

  for (unsigned int j = 0; j < feature_size; ++j)
    input[j] = j;

  std::vector<float *> in;
  std::vector<float *> l;
  std::vector<float *> answer;

  in.push_back(input);
  l.push_back(label);

  answer = model->inference(1, in, l);
Contributor:
How can we check the FSU works from this test code?

Member Author:
You can check the memory usage.
Actually, I added this case as an FSU build test, not to verify detailed behavior.
A functional test will be added to the benchmark tools.


  in.clear();
  l.clear();
}
6 changes: 6 additions & 0 deletions test/unittest/integration_tests/meson.build
@@ -11,8 +11,14 @@ mixed_precision_targets = [
  'integration_test_mixed_precision.cpp',
]

fsu_targets = [
  model_util_path / 'models_test_utils.cpp',
  'integration_test_fsu.cpp',
]

if get_option('enable-fp16')
  test_target += mixed_precision_targets
  test_target += fsu_targets
endif

exe = executable(