
How to use this on my deep neural network? #4

Open
Nithanaroy opened this issue Apr 7, 2019 · 5 comments

Comments

@Nithanaroy

I'm looking for ways to do ENAS search on my deep neural network. Could you share some ideas on how to achieve it using your repo?

@countif
Owner

countif commented Apr 7, 2019

It depends on which kind of neural network you use.
Could you show me some details?

@Nithanaroy
Author

It is a standard dense neural network built with the tf.keras API for a classification problem. The structure looks like:

import nni
import tensorflow as tf
from tensorflow.keras.layers import Dense, BatchNormalization, Dropout
from tensorflow.keras.optimizers import Adam

def train(params):
    # fetch data
    train_dataset = tf.data.TFRecordDataset(...)
    test_dataset = tf.data.TFRecordDataset(...)

    # build model
    x = create_input_layer(...)

    y = Dense(params['layer1'], activation='relu', name="dense0")(x)
    y = BatchNormalization()(y)
    y = Dropout(params['dropout'])(y)
    ...

    y = Dense(4, activation='softmax', name="softmax")(y)

    model = tf.keras.models.Model(x, y)
    optimizer = Adam(lr=params['lr'])
    model.compile(loss='categorical_crossentropy',
                  optimizer=optimizer,
                  metrics=['accuracy'])
    
    # train
    h = model.fit(
        train_dataset,
        steps_per_epoch=...,
        epochs=params['epochs'],
        validation_data=test_dataset,
        validation_steps=...,
    )

    # report reward
    metrics = {metric: values[-1] for metric, values in h.history.items()}
    nni.report_final_result( metrics['val_acc'] )

def main():
    params = nni.get_next_parameter()
    train(params)

if __name__ == '__main__':
    main()

with config.yaml as,

authorName: default
experimentName: nni_enas_training
trialConcurrency: 1
maxExecDuration: 1h
maxTrialNum: 10
trainingServicePlatform: local
# The path to Search Space
searchSpacePath: search_space.json
useAnnotation: false
tuner:
  builtinTunerName: ???
# The path and the running command of trial
trial:  
  command: python3 mydnn.py
  codeDir: .
  gpuNum: 1
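For comparison, if the goal were plain hyperparameter search over this space (rather than ENAS), the `???` would be one of NNI's built-in tuner names, e.g. TPE. This only illustrates the field's syntax; whether this repo's ENAS controller plugs in the same way is exactly the open question:

```yaml
# Illustration only: TPE is an NNI built-in tuner for ordinary
# hyperparameter search, not an ENAS controller.
tuner:
  builtinTunerName: TPE
  classArgs:
    optimize_mode: maximize
```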

and search_space.json as,

{
    "dropout": {
        "_type": "uniform",
        "_value": [0.5, 0.9]
    },
    "layer1": {
        "_type": "choice",
        "_value": [124, 512, 1024]
    },
    "layer2": {
        "_type": "choice",
        "_value": [124, 512, 1024]
    },
    "epochs": {
        "_type": "choice",
        "_value": [100, 200, 300]
    },
    "lr": {
        "_type": "choice",
        "_value": [0.0001, 0.0003, 0.001, 0.01]
    }
}
  • What would builtinTunerName be, and how do I integrate ENAS into the code?
  • Is it possible to have ENAS dynamically generate the number of layers as well, instead of hardcoding it to, say, 2, as in the example above?
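On the second point, even without ENAS a variable depth can be expressed by adding a hypothetical `num_layers` key to search_space.json (e.g. a `choice` over `[1, 2, 3]`) and looping over it in `train()`. The `num_layers` key and the loop below are assumptions for illustration, not part of the repo:

```python
# Hypothetical sketch: a "num_layers" choice plus the per-layer "layerN"
# choices (layer1, layer2, ...) let the tuner pick the depth.
def layer_sizes(params):
    """Collect the hidden-layer widths the tuner selected, in order."""
    return [params[f"layer{i}"] for i in range(1, params["num_layers"] + 1)]

# Inside train(), the fixed Dense(...) lines would become a loop:
#   for units in layer_sizes(params):
#       y = Dense(units, activation='relu')(y)
#       y = BatchNormalization()(y)
#       y = Dropout(params['dropout'])(y)

params = {"num_layers": 2, "layer1": 512, "layer2": 124, "dropout": 0.5}
print(layer_sizes(params))  # [512, 124]
```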

@Nithanaroy
Author

Any help @countif ? Thank you :)

@countif
Owner

countif commented Apr 9, 2019

Which dataset do you use?
It actually only supports CIFAR-10.

@Nithanaroy
Author

Nithanaroy commented Apr 9, 2019 via email
