A model based on tf.Module (low-level):
import tensorflow as tf

class model(tf.Module):
    def __init__(self):
        super().__init__()
        # create vars, e.g.
        # self.MyVar = tf.Variable(...)

    @tf.function
    def __call__(self, Inp):
        # do some calculation with Inp
        Out = ...
        return Out
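For instance, a minimal sketch that fills in the placeholders as a one-variable linear model (the names LinearModule, W, B and the linear computation are illustrative, not part of the original template):

class LinearModule(tf.Module):
    def __init__(self):
        super().__init__()
        # one weight and one bias; tf.Module tracks them automatically
        self.W = tf.Variable(tf.random.normal([1, 1]))
        self.B = tf.Variable(tf.zeros([1]))

    @tf.function
    def __call__(self, Inp):
        # a simple linear map: Inp @ W + B
        return tf.matmul(Inp, self.W) + self.B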
A model based on tf.keras.Model (high-level):
class model(tf.keras.Model):
    def __init__(self):
        super().__init__()
        # create vars, eg.
        # self.MyVar = tf.Variable(...)

    @tf.function
    def call(self, Inp):
        # do some calculation with Inp
        Out = ...
        return Out
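Filled in the same way, a minimal sketch with a single Dense layer (the layer and its output size are assumptions for illustration):

class LinearKerasModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        # a single dense layer; Keras tracks its variables automatically
        self.dense = tf.keras.layers.Dense(1)

    @tf.function
    def call(self, Inp):
        return self.dense(Inp)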
Train the class based on tf.Module:
Model = model()
Loss = tf.losses.MeanSquaredError()
Optim = tf.optimizers.SGD(1e-1)
Steps = 1000

for I in range(Steps):
    # forward pass and loss under the tape
    with tf.GradientTape() as T:
        Lv = Loss(Y, Model(X))
    # backpropagate and update the tracked variables
    Grads = T.gradient(Lv, Model.trainable_variables)
    Optim.apply_gradients(zip(Grads, Model.trainable_variables))
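Both training snippets, the loop above and the fit call below, assume that X, Y, and BSIZE already exist; a purely illustrative toy regression setup could be:

import numpy as np

# 100 scalar samples of a noisy linear relation, shaped [N, 1]
X = np.random.rand(100, 1).astype("float32")
Y = 3.0 * X + 2.0 + 0.1 * np.random.randn(100, 1).astype("float32")
BSIZE = 10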
Train the class based on tf.keras.Model:
Model = model()
Loss = tf.losses.MeanSquaredError()
Optim = tf.optimizers.SGD(1e-1)
Steps = 1000
# one epoch is len(X)/BSIZE steps, so convert the step budget into a whole number of epochs
Epochs = int(Steps / (len(X) / BSIZE))

Model.compile(loss=Loss, optimizer=Optim)
Model.fit(X, Y, batch_size=BSIZE, epochs=Epochs, verbose=0)
Save the tf.Module-based model:
tf.saved_model.save(Model, SOME_DIR_PATH)
Save the tf.keras.Model-based model:
tf.keras.models.save_model(Model, SOME_DIR_PATH)
Load the tf.Module-based model and continue training:
M = tf.saved_model.load(SOME_DIR_PATH)
# the loaded object exposes the tracked sub-objects of the original model,
# so the trainable variables are collected from them explicitly
Vars = M.Some_Keras_Layer.trainable_variables + [M.Some_Var]

for I in range(Steps):
    with tf.GradientTape() as T:
        Lv = Loss(Y, M(X))
    Grads = T.gradient(Lv, Vars)
    Optim.apply_gradients(zip(Grads, Vars))
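If the attribute names of the saved model are not known in advance, the variables can instead be collected from the tape; a sketch, assuming Loss, Optim, Steps, X, and Y are defined as above:

M = tf.saved_model.load(SOME_DIR_PATH)
for I in range(Steps):
    with tf.GradientTape() as T:
        Lv = Loss(Y, M(X))
    # every trainable variable used in the forward pass was watched by the tape
    Vars = T.watched_variables()
    Grads = T.gradient(Lv, Vars)
    Optim.apply_gradients(zip(Grads, Vars))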
Load the tf.keras.Model-based model and continue training:
M = tf.keras.models.load_model(SOME_DIR_PATH)
# the compiled loss and optimizer are restored with the model by default,
# so fit can usually be called without recompiling
M.fit(X, Y, batch_size=BSIZE, epochs=Epochs, verbose=0)