Wednesday, 30 October 2019

TensorFlow 1 Nostalgia: Create a Graph and Run in a Session

TensorFlow 2 runs in Eager mode by default, in which functions return value-tensors instead of op-tensors. To build an Autograph, put the @tf.function annotation on the line right before 'def'. However, there's another way to use ops as real graph ops: chain them inside a graph's as_default() scope and run them with tf.compat.v1.Session; see the code below.
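
For comparison, a minimal TF2-style sketch of the same multiply-by-2 op wrapped in @tf.function (the function name times2 is just for illustration; the graph is traced behind the scenes while the call site stays eager):

import tensorflow as tf;

@tf.function                                   #traces a graph (Autograph) on the first call
def times2(Inp):
  return tf.multiply(Inp, 2, name="Mul");

print(times2(tf.constant([1.,2.])).numpy());   #value-tensor result: [2. 4.]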

Source code:
%tensorflow_version 2.x
%reset -f

#libs
import tensorflow as tf;

#create a new graph; inside a graph set as default, TF functions return op-tensors
#outside of that graph, TF functions return value-tensors, as Eager mode is the default in TF2
G = tf.Graph();  
with G.as_default():

  #G is now the default graph
  print("Is default graph:",tf.compat.v1.get_default_graph() is G);

  #no operations
  print("Operations:",G.get_operations());

  #add some operations
  print("\nAdding operations...");
  Inp    = tf.compat.v1.placeholder(tf.float32, [2], name="Inp");
  Times2 = tf.multiply(Inp, 2, name="Mul");

  #now having 1 placeholder op and 1 multiply op
  print("Operations:",G.get_operations());

#feed to graph
print("\nRun graph in session, result:");
S = tf.compat.v1.Session(graph=G);
R = S.run(Times2, feed_dict={Inp:[1,2]});
print(R);
#eof

Colab vs Paperspace vs Kaggle

Colab:
  • Free: Yes
  • IPython-based: Yes
  • Code blocks saved as-is: No (ipynb JSON)
  • TensorFlow 1: Yes
  • TensorFlow 2: Yes
  • Save to GDrive: Yes
  • Save to GitHub: Yes
  • Save to GitLab: No
  • Comfortable UI: Yes
  • GPU: Yes
Paperspace:
  • Free: Yes
  • IPython-based: Yes
  • Code blocks saved as-is: No (ipynb JSON)
  • TensorFlow 1: Yes
  • TensorFlow 2: Yes
  • Save to GDrive: No
  • Save to GitHub: No
  • Save to GitLab: No
  • Comfortable UI: No (Too big top bar)
  • GPU: Yes
Kaggle:
  • Free: Yes
  • IPython-based: Yes
  • Code blocks saved as-is: No (ipynb JSON)
  • TensorFlow 1: Yes
  • TensorFlow 2: No (Can't even !pip install tensorflow==2.0.0)
  • Save to GDrive: No
  • Save to GitHub: No
  • Save to GitLab: No
  • Comfortable UI: No (No left side panel)
  • GPU: Yes

Tuesday, 29 October 2019

TensorFlow: Save and Load to Continue Training (tf.Module instead of tf.keras.Model)

After saving a tf.Module with tf.saved_model.save, the model can be loaded back with tf.saved_model.load and can be trained further by applying gradients to:
  • M.Some_Layer.trainable_variables
  • M.Some_Var
The list Model.trainable_variables is no longer available on the model after loading.

Source code:
%tensorflow_version 2.x
%reset -f

#libs
import tensorflow as tf;
from tensorflow.keras.layers import *;

#constants
BSIZE = 4;

#model
class model(tf.Module):
  def __init__(this):
    super().__init__();
    #this.W1 = tf.Variable(tf.random.uniform([2,20], -1,1));
    #this.B1 = tf.Variable(tf.random.uniform([  20], -1,1));
    this.Layer1 = Dense(20, activation=tf.nn.leaky_relu);

    this.W2 = tf.Variable(tf.random.uniform([20,1], -1,1));
    this.B2 = tf.Variable(tf.random.uniform([   1], -1,1));

  @tf.function(input_signature=[tf.TensorSpec([BSIZE,2], tf.float32)])
  def __call__(this,Inp):
    #H1  = tf.nn.leaky_relu(tf.matmul(Inp,this.W1) + this.B1);
    H1  = this.Layer1(Inp);
    Out = tf.sigmoid(tf.matmul(H1,this.W2) + this.B2);    
    return Out;

#data (OR)
X = tf.convert_to_tensor([[0,0],[0,1],[1,0],[1,1]], tf.float32);
Y = tf.convert_to_tensor([[0],  [1],  [1],  [1]  ], tf.float32);

#train
Model = model();
Loss  = tf.losses.MeanSquaredError();
Optim = tf.optimizers.SGD(1e-1);
Steps = 10;

for I in range(Steps):
  if I%(Steps/10)==0:
    Out = Model(X);
    Lv  = Loss(Y,Out);
    print("Loss:",Lv.numpy());

  with tf.GradientTape() as T:
    Out = Model(X);
    Lv  = Loss(Y,Out);

  Grads = T.gradient(Lv, Model.trainable_variables);
  Optim.apply_gradients(zip(Grads, Model.trainable_variables));

Out = Model(X);
Lv  = Loss(Y,Out);
print("Loss:",Lv.numpy(),"(Last)");

#save
print("\nSaving model...");
Dir = "/tmp/models/test";
tf.saved_model.save(Model,Dir);

#load
print("\nLoading model...");
M = tf.saved_model.load(Dir);
print(vars(M).keys());
print(tf.keras.backend.flatten(M(X)).numpy());

#train more
print("\nContinue training...");
Steps = 1000;

for I in range(Steps):
  if I%(Steps/10)==0:
    Out = M(X);
    Lv  = Loss(Y,Out);
    print("Loss:",Lv.numpy());

  with tf.GradientTape() as T:
    Out = M(X);
    Lv  = Loss(Y,Out);

  Grads = T.gradient(Lv, M.Layer1.trainable_variables+[M.W2,M.B2]);
  Optim.apply_gradients(zip(Grads, M.Layer1.trainable_variables+[M.W2,M.B2]));

Out = M(X);
Lv  = Loss(Y,Out);
print("Loss:",Lv.numpy(),"(Last)");
print(tf.keras.backend.flatten(M(X)).numpy());
#eof

Why Unsupervised Learning & Reinforcement Learning are More Important than Supervised Learning

Supervised Learning (SL):
  • Requires some training data
  • This learning method only knows what it has learnt
Unsupervised Learning (UL, which checks outputs using a formula) and Reinforcement Learning (RL, which checks outputs using an environment):
  • Don't require training data
  • Generate data and test themselves --> some creativity!
However, the optimal method in practice is combining SL with both UL & RL to make IL: Incremental Learning.

Friday, 25 October 2019

Nginx Redirect: Return and Rewrite

Nginx redirect all using location+return:

server {
  listen ...;
  server_name ...;
  
  location ~* ^/(.*)$ {
    return 307 https://some.domain/$1;
  }
}

Nginx redirect all using rewrite:

server {
  listen ...;
  server_name ...;

  rewrite ^/(.*)$ https://some.domain/$1 redirect;
}

Nginx as proxy (to port 9999 for example):

server {
  listen ...;
  server_name ...;

  location / {
    proxy_pass http://localhost:9999;
  }
}

Thursday, 24 October 2019

TensorFlow: Load Model to Continue Training

A low-level TensorFlow model based on tf.Module is as easy to save as a Keras model, but it is harder to continue training after loading back, since custom functions must be written to assign the weight values. The following is an example of saving and loading a Keras model to continue training.

Source code:
%tensorflow_version 2.x
%reset -f

#libs
import tensorflow as tf;
from tensorflow.keras.layers import *;

#constants
BSIZE = 4;

#model
class model(tf.keras.Model):
  def __init__(this):
    super().__init__();
    this.W1 = tf.Variable(tf.random.uniform([2,20], -1,1));
    this.B1 = tf.Variable(tf.random.uniform([  20], -1,1));

    this.W2 = tf.Variable(tf.random.uniform([20,1], -1,1));
    this.B2 = tf.Variable(tf.random.uniform([   1], -1,1));

  #@tf.function(input_signature=[tf.TensorSpec([BSIZE,2])])
  def call(this,Inp):
    H1  = tf.nn.leaky_relu(tf.matmul(Inp,this.W1) + this.B1);
    Out = tf.sigmoid(tf.matmul(H1,this.W2) + this.B2);
    return Out;

#data
X = tf.convert_to_tensor([[0,0],[0,1],[1,0],[1,1]], tf.float32);
Y = tf.convert_to_tensor([[0],  [1],  [1],  [0]  ], tf.float32);

#train
Model = model();

#hard to resume training with this low-level training procedure:
'''
Loss  = tf.losses.MeanSquaredError();
Optim = tf.optimizers.SGD(1e-1);
Steps = 100;

for I in range(Steps):
  if I%(Steps/10)==0:
    Out = Model(X);
    Lv  = Loss(Y,Out);
    print("Loss:",Lv.numpy());

  with tf.GradientTape() as T:
    Out = Model(X);
    Lv  = Loss(Y,Out);

  Grads = T.gradient(Lv, Model.trainable_variables);
  Optim.apply_gradients(zip(Grads, Model.trainable_variables));

Out = Model(X);
Lv  = Loss(Y,Out);
print("Loss:",Lv.numpy(),"(Last)");
'''

#easier to resume training with keras
Model.compile(loss=tf.losses.MeanSquaredError(), optimizer=tf.optimizers.SGD(1e-1));
Model.fit(X,Y, batch_size=4, epochs=10, verbose=0);
print("Test:");
print(Model.predict(X, batch_size=4, verbose=0));

#save
print("\nSaving model...");
tf.keras.models.save_model(Model,"/tmp/models/test");

#load
print("\nLoading model to train more...");
M = tf.keras.models.load_model("/tmp/models/test");
print(M.predict(X, batch_size=4, verbose=0));

#continue training
M.fit(X,Y, batch_size=4, epochs=5000, verbose=0);
print("\nTest:");
print(M.predict(X, batch_size=4, verbose=0));
#eof

Wednesday, 23 October 2019

3D Plotting with Numpy


Source code:
%tensorflow_version 2.x
%reset -f

#core
import math;

#libs
import numpy      as np;
import tensorflow as tf;
from tensorflow.keras.layers import *;

import matplotlib.pyplot as pp;
from mpl_toolkits import mplot3d;

#plot
pp.figure(figsize=[10,10]);
pp3d = pp.axes(projection="3d",elev=45,azim=20);
X    = np.linspace(-10,10, 50);
Y    = np.linspace(-10,10, 50);
X,Y  = np.meshgrid(X,Y);
Z    = np.sin(X/math.pi/2)*np.cos(Y/math.pi/2)*(X-Y);

#pp3d.plot_wireframe(X,Y,Z);
pp3d.plot_surface(X,Y,Z, cmap="coolwarm");
#eof

Monday, 21 October 2019

ReLU as a Function of Single Infinite Domain


In machine learning, ReLU = max(0,x); this is usually written as a piecewise function with 2 cases (2 sub-domains):
  • f = 0 for x < 0
  • f = x for x >= 0
However, ReLU can also be expressed as a single formula over the entire (infinite) domain:

ReLU = f(x) = [ abs(x) + x ] / 2
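
A quick NumPy check (a minimal sketch) that the single formula agrees with max(0,x):

import numpy as np;

x = np.linspace(-5,5, 11);
relu_piecewise = np.maximum(0,x);                 #max(0,x)
relu_closed    = (np.abs(x) + x) / 2;             #[ abs(x) + x ] / 2
print(np.allclose(relu_piecewise, relu_closed));  #True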

ML Activation Functions and Their Uses

Linear Activation (both negative and positive to infinity):

Tanh Activation (limited negative, limited positive):

Sigmoid Activation (no negative, limited positive):

ReLU (no negative, positive to infinity):

Softplus aka Smooth ReLU (rectifier, smooth value transition):

Basically, an activation function is selected depending on what range the output values should cover:
  • Identity activation: No negative limit, no positive limit.
  • Tanh: Limited negative, limited positive.
  • Sigmoid: No negative, limited positive.
  • ReLU: No negative, no positive limit.
  • Etc.
Any custom function can be used as an activation function (see the sketch below), for example, a function similar to Tanh:

f(x) = [ sign(x)*(abs(x) - ln(cosh(x))) ] / 0.7

Softsign is a sigmoid-like activation function that utilises sign(x) and abs(x) too:

softsign(x) = x / (abs(x)+1)
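
As a sketch only (assuming tf.keras; tanh_like is just an illustrative name for the formula above), any such callable can be passed as a layer's activation, and softsign is built in as tf.nn.softsign:

import tensorflow as tf;

#custom tanh-like activation, from the formula above
def tanh_like(x):
  return tf.sign(x) * (tf.abs(x) - tf.math.log(tf.math.cosh(x))) / 0.7;

#any callable works as an activation; softsign is also available built in
Layer1 = tf.keras.layers.Dense(20, activation=tanh_like);
Layer2 = tf.keras.layers.Dense(1,  activation=tf.nn.softsign);

X = tf.random.uniform([4,2], -1,1);
print(Layer2(Layer1(X)).numpy());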

Friday, 18 October 2019

Python 3D Plotting with matplotlib

The following is a plot of a sample function f(x,y) = sin(x)*cos(y)*(x-y).

Source code:
%reset -f

#core
import math;

#libs
import numpy             as np;
import matplotlib.pyplot as pp;

from mpl_toolkits import mplot3d;

#function to plot
Pi = math.pi;
f  = lambda x,y: math.sin(x/Pi/2)*math.cos(y/Pi/2)*(x-y);

#make data
Len = 50;
X   = np.linspace(-10,10, Len);
Y   = np.linspace(-10,10, Len);
X,Y = np.meshgrid(X,Y);
Z   = [];

for I in range(Len):
  Z += [[]];
  for J in range(Len):
    Z[I] += [f(X[I][J],Y[I][J])]; #any function of x,y  

#plot
pp.figure(figsize=[10,10]);
pp3d = pp.axes(projection="3d", elev=50, azim=70);

pp3d.set_title("Function z = f(x,y)");
pp3d.set_xlabel("x values");
pp3d.set_ylabel("y values");
pp3d.set_zlabel("z values");
pp3d.plot_wireframe(X,Y,np.array(Z));
#pp3d.plot_surface(X,Y,np.array(Z), cmap="coolwarm");
#eof

The 4 Factors of Machine Learning

The 4 factors of machine learning:


1) Is there training data?
2) Should training data be generated (all of it, or more of it)?
3) Is there a formula to check the output?
4) Should the output be checked in a virtual (bot) or real (robot) environment?

For Supervised Learning (SL):
1) Yes
2) No
3) No
4) No

For Unsupervised Learning (UL):
1) No
2) Yes
3) Yes
4) No

For Reinforcement Learning (RL):
1) No
2) Yes
3) No
4) Yes

Summary:

With training data --> Supervised Learning
No training data, generate them, then:
  • Check output using formula --> Unsupervised Learning
  • Check output using environment --> Reinforcement Learning
Any of the 3 above with more data added to train --> Incremental Learning.

Thursday, 17 October 2019

TensorFlow: Draw Loss Landscape of Single Neuron that Learns OR

A single neuron with bias, sigmoid activation, and 2 inputs; data from the OR truth table. Loss landscape plot:

Source code:
%tensorflow_version 2.x
%reset -f

#libs
import tensorflow        as tf;
import numpy             as np;
import matplotlib.pyplot as pp;

from mpl_toolkits import mplot3d;

#constants
BSIZE = 4;

#model
class model(tf.Module):
  def __init__(this):
    super().__init__();
    this.W = tf.Variable(tf.random.uniform([2,1], -1,1));
    this.B = tf.Variable(tf.random.uniform([  1], -1,1));

  @tf.function(input_signature=[tf.TensorSpec([BSIZE,2])])
  def __call__(this,Inp):
    return tf.sigmoid(tf.matmul(Inp,this.W) + this.B);

  def ff(this,Inp):
    Out = tf.sigmoid(tf.matmul(Inp,this.W) + this.B);
    return Out;

#data
X = tf.convert_to_tensor([[0,0],[0,1],[1,0],[1,1]], tf.float32);
Y = tf.convert_to_tensor([[0],  [1],  [1],  [1]  ], tf.float32);

#train
Model = model();
Loss  = tf.losses.MeanAbsoluteError();
Optim = tf.optimizers.SGD(1e-1);
Steps = 5000;
Xyz   = [];
#'''
for I in range(Steps):
  if I%(Steps/10)==0:
    Out       = Model(X);
    Lossvalue = Loss(Y,Out);
    print("Loss:",Lossvalue.numpy());
    Xyz += [[Model.W.numpy()[0],Model.W.numpy()[1],Lossvalue.numpy()]];

  with tf.GradientTape() as T:
    Out       = Model(X);
    Lossvalue = Loss(Y,Out);

  Grads = T.gradient(Lossvalue, Model.trainable_variables);
  Optim.apply_gradients(zip(Grads, Model.trainable_variables));

Out       = Model(X);
Lossvalue = Loss(Y,Out);
print("Loss:",Lossvalue.numpy(),"(Last)");
Xyz += [[Model.W.numpy()[0],Model.W.numpy()[1],Lossvalue.numpy()]];

print("\nWeights of optimum:");
W = tf.keras.backend.flatten(Model.W).numpy();
print(W);
#'''
#loss landscape
D  = 50;
P  = np.linspace(-10,10, D); #marker points
W1 = [];
W2 = [];
for I in range(D):
  W1 += [[]];
  W2 += [[]];  
  for J in range(D):
    W1[I] += [P[I]];
    W2[I] += [P[J]];

print("\nW1",W1);
print("W2",W2);
Z  = [];

for I in range(D):
  Zrow = [];
  for J in range(D):
    Model.W = tf.convert_to_tensor([[P[I]],[P[J]]], tf.float32);
    Out     = Model.ff(X);
    Lossval = Loss(Y,Out).numpy();
    Zrow   += [Lossval];

  Z += [Zrow];

print("Z:",Z);
Z = np.array(Z);

pp.figure(figsize=(8,8));
pp3d = pp.axes(projection="3d",elev=10,azim=10);
pp3d.text(0,0,0,"(0,0,0)");
pp3d.text(W[0],W[1],0,"Optimum");

pp3d.plot([0,10],[0,0],"-r");
pp3d.plot([0,0],[0,10],"-g");
pp3d.plot([0,0],[0,0],[0,1],"-b");
pp3d.plot([W[0]],[W[1]],[0],"yo");

pp3d.set_title("Loss Landscape");
pp3d.set_xlabel("Weight1");
pp3d.set_ylabel("Weight2");
pp3d.set_zlabel("Loss");
pp3d.plot_wireframe(W1,W2,Z, cmap="coolwarm");

#gradient descent curve
W1s = [];
W2s = [];
Ls  = [];
for I in range(len(Xyz)):
  W1s += [Xyz[I][0]];
  W2s += [Xyz[I][1]];
  Ls  += [Xyz[I][2]];  

pp3d.plot(W1s,W2s,Ls,"-ro");
#eof

Wednesday, 16 October 2019

TensorFlow: Single Neuron Linear Regression without Bias

Source code:
%tensorflow_version 2.x
%reset -f

#libs
import tensorflow as tf;

#constants
BSIZE = 1;

#model
class model(tf.Module):
  def __init__(this):
    super().__init__();
    this.W1 = tf.Variable(tf.random.uniform([2,1], -1,1));
  
  @tf.function(input_signature=[tf.TensorSpec([BSIZE,2])])
  def __call__(this,Inp):
    return tf.matmul(Inp,this.W1);

#data
X = tf.convert_to_tensor([[1,2]],tf.float32);
Y = tf.convert_to_tensor([[3  ]],tf.float32);

#train
Model = model();
Loss  = tf.losses.LogCosh();
Optim = tf.optimizers.SGD(1e-1);
Steps = 10;

for I in range(Steps):
  if I%(Steps/10)==0:
    Out       = Model(X);
    Lossvalue = Loss(Y,Out);
    print("Loss:",Lossvalue.numpy());

  with tf.GradientTape() as T:
    Out       = Model(X);
    Lossvalue = Loss(Y,Out);

  Grads = T.gradient(Lossvalue, Model.trainable_variables);
  Optim.apply_gradients(zip(Grads, Model.trainable_variables));

Out       = Model(X);
Lossvalue = Loss(Y,Out);
print("Loss:",Lossvalue.numpy(),"(Last)");

#test
print("\nTest:");
print(X.numpy()[0],"-->",Y.numpy()[0]);
print(Model(X).numpy()[0][0]);

print("\nDone.");
#eof

Friday, 11 October 2019

PyTorch: Single Neuron Linear Regression with Gradient Descent

PyTorch has auto-differentiation just like TensorFlow. The following code shows a single neuron with linear activation (identity activation) that learns a simple regression through Gradient Descent.

Source code:
%reset -f

#libs
import torch as t;

#data
X = t.tensor([[1.,2.]]);
Y = t.tensor([ 3.    ]);

#model
W1 = t.rand(2, requires_grad=True);

def feedforward(Inp):
  return t.dot(Inp,W1);

#before train
print("Before train:");
print(float(feedforward(X[0])));

#train
Steps = 50;
print("\nTraining...");

for I in range(Steps):
  for J in range(len(X)):
    #forward
    Inp = X[J];
    Exp = Y[J];
    Out = feedforward(Inp);

    #backward
    Delta = Out-Exp;
    Loss  = Delta**2; #loss function of delta (error)
    Grad  = 2*Delta;  #gradient of loss function
    Out.backward(Grad);

    #apply grads
    W1.data -= 0.01*W1.grad.data;
    W1.grad.data.zero_();

  if I%(Steps/10)==0:
    print("Loss:",float(Loss));

print("\nAfter train:");
print(float(feedforward(X[0])));

print("\nDone.");
#eof

Comparison of Major Machine Learning Related Libraries

TensorFlow (tensorflow.org):
  • Python, auto-differentiation, advanced features (RNN, Conv)
  • C/C++ APIs:            Yes (also JS, Swift)
  • CUDA:                  Yes
  • Distributed computing: Yes
  • Production ready:      Yes
PyTorch - Similar to Torch (pytorch.org):
  • Python, auto-differentiation, advanced features (RNN, Conv)
  • C/C++ APIs:            Yes
  • CUDA:                  Yes
  • Distributed computing: Yes
  • Production ready:      Yes
Chainer - Similar to Torch (chainer.org):
  • Python, auto-differentiation, advanced features (RNN, Conv)
  • C/C++ APIs:            Yes (Unstable)
  • CUDA:                  Yes
  • Distributed computing: Yes
  • Production ready:      Yes
CNTK:
  • A competitive ML lib from Microsoft.
Keras:
  • High-level APIs only, an interface for TensorFlow, CNTK, Theano, etc.
  • Standalone multi-backend Keras is winding down in favour of tf.keras inside TensorFlow.
  • Python-based wrapper; its speed depends on the C/C++ backend underneath.
OpenCV:
  • Computer Vision only.
Shogun:
  • High-level APIs only.
Scikit-Learn:
  • No auto-differentiation!
Pandas:
  • No ML built-in, column data processing
NumPy:
  • No ML built-in, N-dimension array processing
SciPy:
  • Math functions
Matplotlib:
  • Data plotting
More to consider:
  • Dlib, Accord.NET, mlpack, DyNet, OpenNN, ML.NET, Sonnet, MXNet, Gluon, DL4J, Onnx, ml.js, brain.js, ConvNet.js, WebDNN, XGBoost, StatsModels, LightGBM, CatBoost, PyBrain, Eli5, fast.ai, TFLearn, Lasagne, nolearn, Elephas, Seaborn, Synaptic, KerasJS, NeuroJS

New SameSite Cookie Option Enforcement on Chrome

Chrome now requires cookies to be set with the SameSite option. Without this option, SameSite is considered to be None, but Chrome also requires that SameSite=None go together with the Secure option, which means all web requests have to go through HTTPS.

These are the cases:
1. SameSite not set
The browser considers that SameSite=None, and shows a warning if the cookie is set in content served through HTTP instead of HTTPS.

2. SameSite=None
The cookie is sent with cross-site requests, but Chrome requires it to also carry the Secure option (HTTPS only).

3. SameSite=Lax
The cookie is sent with same-site requests and with top-level cross-site navigations (e.g. clicking a link).

4. SameSite=Strict
The cookie is sent with same-site requests only.
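
As an illustration (a minimal sketch, assuming Python 3.8+ where http.cookies supports the samesite attribute; the cookie name 'sid' is made up), a cookie that satisfies Chrome's rule can be emitted like this:

from http import cookies;

C = cookies.SimpleCookie();
C["sid"] = "abc123";             #made-up cookie name and value for illustration
C["sid"]["samesite"] = "None";   #cross-site cookie...
C["sid"]["secure"]   = True;     #...must also be Secure (HTTPS only) for Chrome
print(C.output());               #emits a Set-Cookie header including SameSite=None and Secure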

Many libraries, for example Socket.io, have their own cookies besides the cookies of the website/webapp containing them. The warning will keep showing until those libraries are updated.

Socket.io has a cookie named 'io' because it runs on both WebSocket and HTTP/HTTPS long-polling.

Thursday, 10 October 2019

TensorFlow: Simple Single Neuron Linear Regressor

A single neuron with linear activation (aka identity activation, or effectively no activation) can do linear regression too. The following is low-level TensorFlow code for the task.

Source code:
%tensorflow_version 2.x
%reset -f

#libs
import tensorflow              as tf;
from   tensorflow.keras.layers import *;

#constants
BSIZE = 1;

#model
class model(tf.Module):
  def __init__(this):
    super().__init__();
    this.W1 = tf.Variable(tf.random.uniform([2,1], -1,1));
    this.B1 = tf.Variable(tf.random.uniform([  1], -1,1));

  @tf.function(input_signature=[tf.TensorSpec([BSIZE,2])])
  def __call__(this,X):
    Out = tf.matmul(X,this.W1) + this.B1;
    return Out;

#data
X = tf.convert_to_tensor([[1,2]],tf.float32);
Y = tf.convert_to_tensor([[3]  ],tf.float32);

#train
Model = model();
Loss  = tf.losses.LogCosh();
Optim = tf.optimizers.SGD(1e-1);
Steps = 10;

for I in range(Steps):
  if I%(Steps/10)==0:
    Out       = Model(X);
    Lossvalue = Loss(Y,Out);
    print("Loss:",Lossvalue.numpy());

  with tf.GradientTape() as T:
    Out       = Model(X);
    Lossvalue = Loss(Y,Out);

  Grads = T.gradient(Lossvalue, Model.trainable_variables);
  Optim.apply_gradients(zip(Grads, Model.trainable_variables));

Out       = Model(X);
Lossvalue = Loss(Y,Out);
print("Loss:",Lossvalue.numpy(),"(Last)");

print("\nTest");
print(Model(X).numpy());

print("\nDone.");
#eof

Tuesday, 8 October 2019

Sample: TensorFlow 2.x Estimator 'DNNRegressor'

TensorFlow has ready-to-use networks like DNNRegressor, DNNClassifier, and RNNClassifier (experimental in TF 2.0). These networks are easy to use, but there's a limitation: they are hard to customise. For example, DNNRegressor has only 1 constructor param named `activation_fn`, which is used for all layers of the network, whereas in practice it is often better to use different activation functions for the hidden layers and the output layer.

Source code:
%tensorflow_version 2.x
%reset -f

#libs
import tensorflow              as tf;
from   tensorflow.keras.layers import *;

import numpy             as np;
import matplotlib.pyplot as pp;
import logging;

#disable tf log
tf.get_logger().setLevel(logging.ERROR)

#input shape
Input = tf.feature_column.numeric_column("X",shape=[2]);

#ready-to-use network
Model = tf.estimator.DNNRegressor(
  hidden_units    = [20,1],
  feature_columns = [Input],
  activation_fn   = tf.identity
);

#data
def input_fn():
  return {"X":tf.convert_to_tensor([[1.,2.]])},tf.convert_to_tensor([3.]);

#train
print("Training...");
for I in range(10):
  R = Model.evaluate(input_fn,steps=1);
  print("Loss:",R["loss"]);
  Model.train(input_fn,steps=1000);

#test
print("\nPredict:");
print(next(Model.predict(input_fn))["predictions"][0]);

print("\nDone.");
#eof

Sunday, 6 October 2019

Regression vs Classification

The main difference between regression and classification:
  • Regression: Output is a value in a continuous range.
  • Classification: Output is a class index or a set of class probabilities.
Single output neuron targeting labels:
  • This is classification
Multiple output neurons targeting probabilities of labels:
  • This is still classification
Example of regression (see the output-layer sketch below):
  • Linear activation (full numeric range)
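
A minimal sketch (assuming tf.keras; the layer sizes and the 3 classes are made up) of how the output layer usually differs between the two:

import tensorflow as tf;

#regression head: 1 linear output, full numeric range
Regressor = tf.keras.Sequential([
  tf.keras.layers.Dense(20, activation="relu", input_shape=(2,)),
  tf.keras.layers.Dense(1)                        #linear (identity) activation
]);

#classification head: one probability per class (softmax over 3 made-up classes)
Classifier = tf.keras.Sequential([
  tf.keras.layers.Dense(20, activation="relu", input_shape=(2,)),
  tf.keras.layers.Dense(3, activation="softmax")
]);

X = tf.random.uniform([4,2]);
print(Regressor(X).numpy());    #unbounded values
print(Classifier(X).numpy());   #rows of probabilities summing to 1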

Thursday, 3 October 2019

Load Image Files and Make Ready-to-Train Data

TensorFlow has built-in utility functions to read a file, decode the image format, convert value types, and resize the image data before training. The `load_img` utility function below shows how to load a JPEG/JPG file into ready-to-train data of shape (height, width, channels):

Example to get file paths:
Dataset = tf.data.Dataset.list_files(DATA_DIR+"/roses/*").take(10);

Source code:
#assumes IMG_WIDTH and IMG_HEIGHT are defined elsewhere
def load_img(Path):
  File = tf.io.read_file(Path);                         #raw file bytes
  Img  = tf.image.decode_jpeg(File, channels=3);        #uint8 tensor of shape (height,width,3)
  Img  = tf.image.convert_image_dtype(Img,tf.float32);  #scale values to [0,1]
  Img  = tf.image.resize(Img, [IMG_HEIGHT,IMG_WIDTH]);  #tf.image.resize expects [height,width]
  return Img;
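
A sketch of wiring the two snippets above together with tf.data (DATA_DIR, IMG_WIDTH, and IMG_HEIGHT are assumed to be defined as above):

Dataset = tf.data.Dataset.list_files(DATA_DIR+"/roses/*").take(10);
Dataset = Dataset.map(load_img).batch(4);  #each batch: (batch,IMG_HEIGHT,IMG_WIDTH,3) float32

for Batch in Dataset:
  print(Batch.shape);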

Wednesday, 2 October 2019

TensorFlow: Save to .pb File of a Low-Level Model

Save/load a model subclassing from tf.keras.Model:
  • Save: tf.saved_model.save(Model,Model_Dir);
  • Load: tf.keras.models.load_model(Model_Dir);
In order to save a low-level model (one using tf.Variable), the tf.Variable(s) must be under a tf.Module instead of being global vars.
  • Save: tf.saved_model.save(Model,Model_Dir);
  • Load: tf.saved_model.load(Model_Dir);
Source code:
%tensorflow_version 2.x
%reset -f

#libs
import tensorflow as tf;

#constants
BSIZE = 4;

#model
class model(tf.Module):
  def __init__(this):
    super().__init__();
    this.W1 = tf.Variable(tf.random.uniform([2,20], -1,1));
    this.B1 = tf.Variable(tf.random.uniform([  20], -1,1));                          
    this.W2 = tf.Variable(tf.random.uniform([20,1], -1,1));
    this.B2 = tf.Variable(tf.random.uniform([   1], -1,1));                          
                          
  @tf.function(input_signature=[tf.TensorSpec([BSIZE,2],tf.float32)])
  def __call__(this,X):
    H1  = tf.nn.leaky_relu(tf.matmul(X,this.W1) + this.B1);
    Out = tf.sigmoid(tf.matmul(H1,this.W2) + this.B2);
    return Out;

def get_loss(Out):
  return tf.reduce_sum(tf.square(Y-Out));

#PROGRAMME ENTRY POINT==========================================================
#data
X = tf.convert_to_tensor([[0,0],[0,1],[1,0],[1,1]],tf.float32);
Y = tf.convert_to_tensor([[0],  [1],  [1],  [0]  ],tf.float32);

#train
M     = model();
Optim = tf.keras.optimizers.SGD(1e-1);

for I in range(1000):
  if I%100==0:
    Out  = M(X);
    Loss = get_loss(Out);
    print("Loss:",Loss.numpy());

  with tf.GradientTape() as T:
    Out  = M(X);
    Loss = get_loss(Out);

  Grads = T.gradient(Loss, [M.W1,M.B1,M.W2,M.B2]);
  Optim.apply_gradients(zip(Grads, [M.W1,M.B1,M.W2,M.B2]));
#end for

Out  = M(X);
Loss = get_loss(Out);
print("Loss:",Loss.numpy(),"(Last)");

print("\nSaving...");
tf.saved_model.save(M,"/tmp/test-model");

print("\nEval from previous save:");
M2 = tf.saved_model.load("/tmp/test-model")
print(tf.round(M2(X)).numpy());

print("\nDone.");
#eof

Keras Model: Save to .pb File and Load Back

It's rather hard to save and load tf.Variable(s) in an ML model written with only tensor maths, as the currently available saver utils only support Keras models (unless you design your own saving format). These utils are listed below (there's no more tf.train.Saver in TF2, as TF2 has no sessions):
  • tf.keras.callbacks.ModelCheckpoint
  • tf.keras.Model.save
The above saver utils only save weights and that kind of variable. To save a full frozen model, there are 2 formats: HDF5 or SavedModel (which will be the default in TF2). Let Model be a Keras model; save and load as follows:

Save:
tf.saved_model.save(Model,Path_To_A_Dir);

Load:
Feedonly_Model = tf.saved_model.load(Path_To_Model_Dir);
Keras_Model    = tf.keras.models.load_model(Path_To_Model_Dir);

Source code:
%tensorflow_version 2.x
%reset -f

#libs
import tensorflow as tf;

#data
X = [[0,0],[0,1],[1,0],[1,1]];
Y = [[0],  [1],  [1],  [0]  ];
X = tf.convert_to_tensor(X,tf.float32);
Y = tf.convert_to_tensor(Y,tf.float32);

#model
class model(tf.keras.Model):
  def __init__(this):
    super().__init__();
    this.Layer1 = tf.keras.layers.Dense(20, activation=tf.nn.leaky_relu);
    this.Out    = tf.keras.layers.Dense(1,  activation=tf.sigmoid);

  def call(this,X):
    H1  = this.Layer1(X);
    Out = this.Out(H1);
    return Out;
#end class

#saver for training (model check point is not a frozen graph, not convenient for inference)
Checkpoint = "/tmp/test/model.ckpt";
Callback = tf.keras.callbacks.ModelCheckpoint(
  filepath  =Checkpoint,
  save_freq =len(X)*100,
  verbose   =True
);

#train (or load)
#Model = tf.saved_model.load("/tmp/test/model"); #feed only, not keras model
Model = tf.keras.models.load_model("/tmp/test/model");
print(Model.evaluate(X,Y, batch_size=4, verbose=0));
'''
print("Training...");
Model = model();
Model.compile(loss     =tf.losses.mean_squared_error,
              optimizer=tf.keras.optimizers.SGD(1e-1),
              metrics  =["accuracy"]);

for I in range(10):
  Out = Model.evaluate(X,Y, batch_size=4, verbose=0);
  print("\nI =",I); 
  print(Out);
  Model.fit(X,Y, batch_size=4, epochs=100, verbose=0) #, 
            #callbacks=[Callback]);

Out = Model.evaluate(X,Y, batch_size=2, verbose=0);
print("\nAfter training:"); 
print(Out,"(Last)");
tf.saved_model.save(Model,"/tmp/test/model");
'''
print("\nDone.");
#eof