Wednesday, 20 November 2019

Create a Simple CentOS Docker Container and Attach/Detach from It

#install docker
sudo yum install docker -y

#start docker
sudo systemctl start docker

#pull centos image
sudo docker pull centos

#create a centos container with interactive shell
#c1 is any name for the container
sudo docker container create -it --name c1 centos

#start the container
sudo docker container start c1

#attach to the container
sudo docker container attach c1

#detach from the container
#press: Ctrl+P, then Ctrl+Q

#stop the container
sudo docker container stop c1

#remove all stopped containers
sudo docker container prune
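
To verify a container's state at any point (created, running, or exited), list all containers:

#list all containers with their status
sudo docker container ls -a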

Tuesday, 19 November 2019

Create Docker with SSHD and Bind Host Ports to Its Ports

A docker image with systemd and sshd:
https://hub.docker.com/r/ravik694/c7-systemd-sshd

Install Docker:
sudo yum install docker -y

1. Pull the image:
sudo docker pull ravik694/c7-systemd-sshd

2. Create a container named 'd1':
sudo docker run -ti -d -P -p 122:22 --privileged --name d1 -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /tmp/$(mktemp -d):/run ravik694/c7-systemd-sshd

Adjust the host port for SSH (122 above) and the container name (d1 above) as needed. Mapping the SSH port is usually enough; the -P option already publishes the image's other exposed ports to random host ports. See step 3 below for getting container IPs, which Nginx can proxy_pass to.
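
For example, a second container can be created the same way with a different name and SSH host port (d2 and 222 below are arbitrary choices):

sudo docker run -ti -d -P -p 222:22 --privileged --name d2 -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /tmp/$(mktemp -d):/run ravik694/c7-systemd-sshd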

3. Get virtual IP of container d1:
sudo docker inspect d1 | grep IPA

4. SSH to it and change root password:
#for the first container, step 3 gives: 172.17.0.2
ssh root@IP_IN_STEP_3
#type 'toor' for password
#change passwd for root
passwd root

5. Test SSH over host port:
#host port 32771 was auto-assigned by -P to this first container,
#so it can't be reused when creating a second container;
#the image binds different auto-assigned ports for more containers:
#32771 --> firstcontainer:22
#32773 --> secondcontainer:22
#plus 2, plus 2...
ssh root@localhost -p 32771
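
Rather than relying on the plus-2 pattern, Docker can list the exact mappings that -P assigned to a container:

#show all port mappings of container d1
sudo docker port d1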

6. Map host port to docker port (for a running container):
THIS IS NOT A SOLUTION FOR THOSE WHO DON'T WANT A FIREWALL SERVICE.
https://forums.docker.com/t/how-to-expose-port-on-running-container/3252/13

7. Map host port to docker port
THIS IS THE BETTER SOLUTION.
https://stackoverflow.com/a/49371983/5581893

a. Stop the running container, 'd1' for example
sudo docker container stop d1

b. Find the container Id
sudo docker container ls -a

c. Edit 'config.v2.json' of the container (found under /var/lib/docker/containers/<container-id>/)
* Open the file as the 'root' user, through tools such as WinSCP. To allow root login over WinSCP, first sudo-edit /etc/ssh/sshd_config on the host to enable password authentication and root login.
* Use a desktop editor like Sublime Text with the 'Pretty JSON' package to format the JSON in the file.
* Find 'ExposedPorts' in the file to add more open ports in the container (see the JSON sketch after step d).

d. Edit 'hostconfig.json' of the container (in the same directory)
* Open and format the file the same way as in step c.
* Find 'PortBindings' in the file to map more ports from host to container.
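
For reference, the relevant entries look roughly like this (port 8080 below is just a hypothetical extra port; the surrounding structure may differ between Docker versions). In 'config.v2.json':

"ExposedPorts": {"22/tcp": {}, "8080/tcp": {}}

And in 'hostconfig.json':

"PortBindings": {"22/tcp": [{"HostIp": "", "HostPort": "122"}], "8080/tcp": [{"HostIp": "", "HostPort": "8080"}]}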

e. Restart Docker
sudo systemctl restart docker

f. Start the container again
sudo docker container start d1

Thursday, 14 November 2019

Different Methods of Turning a Python Function into C-based Code

Python code must be converted into C-based code for performance and to utilise the CPU together with the GPU. These are some methods:

1. Use @tf.function decorator
Recommended method in TensorFlow 2

2. Use tf.function directly over some 'def' (pure Python function)
Not recommended, although this yields the same result as method 1.

3. Use tf.autograph.to_graph to create Autograph
Not recommended, although this creates a callable similar to tf.function

4. Use tf.Graph().as_default() to create Graph
Not recommended, legacy of TensorFlow 1, must run in a tf.compat.v1.Session

Example code:
%tensorflow_version 2.x
%reset -f

#libs
import tensorflow as tf;

#global constants
T1 = tf.constant(1, tf.float32);

#use decoration to convert to c-based function
#this decorator makes: f1 = tf.function(f1);
@tf.function
def f1(T1):
  T2 = tf.constant(2, tf.float32);
  return T1+T2;

def f2(T1):
  T2 = tf.constant(2, tf.float32);
  return T1+T2;

print("Use decorator:");
print(f1); #TensorFlow function!
print(f1(T1));

#the same like using @tf.function decorator
print("\nUse tf.function directly:");
f2 = tf.function(f2);
print(f2); #TensorFlow function!
print(f2(T1));

#create autograph from function
def f3(T1):
  T2 = tf.constant(2, tf.float32);
  return T1+T2;

g3 = tf.autograph.to_graph(f3);
print("\nUse Autograph as function:");
print(g3); #Python function!
print(g3(T1));

#create graph using 'with'
G = tf.Graph();
with G.as_default(): 
  Inp = tf.compat.v1.placeholder(tf.float32);
  T2  = tf.constant(2, tf.float32);
  Out = Inp+T2;

print("\nUse Graph in session:");
S = tf.compat.v1.Session(graph=G);
R = S.run(Out, feed_dict={Inp:T1.numpy()});
print(G);
print(R); #numpy values, not raw python values
#eof

Result:
Use decorator:
<tensorflow.python.eager.def_function.Function object at 0x7efe62ad32b0>
tf.Tensor(3.0, shape=(), dtype=float32)

Use tf.function directly:
<tensorflow.python.eager.def_function.Function object at 0x7efe62b29278>
tf.Tensor(3.0, shape=(), dtype=float32)

Use Autograph as function:
<function create_converted_entity_factory.<locals>.create_converted_entity.<locals>.tf__f3 at 0x7efe62903e18>
tf.Tensor(3.0, shape=(), dtype=float32)

Use Graph in session:
<tensorflow.python.framework.ops.Graph object at 0x7efe696155c0>
3.0

Tuesday, 12 November 2019

Returning Values of TensorFlow Functions in Different Modes

TensorFlow functions, e.g. tf.abs, tf.add, etc., behave differently in the 2 modes: Eager mode and graph mode.

In Eager mode (default mode in TensorFlow 2), a function returns a tensor with values inside.

In graph mode (default mode in TensorFlow 1, or inside a tf.Graph().as_default() block), a function returns a tensor with an op inside.

Example code:
%tensorflow_version 2.x
%reset -f

#libs
import tensorflow as tf;
from tensorflow.keras.layers import *;

#code
#tf function returns value-based tensor in eager mode
T1 = tf.abs(-1);
print("Value-tensor (aka. Eager tensor):");
print(T1);

#tf function returns op-based tensor in graph mode
G = tf.Graph();
with G.as_default():
  T2 = tf.abs(-1);
  print("\nOp-tensor (aka. graph tensor):");
  print(T2);
#eof

Monday, 11 November 2019

Create Constant Tensors for Feeding in TensorFlow 2

To feed tf.Session.run(feed_dict={...}), a plain list or NumPy array is adequate, but a tensor is needed to call a model or a @tf.function. The following code shows how to create constant tensors; there are 2 methods to do so.

Example code:
%tensorflow_version 2.x
%reset -f

#libs
import tensorflow as tf;
from tensorflow.keras.layers import *;

#code
T1 = tf.constant         ([1,2,3], tf.float32);
T2 = tf.convert_to_tensor([1,2,3], tf.float32);
print(T1);
print(T2);
#eof
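
tf.convert_to_tensor also accepts NumPy arrays (and passes existing tensors through unchanged), which is handy at function boundaries. Continuing the example above:

#numpy arrays convert the same way
import numpy as np;
T3 = tf.convert_to_tensor(np.array([1,2,3]), tf.float32);
print(T3);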

Tuesday, 5 November 2019

Relation Between Operation and Tensor in TensorFlow 2

tf.Operation:

  • Properties:
    • outputs: A list of tensors

tf.Tensor:
  • Properties:
    • op: The tf.Operation connected to this tensor.
      Exists in op-tensors only (i.e. a tensor in a graph).
    • op.outputs: A list in which this tensor is also an entry.
    • value_index: Index of this tensor in the op.outputs list.
  • Methods:
    • numpy(): The numeric values in this tensor.
      Exists in value-tensors (eager tensors) only (i.e. a tensor in TF2 default mode).
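
A minimal sketch illustrating these properties (TF2, using a graph built via tf.Graph().as_default() for the op-tensor):

%tensorflow_version 2.x
%reset -f

#libs
import tensorflow as tf;

#value-tensor (eager): has numpy(), values inside
E = tf.constant([1,2,3], tf.float32);
print(E.numpy()); #[1. 2. 3.]

#op-tensor (graph): has op and value_index
G = tf.Graph();
with G.as_default():
  T = tf.abs(tf.constant(-1, tf.float32));
  print(T.op.name);       #the op producing this tensor
  print(T.value_index);   #0
  print(T is T.op.outputs[T.value_index]); #True
#eof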

Concepts in TensorFlow 1 and 2

These are fundamental concepts in TensorFlow 1 and 2:
  • Session: The environment, including devices and graph, used to run calculations. A session no longer needs to be created explicitly in TensorFlow 2; TF2 auto-creates sessions to run graphs.
  • Graph: The structure containing ops (nodes) and tensors (edges).
    A graph is created in TensorFlow 1 by a chain of TF function calls.
    A graph is created in TensorFlow 2 by a function annotated with @tf.function, with the chain of calls inside.
  • Tensor (or Op-Tensor) is the tensor containing an op.
  • EagerTensor (or Value-Tensor) is the tensor containing values.
  • Placeholder (no longer in TF2) is a tensor with an empty op.

TensorFlow functions in Eager mode return value-tensors; TensorFlow functions in graph mode, or inside a tf.Graph().as_default() block, return op-tensors.

A session can run ops and op-tensors. Running an op in a session returns no value; running an op-tensor returns its computed value (as a NumPy value).

Example code:
%tensorflow_version 2.x
%reset -f

#libs
import tensorflow as tf;

#code
G = tf.Graph();
with G.as_default():
  T1 = tf.compat.v1.placeholder(tf.float32,[1]);
  T2 = tf.compat.v1.placeholder(tf.float32,[1]);
  print("Placeholders are tensors with empty ops:");
  print(T1);
  print(T2);

  print("\nA TensorFlow function:");
  print(tf.add);

  #create a Tensor
  Add = tf.add(T1,T2);
  print("\nA tensor with op, or op-tensor:");
  print(Add);
  print("\nThe op is:");
  print(Add.op);

#create an op
Op = tf.Operation(tf.compat.v1.NodeDef(name="Op",op="Add"),G,inputs=[T1,T2]);
T  = tf.Tensor(op=Op,value_index=0,dtype=tf.float32); #op-tensor

#run the graph up to the op-tensor
S  = tf.compat.v1.Session(graph=G);
V1 = tf.convert_to_tensor([1], tf.float32);
V2 = tf.convert_to_tensor([2], tf.float32);
print("Value-tensors (or eager tensors):");
print(V1);
print(V2);

R  = S.run(T, feed_dict={T1:V1.numpy(), T2:V2.numpy()});
print("\nOp-Tensor after run:");
print(R);
#eof

Monday, 4 November 2019

Keep TensorFlow 1 Code Running in TensorFlow 2

TensorFlow 2 is rather different from TensorFlow 1: TF2 runs in Eager mode by default, in which all operations behave as ordinary functions. The following shows how to keep TensorFlow 1 code running in TensorFlow 2.

Method 1: Use compatibility module

%tensorflow_version 2.x
import tensorflow.compat.v1 as tf;
tf.disable_v2_behavior();

Method 2: Change the code to fit TF2
See the official TF1-to-TF2 migration guide for more details.

Function, Tensor, and Operation in TensorFlow 2

TensorFlow 1 runs by default in graph mode, in which TF functions return op-tensors, not value-tensors (eager tensors). Operations are used to build a graph, and the graph is fed and run by a session.

Since TensorFlow 2, the library runs by default in Eager mode (function mode), where every function returns a value-tensor (eager tensor), unless:
  • The TF function is inside a function annotated with @tf.function.
  • Or, it is inside a 'with tf.Graph().as_default()' block.

Example code:
%tensorflow_version 2.x
%reset -f

#libs
import tensorflow as tf;

def f1(Inp): #no @tf.function, no autograph
  print("\nEager tensor with values:");
  print(Inp);
  M = tf.multiply(Inp,2); #tf.multiply returns value-tensor here
  return M;

#@tf.function makes a graph of operations
@tf.function
def f2(Inp): #autograph needs param
  print("\nGraph tensor without values:");
  print(Inp);
  M = tf.multiply(Inp,2); #tf.multiply returns op-tensor here
  return M;

#another way to make a graph of operations
G = tf.Graph();
with G.as_default():
  Inp = tf.compat.v1.placeholder(tf.float32, [3]); #manual graph needs placeholder
  Op  = tf.multiply(Inp,2); #tf.multiply returns op-tensor here

T = tf.convert_to_tensor([1,2,3], tf.float32);
print("Eager tensor with values:");
print(T);
f1(T);

R = f2(T);
print("\nOutput from graph is eager tensor:");
print(R);

#tf.multiply in @tf.function returns op-tensor,
#but returns value-tensor (eager tensor) out here:
print("\nUse operation as function in Eager mode:");
print(tf.multiply(T,2)); #tf.multiply returns eager tensor here

print("\nRun operation in graph with feed:");
S = tf.compat.v1.Session(graph=G);
V = S.run(Op,{Inp:T.numpy()});
print(V);
#eof

Friday, 1 November 2019

ML Models that Inherit tf.Module or tf.keras.Model

A machine learning model class in TensorFlow can inherit either tf.Module or tf.keras.Model.

A model based on tf.Module (low-level):
class model(tf.Module):
  def __init__(this):
    super().__init__();
    #create vars, e.g.
    #this.MyVar = tf.Variable(...);

  @tf.function
  def __call__(this,Inp):
    #do some calculation with Inp
    Out = ...;
    return Out;

A model based on tf.keras.Model (high-level):
class model(tf.keras.Model):
  def __init__(this):
    super().__init__();
    #create vars, e.g.
    #this.MyVar = tf.Variable(...);

  @tf.function
  def call(this,Inp):
    #do some calculation with Inp
    Out = ...;
    return Out;

Train the class based on tf.Module (X and Y below are the training data):
Model = model();
Loss  = tf.losses.MeanSquaredError();
Optim = tf.optimizers.SGD(1e-1);
Steps = 1000;

for I in range(Steps):
  with tf.GradientTape() as T:
    Lv = Loss(Y,Model(X));

  Grads = T.gradient(Lv, Model.trainable_variables);
  Optim.apply_gradients(zip(Grads, Model.trainable_variables));

Train the class based on tf.keras.Model (BSIZE is the batch size):
Model  = model();
Loss   = tf.losses.MeanSquaredError();
Optim  = tf.optimizers.SGD(1e-1);
Steps  = 1000;
Epochs = int(Steps/(len(X)/BSIZE)); #epochs must be an integer

Model.compile(loss=Loss, optimizer=Optim);
Model.fit(X,Y, batch_size=BSIZE, epochs=Epochs, verbose=0);

Save the tf.Module-based model:
tf.saved_model.save(Model,SOME_DIR_PATH);

Save the tf.keras.Model-based model:
tf.keras.models.save_model(Model,SOME_DIR_PATH);

Load the tf.Module-based model and continue training:
M    = tf.saved_model.load(SOME_DIR_PATH);
Vars = M.Some_Keras_Layer.trainable_variables+[M.Some_Var];

for I in range(Steps):
  with tf.GradientTape() as T:
    Lv = Loss(Y, M(X));

  Grads = T.gradient(Lv, Vars);
  Optim.apply_gradients(zip(Grads, Vars));

Load the tf.keras.Model-based model and continue training:
M = tf.keras.models.load_model(SOME_DIR_PATH);
M.fit(X,Y, batch_size=BSIZE, epochs=Epochs, verbose=0);