Tuesday, 14 January 2020

Available Loss Functions and Optimisers in TensorFlow 2

TensorFlow 2 has a pack of loss functions (in tf.keras.losses); a short usage sketch follows this list:
  • Basic and popular:
    • MeanAbsoluteError: The most basic loss function, the mean of the absolute errors
    • MeanSquaredError: The most widely used regression loss, the mean of the squared errors
  • Smoothed MAE:
    • LogCosh
  • Entropy loss functions:
    • BinaryCrossentropy
    • CategoricalCrossentropy
    • SparseCategoricalCrossentropy
    • KLDivergence
  • Related to SVM:
    • Hinge
    • SquaredHinge
    • CategoricalHinge
  • Others:
    • CosineSimilarity
    • MeanAbsolutePercentageError
    • MeanSquaredLogarithmicError
    • Poisson
    • Huber
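A minimal sketch of calling a few of these losses directly on tensors; the tensor values are made-up toy data, and the class names are the ones listed above from tf.keras.losses:

import tensorflow as tf

# Toy regression targets and predictions (made-up values)
y_true = tf.constant([[0.0], [1.0], [2.0]])
y_pred = tf.constant([[0.1], [0.9], [2.3]])

mae = tf.keras.losses.MeanAbsoluteError()   # mean(|y_true - y_pred|)
mse = tf.keras.losses.MeanSquaredError()    # mean((y_true - y_pred)^2)
print("MAE:", mae(y_true, y_pred).numpy())
print("MSE:", mse(y_true, y_pred).numpy())

# The entropy losses work on class labels; the sparse variant takes integer labels
labels = tf.constant([0, 2, 1])
logits = tf.constant([[2.0, 0.5, 0.1],
                      [0.2, 0.3, 3.0],
                      [0.1, 2.5, 0.4]])
scce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
print("Sparse CCE:", scce(labels, logits).numpy())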
TensorFlow 2 also has a pack of optimisers (in tf.optimizers.*) for optimising the trainable variables (weights and biases); sketches of constructing and using them follow this list:

  • Basic:
    • SGD The most basic optimiser, Stochastic Gradient Descent.
  • Momentum:
    • RMSprop Keeps a running average of recent squared gradients to scale each update, much as momentum keeps a running average of the gradients themselves
  • Adaptives:
    • Adam Adaptive Moment Estimation; combines momentum with RMSprop-style gradient scaling and usually converges quickly in practice
    • Adamax Adam with infinity norm
    • Adagrad Adaptive gradient
    • Adadelta Adaptive delta
  • Others
    • Ftrl Follow-the-regularised-leader
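A minimal sketch of how a few of these optimisers are instantiated; the hyperparameter values are illustrative only, not tuned recommendations:

import tensorflow as tf

# Plain SGD, and SGD with classical momentum
sgd = tf.keras.optimizers.SGD(learning_rate=0.01)
sgd_momentum = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)

# RMSprop scales updates by a running average of recent squared gradients
rmsprop = tf.keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9)

# Adam and its infinity-norm variant Adamax
adam = tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
adamax = tf.keras.optimizers.Adamax(learning_rate=0.001)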
In practice, Adam (Adaptive Moment Estimation) is usually the quickest of these to optimise the variables and is a sensible default, as in the training sketch below.
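A minimal end-to-end sketch with a made-up toy model and random data; any optimiser from the list above can be passed to compile(), and Adam is used here:

import tensorflow as tf

# Tiny classifier over 4 features and 3 classes (made-up architecture)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# The optimiser updates the weights and biases; the loss is one of the
# entropy losses listed above
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    metrics=["accuracy"],
)

# Random toy data, just to show the training call
x = tf.random.normal((32, 4))
y = tf.random.uniform((32,), maxval=3, dtype=tf.int32)
model.fit(x, y, epochs=1, verbose=0)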
