- Basic and popular:
- MeanAbsoluteError: The most basic loss function, the mean of |y_true - y_pred|
- MeanSquaredError: A more popular loss function; squaring the error gives a smooth gradient and penalises large errors more heavily
- Smoothed MAE:
- LogCosh: behaves like MSE for small errors and like MAE for large ones
- Entropy loss functions:
- BinaryCrossentropy
- CategoricalCrossentropy
- SparseCategoricalCrossentropy
- KLDivergence
- Related to SVM:
- Hinge
- SquaredHinge
- CategoricalHinge
- Others:
- CosineSimilarity
- MeanAbsolutePercentageError
- MeanSquaredLogarithmicError
- Poisson
- Huber
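For reference, here is a minimal pure-Python sketch of how a few of the losses above are computed (the helper names are mine; the Keras classes reduce over batches in the same way):

```python
import math

def mae(y_true, y_pred):
    # MeanAbsoluteError: mean of |y - y_hat|
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    # MeanSquaredError: mean of (y - y_hat)^2; penalises large errors more
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def log_cosh(y_true, y_pred):
    # LogCosh: ~ e^2/2 for small errors (like MSE), ~ |e| for large (like MAE)
    return sum(math.log(math.cosh(p - t))
               for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # BinaryCrossentropy: -mean(y*log(p) + (1-y)*log(1-p)),
    # with p clamped away from 0 and 1 for numerical safety
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)
        total -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return total / len(y_true)

y_true, y_pred = [1.0, 2.0, 3.0], [1.5, 2.0, 2.0]
print(mae(y_true, y_pred))  # 0.5
```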
- Basic:
- SGD The most basic optimiser, Stochastic Gradient Descent.
- Momentum:
- RMSprop Keeps an exponentially decaying average of squared gradients and divides the learning rate by it
- Adaptives:
- Adam Adaptive Moment Estimation; combines momentum with RMSprop-style scaling, often the fastest to converge in practice
- Adamax Adam with infinity norm
- Adagrad Adaptive gradient
- Adadelta Adaptive delta
- Others:
- Ftrl Follow-the-regularised-leader
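A sketch of the update rules behind the basic and momentum families above, in plain Python (the hyperparameter values are illustrative, not recommendations):

```python
def sgd_step(w, grad, lr=0.1):
    # Plain SGD: step directly down the gradient
    return w - lr * grad

def momentum_step(w, v, grad, lr=0.05, mu=0.9):
    # Momentum: velocity v accumulates a decaying sum of past
    # gradients, which smooths and accelerates the updates
    v = mu * v - lr * grad
    return w + v, v

# Minimise f(w) = w^2 (gradient 2w) from w = 5.0 with momentum
w, v = 5.0, 0.0
for _ in range(200):
    w, v = momentum_step(w, v, grad=2 * w)
print(abs(w) < 0.01)  # True: converged close to the minimum at 0
```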
Adam (Adaptive Moment Estimation) is usually the fastest optimiser to converge in practice, and a safe default choice.
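To make the description concrete, here is a bare-bones Adam loop in pure Python (illustrative hyperparameters; in real training you would use the framework's implementation):

```python
import math

def adam_minimise(grad_fn, w, lr=0.1, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=1000):
    # Adam tracks the gradient's running mean (m) and uncentred
    # variance (v), corrects their startup bias, and scales each
    # step by the inverse root of v: momentum plus adaptivity.
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad_fn(w)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)  # bias-corrected mean
        v_hat = v / (1 - beta2 ** t)  # bias-corrected variance
        w -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return w

# Minimise f(w) = (w - 3)^2, whose gradient is 2(w - 3)
w = adam_minimise(lambda w: 2.0 * (w - 3.0), w=0.0)
```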