To be added elsewhere

These notes will be used to flesh out some other pages.


  • Future project idea: cellular automata

  • Focal Loss

    Focal Loss was introduced by Lin et al. (Facebook AI Research) in the paper Focal Loss for Dense Object Detection. They claim to improve one-stage object detectors by using Focal Loss to train a detector they name RetinaNet. Focal Loss is a Cross-Entropy Loss that weights each sample's contribution to the loss based on its classification error: if a sample is already classified correctly by the CNN, its contribution to the loss decreases. With this strategy, they claim to address class imbalance by making the loss implicitly focus on the hard, problematic classes. They also weight the contribution of each class to the loss explicitly, as a more conventional form of class balancing. Since they use Sigmoid activations, their Focal Loss can also be viewed as a focally weighted Binary Cross-Entropy Loss. (A minimal formula and code sketch follow after this list.)

  • (also in the 480 todo)

  • Is there a relationship between augmenting existing data and adding genuinely new data, given that both can produce a similar effect?

    • Does the addition of 100,000 new images work better than the addition of 100,000 augmentations? Is this a good question?

  • (really good for explanations of different activation and regularisation techniques)

  • This whole spinningup stuff covers just about everything!

  • rlhard (very good!)
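
For reference on the Focal Loss bullet above: the binary focal loss is FL(p_t) = -α_t (1 - p_t)^γ log(p_t), where p_t is the probability the model assigns to the true class. The NumPy sketch below is my own minimal illustration of that formula (the function name is made up; γ = 2 and α = 0.25 are just the defaults suggested in the paper, not code from RetinaNet itself):

```python
import numpy as np

def binary_focal_loss(y_true, y_pred, gamma=2.0, alpha=0.25, eps=1e-7):
    """Focal loss for binary labels: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t),
    where p_t is the predicted probability of the true class."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    # p_t: probability assigned to the correct class for each sample
    p_t = np.where(y_true == 1, y_pred, 1.0 - y_pred)
    # alpha_t: explicit class-balancing weight (alpha for positives, 1 - alpha for negatives)
    alpha_t = np.where(y_true == 1, alpha, 1.0 - alpha)
    # (1 - p_t)**gamma down-weights samples that are already classified well (p_t near 1)
    return np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t))

# A confidently correct prediction contributes far less than an uncertain or wrong one,
# which is the "focusing" effect described above.
y_true = np.array([1, 1, 0, 0])
y_pred = np.array([0.95, 0.6, 0.4, 0.05])
print(binary_focal_loss(y_true, y_pred))           # focal loss
print(binary_focal_loss(y_true, y_pred, gamma=0))  # gamma = 0 reduces to alpha-weighted BCE
```

Setting gamma to 0 recovers an ordinary (alpha-weighted) binary cross-entropy, which is a quick sanity check for any implementation.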

https://madhuramiah.medium.com/how-i-increased-the-accuracy-of-mnist-prediction-from-84-to-99-41-63ebd90cc8a0
https://stackoverflow.com/questions/37232782/nan-loss-when-training-regression-network
https://www.tensorflow.org/api_docs/python/tf/keras/activations
https://stackoverflow.com/questions/41175401/what-is-a-batch-in-tensorflow
https://medium.com/@mjbhobe/classifying-fashion-with-a-keras-cnn-achieving-94-accuracy-part-2-a5bd7a4e7e5a
https://machinelearningmastery.com/dropout-for-regularizing-deep-neural-networks/
https://www.kaggle.com/toldo171/tutorial-how-to-get-99-2-from-scratch-indepth
https://stats.stackexchange.com/questions/419751/why-is-softmax-function-used-to-calculate-probabilities-although-we-can-divide-e
https://stackoverflow.com/questions/44164749/how-does-keras-handle-multilabel-classification
https://gombru.github.io/2018/05/23/cross_entropy_loss/
https://www.pyimagesearch.com/2020/07/13/r-cnn-object-detection-with-keras-tensorflow-and-deep-learning/
https://machinelearningmastery.com/rectified-linear-activation-function-for-deep-learning-neural-networks/
https://spinningup.openai.com/en/latest/user/introduction.html#introduction
https://www.alexirpan.com/2018/02/14/rl-hard.html
https://www.youtube.com/watch?v=fevMOp5TDQs