Deep Declarative Networks

A neural network is trained via backpropagation, which computes the gradient of the loss with respect to each parameter by propagating an error term backwards through the network. This requires every operation in the network to have an explicit, differentiable forward function, so a traditional network must learn without knowledge of hard constraints, and the range of functions it can incorporate is limited.

A Deep Declarative Network (DDN) extends this with declarative nodes, whose output is defined implicitly as the solution to an optimisation problem rather than by an explicit forward function; the implicit function theorem makes such nodes differentiable without unrolling the solver. Declarative nodes can be intermixed with imperative nodes (traditional neurons with explicit forward functions). In fact, common functions like ReLU, sigmoid and softmax can themselves be represented as declarative nodes.
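As a concrete illustration of how a declarative node works, here is a minimal sketch (my own, not the API of the anucvml/ddn library): the node's output is defined as the minimiser of a sum of Huber penalties, the forward pass solves that problem numerically, and the backward pass differentiates through the solution with the implicit function theorem rather than by unrolling the solver. The penalty, the Newton solver, and all names are illustrative assumptions.

```python
import torch

class HuberPool(torch.autograd.Function):
    """Declarative node: y(x) = argmin_y sum_i huber(y - x_i).

    Forward: solve the optimisation problem numerically.
    Backward: apply the implicit function theorem to the optimality
    condition sum_i huber'(y - x_i) = 0, which gives
        dy/dx_i = huber''(y - x_i) / sum_j huber''(y - x_j).
    """

    DELTA = 1.0  # Huber transition point (illustrative choice)

    @staticmethod
    def forward(ctx, x):
        d = HuberPool.DELTA
        y = x.median().clone()                     # robust initial guess
        for _ in range(50):                        # Newton's method on the optimality condition
            r = y - x
            grad = r.clamp(-d, d).sum()            # sum_i huber'(r_i)
            hess = (r.abs() <= d).float().sum()    # sum_i huber''(r_i)
            if hess == 0 or grad.abs() < 1e-9:
                break
            y = y - grad / hess
        ctx.save_for_backward(x, y)
        return y

    @staticmethod
    def backward(ctx, grad_output):
        x, y = ctx.saved_tensors
        # Implicit function theorem: gradient flows only through the
        # inlier points (quadratic region of the Huber penalty), each
        # receiving an equal share; outliers get zero gradient.
        w = ((y - x).abs() <= HuberPool.DELTA).float()
        return grad_output * w / w.sum().clamp(min=1.0)
```

Running it on data with an outlier shows the declarative behaviour: the pooled value stays near the inliers, and the implicit gradient assigns the outlier zero weight:

```python
x = torch.tensor([0.1, -0.2, 0.05, 8.0], requires_grad=True)
y = HuberPool.apply(x)   # approx. 0.32, largely ignoring the outlier
y.backward()
print(x.grad)            # approx. [1/3, 1/3, 1/3, 0]
```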

This approach is particularly interesting for computer vision, where many subproblems already have well-understood analytical solutions. Ideally, a DDN can use the known solutions for parts of a problem and learn the remaining parts. This is especially useful for problems that involve multiple individual stages, some of which are not normally differentiable.
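To make the intermixing of imperative and declarative nodes concrete, the following hypothetical sketch (layer sizes and names invented for illustration) drops the HuberPool node from above between ordinary PyTorch layers, so gradients flow end to end through both kinds of node:

```python
import torch
import torch.nn as nn

class MixedModel(nn.Module):
    """Imperative feature extractor followed by a declarative pooling node."""

    def __init__(self):
        super().__init__()
        # Imperative nodes: ordinary layers with explicit forward functions.
        self.features = nn.Sequential(
            nn.Linear(32, 64),
            nn.ReLU(),
            nn.Linear(64, 16),
        )

    def forward(self, x):
        h = self.features(x)  # (batch, 16)
        # Declarative node: robustly pool each sample's features to a
        # scalar; gradients flow back via the implicit function theorem.
        return torch.stack([HuberPool.apply(row) for row in h])

model = MixedModel()
out = model(torch.randn(4, 32))   # (4,) pooled outputs
out.sum().backward()              # end-to-end training works as usual
```

The per-sample Python loop is only for clarity; a practical implementation would batch the inner optimisation.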

Links:

- Original repository: GitHub - anucvml/ddn (Deep Declarative Networks). Provides resources and tutorials for DDN.
- Blind PnP solver: GitHub - dylan-campbell/bpnpnet (Solving the Blind Perspective-n-Point Problem End-To-End With Robust Differentiable Geometric Optimization). Uses RANSAC as a declarative node, making it interesting!
- Technical report: http://reports-archive.adm.cs.cmu.edu/anon/2019/CMU-CS-19-109.pdf