- Source: Pruning (artificial neural network)
In the context of artificial neural networks, pruning is the practice of removing parameters from an existing network, either individually or in groups such as whole neurons. The goal is to preserve the network's accuracy while reducing the computational resources required to run it. An analogous biological process, synaptic pruning, takes place in the mammalian brain during development (see also Neural Darwinism).
Node (neuron) pruning
A basic iterative algorithm for pruning is as follows:
1. Evaluate the importance of each neuron.
2. Rank the neurons by importance (assuming there is a clearly defined measure of "importance").
3. Remove the least important neuron.
4. Check a user-defined termination condition to decide whether to continue pruning.
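The steps above can be sketched on a toy two-layer network. The importance measure here (sum of absolute incoming and outgoing weights per hidden neuron) and the termination condition (a target hidden-layer size) are illustrative assumptions, not a prescribed choice:

```python
import numpy as np

def neuron_importance(w_in, w_out):
    # Assumed importance heuristic: sum of absolute incoming and
    # outgoing weight magnitudes for each hidden neuron.
    return np.abs(w_in).sum(axis=0) + np.abs(w_out).sum(axis=1)

def prune_least_important_neuron(w_in, w_out):
    # Remove the lowest-scoring hidden neuron by deleting its
    # column in w_in and the corresponding row in w_out.
    idx = int(np.argmin(neuron_importance(w_in, w_out)))
    return np.delete(w_in, idx, axis=1), np.delete(w_out, idx, axis=0)

# Toy network: 4 inputs -> 3 hidden neurons -> 2 outputs.
rng = np.random.default_rng(0)
w_in = rng.normal(size=(4, 3))
w_out = rng.normal(size=(3, 2))

# Termination condition (user-chosen here): stop at 2 hidden neurons.
while w_in.shape[1] > 2:
    w_in, w_out = prune_least_important_neuron(w_in, w_out)

print(w_in.shape, w_out.shape)  # hidden layer shrunk from 3 to 2 neurons
```

In practice the network would typically be re-evaluated (and often fine-tuned) after each removal before checking the termination condition.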
Edge (weight) pruning
Most work on neural network pruning focuses on removing individual weights, i.e., setting their values to zero. Early work suggested also adjusting the values of the remaining, non-pruned weights.