Zeroth-Order Optimization of Optical Neural Networks with Linear Combination Natural Gradient and Calibrated Model
Publisher
Association for Computing Machinery
New York, NY, United States
Qualifiers
- Research-article