Colloquium Abstracts Fall 2025
Abstracts will be posted here for the colloquium talks when they are available.
Bo Deng
University of Nebraska
October 2, 2025
Error-free Training for Artificial Neural Networks
If we define intelligence as not making the same mistake twice, then a system achieves this kind of artificial intelligence if and only if it can learn from its mistakes every time. For a feedforward neural network under supervised training, this means it can be trained error-free on every data set. In mathematics, this problem is known as the discrete classification problem. Its solution was obtained more than thirty years ago by what is now known as the Universal Approximation Theorem (UAT). In this talk, I will present a numerical algorithm that fulfills the UAT, and I will illustrate the algorithm with both abstract and practical classification problems.
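The abstract does not describe the speaker's algorithm. As background only, here is a minimal sketch of the standard interpolation idea behind error-free fitting of a finite data set: a one-hidden-layer network with as many hidden units as data points can reproduce every label exactly, since the output weights solve a square linear system. The data set, seed, and random-feature construction below are hypothetical illustrations, not the method presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data set: 8 distinct 2-D points with binary labels.
X = rng.normal(size=(8, 2))
y = np.array([0, 1, 0, 1, 1, 0, 1, 0], dtype=float)

# One hidden layer with exactly as many units as data points.
n = len(X)
W = rng.normal(size=(2, n))   # random input-to-hidden weights
b = rng.normal(size=n)        # random hidden biases
H = np.tanh(X @ W + b)        # n x n matrix of hidden activations

# Solve the square linear system H c = y for the output weights,
# so the network's outputs match every training label exactly.
c = np.linalg.solve(H, y)

# Training error is zero up to floating-point round-off.
print(np.max(np.abs(H @ c - y)))
```

For generic distinct inputs and random weights, the activation matrix `H` is invertible, so the system has an exact solution; this is the "error-free on a finite data set" phenomenon the abstract alludes to, stated in its simplest form.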