Jonathan Siegel, a postdoc working with Prof. Jinchao Xu, gave a talk at the CCMA Workshop on Mathematical Data Science on December 15. His talk, titled “Optimal Approximation Rates for Neural Networks with Cosine and ReLU^k Activation Functions,” presented recent and surprising results obtained jointly with Prof. Xu, which show that shallow neural networks with ReLU^k (also known as RePU) activation functions achieve exceptionally high approximation rates, especially in high dimensions. Dr. Siegel and Prof. Xu have posted a preprint of this work, titled “High-Order Approximation Rates for Neural Networks with ReLU^k Activation Functions.”
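For readers unfamiliar with the terminology, ReLU^k (or RePU) refers to the k-th power of the rectified linear unit. As a point of reference only, using the standard definitions rather than notation taken from the preprint itself, the activation and a shallow (one-hidden-layer) network of width n can be written as
\[
\sigma_k(t) = \max(0, t)^k, \qquad
f_n(x) = \sum_{i=1}^{n} a_i\, \sigma_k(\omega_i \cdot x + b_i),
\quad a_i, b_i \in \mathbb{R},\; \omega_i \in \mathbb{R}^d .
\]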