Hegland, Markus; Leopardi, Paul
Description
We examine sparse grid quadrature on weighted tensor products (wtp) of reproducing kernel Hilbert spaces on products of the unit sphere S², in the case of worst case quadrature error for rules with arbitrary quadrature weights. We describe a dimension adaptive quadrature algorithm based on an algorithm of Hegland (ANZIAM J. 44 (E), C335–C353, 2003), and also formulate an adaptation of Wasilkowski and Woźniakowski's wtp algorithm (Wasilkowski and Woźniakowski: J. Complex. 15(3), 402–447, 1999), here called the ww algorithm. We prove that the dimension adaptive algorithm is optimal in the sense of Dantzig (Oper. Res. 5(2), 266–277, 1957) and therefore no greater in cost than the ww algorithm. Both algorithms therefore have the optimal asymptotic rate of convergence of quadrature error given by Theorem 3 of Wasilkowski and Woźniakowski (J. Complex. 15(3), 402–447, 1999). A numerical example shows that, even though the asymptotic convergence rate is optimal, if the dimension weights decay slowly enough and the dimensionality of the problem is large enough, the initial convergence of the dimension adaptive algorithm can be slow.
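The optimality "in the sense of Dantzig" refers to the classical greedy ratio rule for knapsack-type problems: candidates are taken in decreasing order of benefit per unit cost. A minimal illustrative sketch (not the paper's algorithm; the candidate names, costs, and benefits below are invented) of how a dimension adaptive method might rank candidate index sets this way:

```python
# Hypothetical sketch of Dantzig-style greedy selection, as used in spirit by
# dimension adaptive sparse grid methods: each candidate index set has a cost
# (e.g. quadrature points added) and a benefit (e.g. estimated error reduction).
# All numbers here are illustrative, not from the paper.

def greedy_select(candidates, budget):
    """Pick candidates in decreasing benefit/cost order until the budget is spent.

    candidates: list of (name, cost, benefit) tuples; costs assumed positive.
    Returns (chosen names, total benefit achieved).
    """
    order = sorted(candidates, key=lambda c: c[2] / c[1], reverse=True)
    chosen, total_cost, total_benefit = [], 0, 0.0
    for name, cost, benefit in order:
        if total_cost + cost <= budget:
            chosen.append(name)
            total_cost += cost
            total_benefit += benefit
    return chosen, total_benefit

# Three made-up candidate index sets for a two-dimensional problem:
cands = [("(1,0)", 2, 0.8), ("(0,1)", 2, 0.6), ("(1,1)", 4, 0.5)]
print(greedy_select(cands, budget=4))  # the two best benefit/cost ratios fit
```

The greedy ordering is what makes the dimension adaptive algorithm no more costly than a fixed a-priori schedule such as the ww construction, since no cheaper set of candidates could achieve a larger benefit within the same budget (in the fractional relaxation).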
Items in Open Research are protected by copyright, with all rights reserved, unless otherwise indicated.