+- title: "Estimating Floating-Point Errors Using Automatic Differentiation"
+  description: |
+    Floating-point errors are a testament to the finite nature of computing,
+    and if left uncontrolled they can have catastrophic results. As such, for
+    high-precision computing applications, quantifying these uncertainties
+    becomes imperative. There have been significant efforts to mitigate such
+    errors, either by extending the underlying floating-point precision, by
+    using alternative compensation algorithms, or by estimating the errors
+    with a variety of statistical and non-statistical methods. A prominent
+    approach to dynamic floating-point error estimation uses Automatic
+    Differentiation (AD). However, most state-of-the-art AD-based estimation
+    software requires some amount of manual adaptation or annotation of the
+    source code. Moreover, error estimation tools based on operator-overloading
+    AD require repeated gradient recomputation to report errors over a large
+    variety of inputs, and they inherit the shortcomings of the underlying
+    operator-overloading strategy, such as reduced efficiency. In this work,
+    we propose a customizable way to use AD to synthesize source code for
+    estimating uncertainties arising from floating-point arithmetic in C/C++
+    applications.
+
+    Our work presents an automatic error annotation framework that can be used
+    in conjunction with custom, user-defined error models. We also present our
+    progress with error estimation on GPU applications.
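+
+    For illustration only, a minimal driver in the style of Clad's publicly
+    documented error-estimation examples; the `clad::estimate_error` entry
+    point and call signature shown here may differ from the exact interface
+    presented in this talk:
+
+    ```cpp
+    #include "clad/Differentiator/Differentiator.h"
+    #include <cstdio>
+
+    // A toy kernel whose accumulated floating-point error we want to estimate.
+    double func(double x, double y) { return x * y + y; }
+
+    int main() {
+      // Ask Clad to synthesize an error-estimation routine for `func`
+      // (build with the Clad Clang plugin enabled).
+      auto df = clad::estimate_error(func);
+
+      double x = 1.95, y = 3.11;  // sample inputs
+      double dx = 0, dy = 0;      // adjoints computed alongside the error
+      double fp_error = 0;        // accumulated floating-point error estimate
+
+      // Execute the synthesized code; fp_error receives the estimate.
+      df.execute(x, y, &dx, &dy, fp_error);
+      std::printf("estimated floating-point error: %e\n", fp_error);
+      return 0;
+    }
+    ```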
+
+  location: "[SIAM UQ 2022](https://www.siam.org/conferences/cm/conference/uq22)"
+  date: 2022-04-14
+  speaker: V Vassilev, G Singh
+  id: "FPErrorEstADSIAMUQ2022"
+  artifacts: |
+    [Video](https://www.youtube.com/watch?v=pndnawFPKHA&list=PLeZvkLnDkqbS8yQZ6VprODLKQVdL7vlTO&index=8),
+    [Link to slides](/assets/presentations/G_Singh-SIAMUQ22_FP_Error_Estimation.pdf)
+  highlight: 1
+
 - title: "GPU Acceleration of Automatic Differentiation in C++ with Clad"
   description: |
     Automatic Differentiation (AD) is instrumental for science and industry. It