lect11, Thu 02/13
How floating-point works
Reading assignment
For next week, read NCM Sections 10.1, 10.2, and 10.5, and “The $25,000,000,000 Eigenvector”.
References for today’s lecture
Section 1.7 (floating-point arithmetic) of the NCM book, and this Wikipedia page on floating-point format.
You may also want to look at this nice article on the IEEE 64-bit floating-point standard by John Cook. Also, here is an interesting article on Google’s TPU processor, which uses a different floating-point format.
Outline
- A few more words about matrix condition number (see the condition-number sketch after the outline)
- Floating-point arithmetic (see the roundoff example after the outline)
- Backward error analysis and error bounds on partial pivoting [didn’t get to this]
- cs111 routines:
  - cs111.print_float64()
- numpy/scipy objects and routines (see the demo after the outline):
  - np.float64
  - np.finfo()
  - np.inf
  - np.nan
  - np.isinf()
  - np.isnan()
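
As a follow-up to the condition-number item in the outline, here is a minimal sketch of how cond(A) predicts the accuracy of a computed solution. It uses np.linalg.cond and scipy's Hilbert matrix as a standard ill-conditioned example; the choice of matrix is mine and not necessarily the one used in lecture.

```python
import numpy as np
import scipy.linalg as spla

# The Hilbert matrix is a classic ill-conditioned example
# (assumption: illustrative only, not the exact example from lecture).
n = 10
A = spla.hilbert(n)
x_true = np.ones(n)
b = A @ x_true

x = np.linalg.solve(A, b)        # LU factorization with partial pivoting

print("cond(A)       :", np.linalg.cond(A))
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

# Rule of thumb: with unit roundoff about 1e-16, expect to lose roughly
# log10(cond(A)) decimal digits of accuracy in the computed solution.
```

For n = 10 the condition number is roughly 1e13, so only a few digits of the computed x can be trusted even though the residual is tiny.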
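Here is a short example of the floating-point behavior covered in this lecture: decimal fractions like 0.1 are rounded when stored in IEEE 754 binary64, and machine epsilon measures the spacing of representable numbers near 1. The bit-pattern printout at the end uses the standard library's struct module; the course routine cs111.print_float64() presumably displays the same sign/exponent/fraction fields, but the code below does not depend on it.

```python
import struct
import numpy as np

# 0.1 has no exact binary representation, so arithmetic on it is rounded.
print(0.1 + 0.2 == 0.3)            # False
print(f"{0.1 + 0.2:.20f}")         # 0.30000000000000004441

# Machine epsilon: the gap between 1.0 and the next larger float64.
eps = np.finfo(np.float64).eps
print(eps)                         # about 2.22e-16
print(1.0 + eps > 1.0)             # True
print(1.0 + eps / 2 == 1.0)        # True: eps/2 is lost when added to 1.0

# Raw 64-bit layout of a double: 1 sign bit, 11 exponent bits, 52 fraction bits.
# (cs111.print_float64() presumably shows a similar breakdown.)
bits = struct.unpack(">Q", struct.pack(">d", 0.1))[0]
print(f"{bits:064b}")
```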
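Finally, a quick demo of the numpy objects and routines listed in the outline. These are standard numpy calls: np.finfo() reports the parameters of a floating-point type, and np.isinf()/np.isnan() are the right way to test for inf and nan, since comparison with == fails for nan.

```python
import numpy as np

# np.float64 is the IEEE 754 double-precision type.
x = np.float64(1.0) / np.float64(3.0)
print(type(x), x)

# np.finfo() reports the parameters of a floating-point type.
info = np.finfo(np.float64)
print("eps :", info.eps)      # machine epsilon, about 2.22e-16
print("max :", info.max)      # largest finite float64, about 1.80e308
print("tiny:", info.tiny)     # smallest positive normalized float64, about 2.23e-308

# Overflow produces inf; invalid operations produce nan.
big = np.float64(1e308) * 10  # overflows to np.inf (numpy issues a warning)
bad = np.inf - np.inf         # np.nan

print(np.isinf(big))          # True
print(np.isnan(bad))          # True
print(np.nan == np.nan)       # False: nan compares unequal even to itself
```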