Assignment #04

Exercise #04-01: a glimpse into the C language

This exercise can be done on a Linux machine only!

Tip

You can use MyBinder’s terminal if you don’t have Linux!

Here is the C code sample from the lecture:

#include <stdio.h> 
int main ()
{
    int a = 2;
    int b = 3;
    int c = a + b;
    printf ("Sum of two numbers : %d \n", c);
}

Write this code in a C source file, then compile and run it.

Now replace the line int b = 3; with char b[] = "Hello";. Compile and run the program again (ignore the compiler warnings). Does the output match your expectations? Can you explain what happens? Compare this behavior to Python’s, and try to explain why it can lead to faster execution times.
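
For the comparison with Python, here is a small illustration (not part of the C exercise, just a sketch): in Python the analogous change does not silently reinterpret memory, because operand types are checked at run time, and that extra checking is part of what compiled C code avoids.

a = 2
b = "Hello"          # the Python analogue of replacing int b = 3;
try:
    c = a + b        # Python checks the operand types at run time
except TypeError as err:
    print("Python refuses the addition:", err)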

Exercise #04-02: Monte-Carlo estimation of \(\pi\)

A simple way to estimate \(\pi\) with a computer is based on a Monte-Carlo method. Draw a sample of N points with random 2D coordinates (x, y) in the [0, 1[ range; the number of points that fall within the unit circle, divided by the total number of points N, gives an estimate of \(\pi / 4\). This works because the quarter of the unit disk contained in the square has area \(\pi / 4\), while the square itself has area 1.
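
As an illustration of the idea, here is a minimal pure-Python sketch (the function name estimate_pi is our own choice, not imposed by the exercise):

import random

def estimate_pi(n):
    """Monte-Carlo estimate of pi using the standard library only."""
    hits = 0
    for _ in range(n):
        x, y = random.random(), random.random()  # uniform coordinates in [0, 1[
        if x * x + y * y < 1:                    # the point falls inside the unit circle
            hits += 1
    return 4 * hits / n

print(estimate_pi(100_000))  # should print a value close to 3.14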

Provide two implementations of the Monte-Carlo estimation of \(\pi\): a pure Python version (standard library only) and a vectorized version using numpy. Time their execution for N = [1e2, 1e3, …, 1e7]. Optional: plot the numpy speed-up as a function of N.
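
A possible skeleton for the vectorized version (again just a sketch, with a function name of our own choosing):

import numpy as np

def estimate_pi_np(n):
    """Vectorized Monte-Carlo estimate of pi with numpy."""
    x = np.random.random(n)         # n random floats in [0, 1[
    y = np.random.random(n)
    inside = (x**2 + y**2) < 1      # boolean mask: True for points inside the unit circle
    return 4 * inside.mean()        # the mean of the mask is the hit ratio

print(estimate_pi_np(10**7))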

Optional: try the numpy version with N=1e8 and above. Draw conclusions about the new trade-off that appears for large values of N.

Tip

To time the execution of a function, you can use the time module (stackoverflow)
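
For example, one common pattern uses time.perf_counter, which is designed for measuring durations; the timed expression below is only a placeholder for your own function call:

import time

t0 = time.perf_counter()
total = sum(i * i for i in range(10**6))   # stand-in for the call you want to time
elapsed = time.perf_counter() - t0
print(f"elapsed: {elapsed:.4f} s")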

Exercise #04-03: a new format based on fixed-point binary numbers

Write a function that converts binary strings to decimal numbers. The function only needs to handle unsigned (positive) numbers. Examples (a minimal sketch of one possible implementation follows the list):

  • '101010' \(\rightarrow\) 42

  • '10000.1' \(\rightarrow\) 16.5

  • '1011101101.10101011' \(\rightarrow\) 749.66796875
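
One possible sketch of such a function (the name bin_to_dec is our own choice):

def bin_to_dec(s):
    """Convert an unsigned binary string with an optional dot to a decimal number."""
    if '.' in s:
        int_part, frac_part = s.split('.')
    else:
        int_part, frac_part = s, ''
    value = 0
    for i, bit in enumerate(reversed(int_part)):
        value += int(bit) * 2**i       # bits left of the dot weigh 2**0, 2**1, ...
    for i, bit in enumerate(frac_part, start=1):
        value += int(bit) * 2**-i      # bits right of the dot weigh 2**-1, 2**-2, ...
    return value

print(bin_to_dec('101010'))                   # 42
print(bin_to_dec('10000.1'))                  # 16.5
print(bin_to_dec('1011101101.10101011'))      # 749.66796875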

Now let’s develop a new standard based on this representation. A dot cannot be stored as 0s and 1s, so if we want the position of the dot to be flexible, we need an additional memory slot to store that position. Let’s define our new format as a sequence of 32 bits: the first 5 bits (starting from the left) give the position of the dot, and the remaining 27 bits represent the number itself. Examples:

  • '10101010101010101010101010101010' \(\rightarrow\) 699050.65625.

  • '00000001100110011001100110011001' \(\rightarrow\) 0.19999999552965164.

Explanation for example 1: the first five digits are '10101', which gives the number 21. A dot is therefore inserted at position 21 of the second part of the string: '010101010101010101010.101010'. This binary number is then converted to decimal.
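
A decoder for this format could be sketched as follows (the name bse_to_dec is our own, and the function assumes a well-formed 32-character string of 0s and 1s):

def bse_to_dec(bits):
    """Decode a 32-bit BSE string: 5 bits of dot position + 27 bits for the number."""
    assert len(bits) == 32
    dot_pos = int(bits[:5], 2)        # first 5 bits: position of the dot
    digits = bits[5:]                 # remaining 27 bits: the number itself
    int_part, frac_part = digits[:dot_pos], digits[dot_pos:]
    value = int(int_part, 2) if int_part else 0
    for i, bit in enumerate(frac_part, start=1):
        value += int(bit) * 2**-i     # fractional bits weigh 2**-1, 2**-2, ...
    return value

print(bse_to_dec('10101010101010101010101010101010'))  # 699050.65625
print(bse_to_dec('00000001100110011001100110011001'))  # 0.19999999552965164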

Let’s name this standard “BSE” (for “best standard ever”), and try to convince the Institute of Electrical and Electronics Engineers to adopt it in place of the old IEEE 754 standard. We have to answer the following questions:

  • what is the smallest number the BSE can represent? The largest?

  • what is the maximal accuracy of the BSE? (in other words, what is the difference between the smallest positive number and zero?)

  • what is the lowest accuracy of our standard? (in other words, what is the difference between the largest number we can represent and the second largest?)

  • does the difference between two nearest representable numbers change when the dot position doesn’t?

  • now compute the precision of our format for a range of possible values of the BSE

  • for these values, compare the BSE to the IEEE 754 binary32 format (or its numpy equivalent np.float32) using numpy.nextafter (see the snippet after this list).

  • (optional: you can also use matplotlib and a log-log plot to produce a graphic similar to the one on the Wikipedia page about IEEE 754)
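
To get a feeling for what numpy.nextafter returns, the snippet below (an illustration, not part of the exercise itself) prints the spacing between a few binary32 numbers and their nearest larger neighbour:

import numpy as np

for x in [1e-3, 1.0, 1e3, 1e6]:
    x32 = np.float32(x)
    gap = np.nextafter(x32, np.float32(np.inf)) - x32   # distance to the next float32
    print(f"binary32 spacing around {x}: {gap:.3e}")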

Conclude. Do you think we should try to convince the Institute of Electrical and Electronics Engineers and present our results to them?

Warning

The BSE format is not the IEEE 754 format. The BSE is a fun format that illustrates some (but not all) of the concepts underlying floating-point numbers. I’m just saying this because some people got confused during the exam and remembered the BSE better than the real floating-point representation…

Exercise #04-04: exponential error growth

The number \(e\) can be defined as the sum of the infinite series:

\[e = \sum_{n=0}^{\infty} \frac{1}{n!}\]

We are going to approximate this number by truncating the sum at a finite value of n. We use the standard library and its math module:

import math
n = 100
e1 = 0
for i in range(n + 1):
    e1 += 1. / math.factorial(i)
e1
2.7182818284590455

Close enough! Now let’s compute it with the same terms, but summed in reverse, from n=100 down to n=0:

e2 = 0
for i in range(n + 1)[::-1]:
    e2 += 1. / math.factorial(i)
e2
2.718281828459045

Seems reasonable too! Are they different?

e1 - e2
4.440892098500626e-16

Which of the two values is closer to the actual value of e? Explain why this happens, and what we can learn from this experiment.
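
As a hint for checking your answer, you can compare both results (reusing e1 and e2 from the cells above) against the reference value math.e:

import math

print(abs(e1 - math.e))   # error of the forward sum (n = 0 to 100)
print(abs(e2 - math.e))   # error of the backward sum (n = 100 to 0)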