This exercise can only be done on a Linux machine! You can use the university workstations for this.
Here is the C code sample from the lecture:
```c
#include <stdio.h>

int main()
{
    int a = 2;
    int b = 3;
    int c = a + b;
    printf("Sum of two numbers: %d\n", c);
    return 0;
}
```
Write this code in a C source file, then compile and run it.
Now, replace the line `int b = 3;` with `char b[] = "Hello";`, then compile and run the program again (ignore the warnings at compilation). Does the output match your expectations? Can you explain what happens? Compare this behavior to Python's, and try to explain why this behavior can lead to faster execution times.
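For the Python side of the comparison, here is a minimal sketch of what the equivalent operation does (purely illustrative, and not a full answer to the question):

```python
# Python checks the operand types at runtime and raises an error here,
# while the compiled C program happily performs *some* operation
# (figuring out which one is part of the exercise).
a = 2
b = "Hello"
try:
    c = a + b
except TypeError as err:
    print("Python refuses the operation:", err)
```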
A simple way to estimate $\pi$ using a computer is based on a Monte Carlo method. Draw a sample of N points with random 2D coordinates $(x, y)$ in the $[0, 1)$ range: since the quarter of the unit disk covers an area of $\pi / 4$ within the unit square, the fraction of points that fall within the unit circle (the number of such points divided by N) gives an estimate of $\pi / 4$.
Provide two implementations of the Monte Carlo estimation of $\pi$: a pure Python version (standard library only) and a vectorized version using numpy. Time their execution for N = [1e2, 1e3, ..., 1e7]. Plot the numpy speed-up as a function of N.
Optional: try the numpy version with N=1e8 and above. Draw conclusions about a new trade-off that appears for large values of N.
Tip: you can mimic `%timeit` in your code by running each function at least three times and keeping the fastest of the three executions.
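If you want to check your approach, here is a minimal sketch of the two implementations and of a `%timeit`-like timer (the function names and the use of `time.perf_counter` are my own choices, not a requirement):

```python
import random
import time

import numpy as np


def pi_pure_python(n):
    """Monte Carlo estimate of pi using the standard library only."""
    n = int(n)  # allow N given as a float such as 1e6
    hits = 0
    for _ in range(n):
        x, y = random.random(), random.random()
        if x * x + y * y < 1:
            hits += 1
    return 4 * hits / n


def pi_numpy(n):
    """Vectorized Monte Carlo estimate of pi using numpy."""
    x = np.random.uniform(size=int(n))
    y = np.random.uniform(size=int(n))
    return 4 * np.count_nonzero(x**2 + y**2 < 1) / int(n)


def fastest_of_three(func, n):
    """Mimic %timeit: run func three times and keep the fastest run."""
    times = []
    for _ in range(3):
        t0 = time.perf_counter()
        func(n)
        times.append(time.perf_counter() - t0)
    return min(times)
```

The speed-up for a given N is then `fastest_of_three(pi_pure_python, N) / fastest_of_three(pi_numpy, N)`, which can be plotted with matplotlib.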
Write a function which converts binary strings to decimal numbers. The function should handle unsigned (positive) numbers only. Examples:

- `'101010'` $\rightarrow$ 42
- `'10000.1'` $\rightarrow$ 16.5
- `'1011101101.10101011'` $\rightarrow$ 749.66796875
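In case you want to check your own implementation against the examples, here is one possible sketch (the name `bin2dec` is an arbitrary choice):

```python
def bin2dec(s):
    """Convert an unsigned binary string (with an optional dot) to a float."""
    if '.' in s:
        int_part, frac_part = s.split('.')
    else:
        int_part, frac_part = s, ''
    value = 0.
    # digits left of the dot carry weights 2**0, 2**1, ... from the right
    for power, digit in enumerate(reversed(int_part)):
        value += int(digit) * 2 ** power
    # digits right of the dot carry weights 2**-1, 2**-2, ...
    for power, digit in enumerate(frac_part, start=1):
        value += int(digit) * 2 ** -power
    return value

assert bin2dec('101010') == 42
assert bin2dec('10000.1') == 16.5
assert bin2dec('1011101101.10101011') == 749.66796875
```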
Now let's develop a new standard based on this representation. Dots cannot be represented by 0s and 1s, so if we want the position of the dot to be flexible we need an additional memory slot to store this position. Let's define our new format as a 32-bit-long sequence of bits, the first 5 bits (starting from the left) being used to give the position of the dot, and the remaining 27 bits used to represent the number. Examples:

- `'10101010101010101010101010101010'` $\rightarrow$ 699050.65625
- `'00000001100110011001100110011001'` $\rightarrow$ 0.19999999552965164

Explanation for example 1: the first five digits are `'10101'`, which gives the number 21. A dot is therefore inserted at position 21 of the second part of the string: `'010101010101010101010.101010'`. This binary number is then converted to decimal.
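A sketch of the corresponding decoder, reusing the `bin2dec` converter from the previous sketch (`bse2dec` is again an arbitrary name):

```python
def bse2dec(s):
    """Decode a 32-bit BSE string: 5 dot-position bits, then 27 number bits."""
    assert len(s) == 32
    dot = int(bin2dec(s[:5]))  # position of the dot within the 27 number bits
    number = s[5:]
    return bin2dec(number[:dot] + '.' + number[dot:])

assert bse2dec('10101010101010101010101010101010') == 699050.65625
assert bse2dec('00000001100110011001100110011001') == 0.19999999552965164
```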
Let's name this standard "BSE" (for "best standard ever"), and try to convince the Institute of Electrical and Electronics Engineers to adopt it in place of the old IEEE 754 standard. We have to answer the following questions:
- Compare the precision of the BSE format to that of the `binary32` format (or its numpy equivalent `np.float32`), which you can explore using `numpy.nextafter`.
- Conclude. Do you think we should try to convince the Institute of Electrical and Electronics Engineers and present them our results?
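For the comparison, here is a minimal sketch of how `numpy.nextafter` reveals the spacing between consecutive `binary32` values (the probed values 1.0 and 2**20 are arbitrary examples):

```python
import numpy as np

one = np.float32(1.0)
# distance to the next representable float32 above 1.0: 2**-23
print(np.nextafter(one, np.float32(2)) - one)

big = np.float32(2**20)
# the spacing grows with the magnitude of the number: 2**-3 here
print(np.nextafter(big, np.float32(np.inf)) - big)
```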
Final note: the BSE format is not the IEEE 754 format. I'm just saying this because some people got confused during the exam and remembered the BSE better than the real floating point representation...
The number $e$ can be defined as the sum of the infinite series:

$$e = \sum_{n=0}^{\infty} \frac{1}{n!}$$

We are going to approximate this number by truncating the sum to a finite value. We use the standard library and its math module:
```python
import math

n = 100
# truncated series: sum of 1/i! for i = 0..n
e1 = sum([1. / math.factorial(i) for i in range(n+1)])
e1
```
Close enough! Now let's compute it with the same terms, but summed in reverse order, from i = 100 down to i = 0:
```python
# same terms, but accumulated in reverse order (from i = n down to i = 0)
e2 = sum([1. / math.factorial(i) for i in range(n+1)][::-1])
e2
```
Seems reasonable too! Are they different?
```python
e1 - e2
```
Which of the two values is closer to the actual $e$? Explain why this occurs, and what we can learn from this experiment.
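One way to check is to compare both results to the reference value shipped with the `math` module (a small sketch):

```python
# math.e is the most accurate double-precision value of e available
print(abs(e1 - math.e), abs(e2 - math.e))
```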