Programming project#

Instructions#

The programming tasks described below have to be done in groups of two. It’s totally okay to ask others (including me!) for help or advice, but copy-pasting from your classmates is strictly forbidden. You will see that some tasks require internet searches that go beyond what you’ve learned in my class notes. This is expected: my goal is for you to be able to solve new problems more or less independently.

Project due date: Sunday 11.06.2023, 23:59 (the return folders on OLAT will close at precisely this time - no exceptions!)

Please return two files:

  • the full notebook as .ipynb, executed with all outputs (I recommend executing “Restart kernel and run all cells” one last time before submission)

  • any additional files you may have downloaded for your project (bundled as a zip or tar archive if there are several files or they are too large).

Please return only one project per group (one of the two has to upload the files).

Grading (30% of the final VU grade) will be based on the following criteria:

  • correctness of the code (the notebook should theoretically run on my laptop as well, assuming that the data files are in the folder)

  • correctness of the solutions (correct plot, correct values, etc.)

  • for the plots: correct use of title, axis labels, units, etc.

  • clarity of the code (no unused lines of code, presence of inline comments, etc.)

  • clarity and layout of the overall document (correct use of the Jupyter Notebook markdown format)

  • originality (for the free coding project)

You can download this notebook to get started. Delete this “Instructions” section, add your name and Matrikelnummer and start coding! For the tasks that ask for a written answer (e.g. some values, answers, etc.), please use the notebook’s markdown format to write them down!

Preamble#

  • Names:

  • Matrikelnummer:

01 - An energy balance model with hysteresis#

Based on the code we wrote in week 07, code the following extension. For this exercise it’s OK to copy-paste my solutions and start from there. Copy only what you need (no unused code)!

The planetary albedo \(\alpha\) does in fact change with climate. As the temperature drops, sea ice and ice sheets expand (increasing the albedo). Conversely, the albedo decreases as temperature rises. The planetary albedo of our simple energy balance model follows this equation:

\[\begin{split} \alpha = \begin{cases} 0.3,& \text{if } T \gt 280\\ 0.7,& \text{if } T \lt 250\\ a T + b, & \text{otherwise} \end{cases} \end{split}\]

01-01: Compute the parameters \(a\) and \(b\) so that the equation is continuous at T=250K and T=280K.
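In other words, continuity at the two thresholds simply means that the linear branch must meet the constant branches there, i.e. the two conditions

\[a \cdot 250 + b = 0.7 \qquad \text{and} \qquad a \cdot 280 + b = 0.3\]

which you can solve for \(a\) and \(b\).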

01-02: Now write a function called alpha_from_temperature which accepts a single positional parameter T as input (a scalar) and returns the corresponding albedo. Test your function using doctests to make sure that it complies with the instructions above.
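If you have never used doctests before, here is a minimal, generic sketch of how a doctest is written and executed (the function below is just a placeholder, not the solution):

import doctest

def double(x):
    """Return twice the input value.

    >>> double(2)
    4
    >>> double(0.5)
    1.0
    """
    return 2 * x

doctest.testmod()  # runs all doctests found in the notebook's namespace and reports failures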

01-03: Adapt the existing code from week 07 to write a function called temperature_change_with_hysteresis which accepts t0 (the starting temperature in K) and n_years (the number of simulation years) as positional arguments, and tau (the atmospheric transmissivity) as a keyword argument (default value 0.611). A possible skeleton of this signature is sketched after the list below. Verify that:

  • the stabilization temperature with t0 = 292 and default tau is approximately 288K

  • the stabilization temperature with t0 = 265 and default tau is approximately 233K
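For reference, a possible skeleton matching the required signature could look like this (the body is up to you):

def temperature_change_with_hysteresis(t0, n_years, tau=0.611):
    """Temperature evolution with a temperature-dependent albedo.

    t0 and n_years are positional arguments; tau is a keyword argument
    with a default value of 0.611.
    """
    # your time loop goes here, e.g. calling alpha_from_temperature(t) at each step
    pass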

01-04: Run a total of N simulations with starting temperatures regularly spaced between t0=206K and t0=318K, and plot them on a single plot for n_years=50. The plot should look somewhat similar to this example for N=21.

Bonus: only if you want (and if time permits), you can try to increase N and add colors to your plot to create a graph similar to this one.

02 - Weather station data files#

I downloaded 10 min data from the recently launched ZAMG data hub. The data file contains selected parameters from the “INNSBRUCK-FLUGPLATZ (ID: 11804)” weather station.

You can download the data from the following links (right-click + “Save as…”):

Let me open the data for you and display its content:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.read_csv('INNSBRUCK-FLUGPLATZ_Datensatz_20150101_20211231.csv', index_col=1, parse_dates=True)
df = df.drop('station', axis=1)

02-01: after reading the documentation of the respective functions (and maybe trying a few things yourself), explain in plain sentences:

  • what am I asking pandas to do with the index_col=1, parse_dates=True keyword arguments? Why am I doing this?

  • what am I asking pandas to do with .drop()? Why axis=1?

Now let me do something else for you:

dfmeta = pd.read_csv('ZEHNMIN Parameter-Metadaten.csv', index_col=0)
dfmeta.loc[df.columns]
Kurzbeschreibung Beschreibung Einheit
DD Windrichtung Windrichtung, vektorielles Mittel über 10 Minuten °
FF vektorielle Windgeschwindigkeit Windgeschwindigkeit, vektorielles Mittel über ... m/s
GSX Globalstrahlung Globalstrahlung, arithmetisches Mittel über 10... W/m²
P Luftdruck Luftdruck, Basiswert zur Minute10 hPa
RF Relative Feuchte Relative Luftfeuchte, Basiswert zur Minute10 %
RR Niederschlag 10 Minuten Summe des Niederschlags, Summe der ... mm
SO Sonnenscheindauer Sonnenscheindauer, Sekundensumme über 10 Minuten s
TB1 Erdbodentemperatur in 10cm Tiefe Erdbodentemperatur in 10cm Tiefe, Basiswert zu... °C
TB2 Erdbodentemperatur in 20cm Tiefe Erdbodentemperatur in 20cm Tiefe, Basiswert zu... °C
TB3 Erdbodentemperatur in 50cm Tiefe Erdbodentemperatur in 50cm Tiefe, Basiswert zu... °C
TL Lufttemperatur in 2m Lufttemperatur in 2m Höhe, Basiswert zur Minute10 °C
TP Taupunkt Taupunktstemperatur, Basiswert zur Minute10 °C

02-02: again, explain in plain sentences what dfmeta.loc[df.columns] is doing, and why it works that way.

Finally, let me do a last step for you before you start coding:

dfh = df.resample('H').mean()

02-03: explore the dfh dataframe. Explain, in plain words, what the purpose of .resample('H') followed by .mean() is. Explain what .resample('H').max() and .resample('H').sum() would do.

02-04: Using np.allclose (a small generic illustration of this function is given after the list below), make sure that the average of the first hour (which you’ll compute yourself from df) is indeed equal to the first row of dfh. Now, two variables in the dataframe have units that aren’t suitable for averaging. Please convert the following variables to the correct units:

  • RR needs to be converted from the average of 10 min sums to mm/h

  • SO needs to be converted from the average of 10 min sums to s/h
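If np.allclose is new to you, here is a tiny illustration with made-up numbers (unrelated to the station data):

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 2.0, 3.0 + 1e-9])   # tiny numerical difference
print(np.allclose(a, b))               # True: equal within a small tolerance
print(np.allclose(a, b + 0.1))         # False: the difference is too large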

From now on, we will use the hourly data only (and further aggregations when necessary). The 10 min data are great but require a little more pandas kung fu (the Chinese term, not the sport) to be used efficiently.

Spend some time exploring the dfh dataframe we just created. What time period does it cover? What variables does it contain?

Note on pandas: all the exercises below can be done with or without pandas. Each question can be answered with very few lines of code (often one or two) with pandas, and I recommend using it as much as possible. If in doubt, you can always use numpy instead: you can access the data as a numpy array with df[column_name].values.
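For example, assuming the column names listed in the metadata table above, converting one column to a plain numpy array looks like this:

tl = dfh['TL'].values   # 2 m air temperature as a numpy array
print(type(tl), tl.shape)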

03 - Precipitation#

In this section, we will focus on precipitation only.

03-01: Compute the average annual precipitation (m/year) over the 7-year period.

03-02: What is the smallest non-zero precipitation measured at the station? What is the maximum hourly precipitation measured at the station? When did this occur?

03-03: Plot a histogram of hourly precipitation, with bins of size 0.2 mm/h, starting at 0.1 mm/h and ending at 25 mm/h. Plot the same data, but this time with a logarithmic y-axis. Compute the 99th percentile (or quantile) of hourly precipitation.
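If you are unsure how to control the histogram bins or switch to a logarithmic axis, here is a generic sketch on random data (the bin edges and values below are arbitrary, not the ones required by the exercise):

import numpy as np
import matplotlib.pyplot as plt

data = np.random.exponential(scale=1.0, size=1000)   # made-up data
bins = np.arange(0, 5.01, 0.5)                       # explicit bin edges
plt.hist(data, bins=bins)
plt.yscale('log')                                    # logarithmic y-axis
plt.xlabel('value')
plt.ylabel('count')
plt.show()
print(np.quantile(data, 0.99))                       # 99th percentile of the sample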

03-04: Compute daily sums (mm/d) of precipitation (tip: use .resample again). Compute the average number of rain days per year in Innsbruck (a “rain day” is a day with at least 0.1 mm/d of measured precipitation).

03-05: Now select (subset) the daily dataframe to keep only the daily data in the months of December, January, February (DJF). To do this, note that dfh.index.month exists and can be used to subset the data efficiently. Compute the average precipitation in DJF (mm/d), and the average number of rainy days in DJF. Repeat with the months of June, July, August (JJA).
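As a generic illustration of this kind of subsetting (on toy data, not the station data):

import pandas as pd
import numpy as np

toy = pd.Series(np.arange(365), index=pd.date_range('2020-01-01', periods=365, freq='D'))
january_only = toy[toy.index.month == 1]   # boolean mask built from the datetime index
print(len(january_only))                   # 31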

03-06: Repeat the DJF and JJA subsetting, but this time with hourly data. Count the total number of times that hourly precipitation in DJF is above the 99th percentile computed in exercise 03-03. Repeat with JJA.

03-07: Compute and plot the average daily cycle of hourly precipitation in DJF and JJA. I expect a plot similar to this example. To compute the daily cycle, I recommend combining two very useful tools. First, start by noticing that the datetime index has an .hour attribute (e.g. dfh.index.hour) which can be used to categorize data. Then, note that df.groupby exists and can be used exactly for that (documentation).
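Here is a minimal, generic example of the groupby pattern on toy hourly data (again unrelated to the station files):

import pandas as pd
import numpy as np

idx = pd.date_range('2020-01-01', periods=240, freq='h')   # ten days of hourly timestamps ('H' in older pandas)
toy = pd.Series(np.random.rand(240), index=idx)
daily_cycle = toy.groupby(toy.index.hour).mean()           # one mean value per hour of day (0..23)
print(daily_cycle.shape)                                   # (24,)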

04 - A few other variables#

In this section, we will continue to analyze the weather station data.

04-01: Verify that the three soil temperatures have approximately the same average value over the entire period. Now plot the three soil temperature timeseries at hourly resolution over the course of the year 2020 (example). Repeat the plot for the month of May 2020.

04-02: Plot the average daily cycle of all three soil temperatures.

04-03: Compute the difference (in °C) between the air temperature and the dewpoint temperature. Now plot this difference on a scatter plot (x-axis: relative humidity, y-axis: temperature difference).

05 - Free coding project#

The last part of this semester project is up to you! You are free to explore whatever interests you. I do, however, add three requirements:

  1. This section should have at least 5 original plots in it. They are the output of your analysis.

  2. This section should also use additional data that you downloaded yourself. The easiest way would probably be to download data from one or more other stations in the ZAMG database, or data from the same station but for another time period (e.g. for trend or change analysis). You can, however, decide to do something completely different if you prefer (as long as you download and read at least one more file).

  3. This section should contain at least one regression or correlation analysis between two parameters (a minimal sketch of such an analysis is given after the list of examples below). Examples:

    • between two different variables at the same station (like we did with the dewpoint above)

    • between different stations (for example, average temperature as a function of station elevation)

    • between average temperature and time (trends analysis)

    • etc.
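To make requirement 3 concrete, here is a minimal sketch of what such an analysis could look like, on synthetic data (the variable names are placeholders; np.polyfit or np.corrcoef would work just as well if you prefer to avoid scipy):

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

x = np.random.rand(200) * 10              # synthetic predictor
y = 2.0 * x + np.random.randn(200) * 3    # synthetic response with noise

res = stats.linregress(x, y)              # slope, intercept, r-value, p-value, ...
print(f'slope={res.slope:.2f}, r={res.rvalue:.2f}, p={res.pvalue:.3g}')

plt.scatter(x, y, s=5)
plt.plot(x, res.intercept + res.slope * x, color='C1', label='linear fit')
plt.xlabel('x (placeholder units)')
plt.ylabel('y (placeholder units)')
plt.legend()
plt.show()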

That’s it! Here are a few ideas:

  • detection of trends and changes at the station Innsbruck for 1993-2021

  • comparison of 5-yr climatologies at various stations in Tirol, taking elevation or location into account

  • compute the theoretical day length from the station’s longitude and latitude (you can find solutions for this online; just let me know the source if you use one), and use these computations to compare the measured sunshine duration to the maximum day length. This can be used to classify “sunshine days”, for example.

  • use the python “windrose” library to plot a windrose at different locations and times of day (a minimal usage sketch is given after this list).

  • etc.
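For the windrose idea above, and if I remember the library’s basic API correctly, a minimal usage sketch with made-up wind data could look like this (you will need to install the package first, e.g. with pip install windrose):

import numpy as np
from windrose import WindroseAxes

wd = np.random.rand(500) * 360   # made-up wind directions (degrees)
ws = np.random.rand(500) * 10    # made-up wind speeds (m/s)

ax = WindroseAxes.from_ax()
ax.bar(wd, ws, normed=True, opening=0.8, edgecolor='white')
ax.set_legend()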

If you have your own idea but are unsure whether it is too much or not enough, come see me in class! In general, the three requirements above should be enough.

My goal with this section is to let you formulate a programming goal and implement it.