{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "394753bc-ab2b-417a-a98a-ba988bd62edd",
   "metadata": {
    "tags": []
   },
   "source": [
    "# Importing weather data"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c557767d-2319-441a-8b45-6fe8e4bbfb32",
   "metadata": {},
   "source": [
    "The DWD provides its weather data via an OpenData server. Getting at the data is fairly involved: the files have to be downloaded individually and then merged."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7abd6877-b35f-4604-ba57-399234b97281",
   "metadata": {},
   "source": [
    "First, preparations for importing the data are made: the required libraries are imported and a few variables are set.\n",
    "In addition, a folder is created in which the downloaded data can be stored."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "c87fe05a-63e3-4748-a01a-d46cb12e9b05",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Done\n"
     ]
    }
   ],
   "source": [
    "import io\n",
    "import os\n",
    "import zipfile\n",
    "from datetime import datetime\n",
    "\n",
    "import pandas as pd\n",
    "import requests\n",
    "\n",
    "from influxdb_client import InfluxDBClient, Point, WritePrecision, BucketRetentionRules\n",
    "from influxdb_client.client.write_api import SYNCHRONOUS\n",
    "\n",
    "url = 'https://opendata.dwd.de/climate_environment/CDC/observations_germany/climate/10_minutes/air_temperature/now/'\n",
    "download_folder = 'dwd-data/'\n",
    "\n",
    "token = \"8TYGzTJhqCyKpspMp95Yk858DY2uMzj6wbexbFGMiaLjcG6caiQtNiBKOFlxXnYuEoduFqS9o6_q8UmP1eJC0w==\"\n",
    "org = \"test-org\"\n",
    "bucket = \"dwd_now\"\n",
    "influx_url = \"http://influxdb:8086\"\n",
    "\n",
    "if not os.path.isdir(download_folder):\n",
    "    print(\"Created data folder\")\n",
    "    os.mkdir(download_folder)\n",
    "\n",
    "print(\"Done\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7cad1e52-4d22-4dc5-952c-3578d73280ec",
   "metadata": {},
   "source": [
    "Before the data can be imported, a bucket first has to be created in the database. If the bucket already exists, it is deleted and created again."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "b9acf473-2f26-40c6-9c48-1a4ec159bd3d",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Deleting existing bucket\n",
      "Bucket created\n"
     ]
    }
   ],
   "source": [
    "with InfluxDBClient(url=influx_url, token=token) as client:\n",
    "    buckets_api = client.buckets_api()\n",
    "    buckets = buckets_api.find_buckets().buckets\n",
    "    data_bucket = [x for x in buckets if x.name == bucket]\n",
    "\n",
    "    if len(data_bucket) > 0:\n",
    "        print(\"Deleting existing bucket\")\n",
    "        #buckets_api.delete_bucket(data_bucket[0])  # do not actually delete right now, that is annoying\n",
    "\n",
    "    retention_rules = BucketRetentionRules(type=\"expire\", every_seconds=86400)\n",
    "    #created_bucket = buckets_api.create_bucket(bucket_name=bucket, retention_rules=retention_rules, org=org)\n",
    "\n",
    "    print(\"Bucket created\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "88787497-ec8d-47ed-b885-d1a1cfd443e2",
   "metadata": {},
   "source": [
    "To obtain the data from the web page, screen scraping is used to find every link to one of the zipped CSV files. BeautifulSoup is used for this. Before BeautifulSoup can find the links, the HTML page itself has to be downloaded first."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "90f1eb08-b4dd-4743-ad38-492bfd742fec",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Download\n",
      "<Response [200]>\n",
      "<a href=\"10minutenwerte_TU_00073_now.zip\">10minutenwerte_TU_00073_now.zip</a>\n"
     ]
    }
   ],
   "source": [
    "from bs4 import BeautifulSoup\n",
    "\n",
    "print(\"Download\")\n",
    "response = requests.get(url)\n",
    "print(response)\n",
    "\n",
    "soup = BeautifulSoup(response.text, 'html.parser')\n",
    "\n",
    "dwd_links = soup.find_all('a')\n",
    "\n",
    "print(dwd_links[2])\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ac3c644a-cac2-41b5-9be0-f01bcb9a40cc",
   "metadata": {},
   "source": [
    "The links filtered in this way are then downloaded and saved in the following loop. The path of the station description file is stored in a separate variable so that the station data can be accessed later."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "2524986b-9c26-42d5-8d76-f4e228d0eb48",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Download 473 of 473\r"
     ]
    }
   ],
   "source": [
    "def download(file_url, dest_file):\n",
    "    response = requests.get(file_url)\n",
    "    with open(dest_file, 'wb') as f:\n",
    "        f.write(response.content)\n",
    "\n",
    "download_counter = 1\n",
    "dwd_len = len(dwd_links)\n",
    "station_file = ''\n",
    "\n",
    "for file_text in dwd_links:\n",
    "\n",
    "    if '10minutenwerte' in str(file_text.text):\n",
    "        dest_file = download_folder + file_text.text\n",
    "        if not os.path.isfile(dest_file):\n",
    "            file_url = url + file_text.text\n",
    "            download(file_url, dest_file)\n",
    "    elif 'Beschreibung_Stationen' in str(file_text):\n",
    "        dest_file = download_folder + file_text.text\n",
    "        file_url = url + file_text.text\n",
    "        download(file_url, dest_file)\n",
    "        station_file = dest_file\n",
    "\n",
    "    print(\"Download \", download_counter, \" of \", dwd_len, end='\\r')\n",
    "    download_counter += 1"
   ]
  },
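  {
   "cell_type": "markdown",
   "id": "3f2a9c4e-1b7d-4e2a-9c31-5a8e6f0d21aa",
   "metadata": {},
   "source": [
    "A hedged aside (not part of the original workflow): the `download` helper does not check the HTTP status, so a failed request would silently write an error page to disk. A slightly more defensive sketch:\n",
    "\n",
    "```python\n",
    "def download(file_url, dest_file):\n",
    "    response = requests.get(file_url, timeout=30)\n",
    "    response.raise_for_status()  # fail loudly on HTTP errors\n",
    "    with open(dest_file, 'wb') as f:\n",
    "        f.write(response.content)\n",
    "```"
   ]
  },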
  {
   "cell_type": "markdown",
   "id": "14b90ff2-1473-4e44-9c6b-fdd2d6c20773",
   "metadata": {},
   "source": [
    "First, the weather stations are read into the class Station. A dictionary is built from the instances so that stations can be looked up by their \"Stations_id\". Because the station data is not stored as CSV, a custom technique had to be developed to read the fields.\n",
    "\n",
    "First, characters are read until no more spaces are seen; then reading continues until a space appears again. This way the fields can be read one after another."
   ]
  },
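  {
   "cell_type": "markdown",
   "id": "3f2a9c4e-1b7d-4e2a-9c31-5a8e6f0d21ab",
   "metadata": {},
   "source": [
    "A hedged side note (not part of the original workflow): pandas can often read such whitespace-separated files directly. The sketch below runs on a small made-up sample row; it would not work unchanged on the real station file, because multi-word station names like \"Bad Lippspringe\" break a pure whitespace split, which is exactly why the parser below counts runs of spaces instead."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3f2a9c4e-1b7d-4e2a-9c31-5a8e6f0d21ac",
   "metadata": {},
   "outputs": [],
   "source": [
    "import io\n",
    "import pandas as pd\n",
    "\n",
    "# Hypothetical sample in the rough layout of the station description file\n",
    "sample = (\n",
    "    \"Stations_id von_datum bis_datum Stationshoehe geoBreite geoLaenge Stationsname Bundesland\\n\"\n",
    "    \"----------- --------- --------- ------------- --------- --------- ------------ ----------\\n\"\n",
    "    \"00044 20070209 20230101 44 52.9336 8.2370 Grossenkneten Niedersachsen\\n\"\n",
    ")\n",
    "\n",
    "# skiprows=[1] drops the dashed separator line; any run of whitespace separates columns\n",
    "df = pd.read_csv(io.StringIO(sample), sep=r'\\\\s+', skiprows=[1])\n",
    "print(df.loc[0, 'Stationsname'])  # Grossenkneten"
   ]
  },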
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "430041d7-21fa-47d8-8df9-7933a8749f82",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "dwd-data/zehn_now_tu_Beschreibung_Stationen.txt\n",
      "Großenkneten \n"
     ]
    }
   ],
   "source": [
    "class Station:\n",
    "    def __init__(self, Stations_id, Stationshoehe, geoBreite, geoLaenge, Stationsname, Bundesland):\n",
    "        self.Stations_id = Stations_id\n",
    "        self.Stationshoehe = Stationshoehe\n",
    "        self.geoBreite = geoBreite\n",
    "        self.geoLaenge = geoLaenge\n",
    "        self.name = Stationsname\n",
    "        self.Bundesland = Bundesland\n",
    "\n",
    "def read_station_file():\n",
    "\n",
    "    def get_value(i, line, empty_spaces):\n",
    "        # Skip leading spaces, then read until `empty_spaces` space characters have been seen\n",
    "        value = \"\"\n",
    "        while line[i] == ' ':\n",
    "            i += 1\n",
    "        spaces = 0\n",
    "        while spaces < empty_spaces:\n",
    "            if line[i] == ' ':\n",
    "                spaces += 1\n",
    "            value += line[i]\n",
    "            i += 1\n",
    "        return (i, value)\n",
    "\n",
    "    stations = {}\n",
    "    i = 0\n",
    "    with open(station_file, \"r\", encoding=\"cp1252\") as f:\n",
    "        for line in f:\n",
    "            if i > 1:  # skip the two header lines\n",
    "\n",
    "                y = 0\n",
    "\n",
    "                result = get_value(y, line, 1)\n",
    "                Stations_id = str(int(result[1]))  # the int round-trip strips the leading zeros\n",
    "                y = result[0]\n",
    "\n",
    "                result = get_value(y, line, 1)\n",
    "                von_datum = result[1]\n",
    "                y = result[0]\n",
    "\n",
    "                result = get_value(y, line, 1)\n",
    "                bis_datum = result[1]\n",
    "                y = result[0]\n",
    "\n",
    "                result = get_value(y, line, 1)\n",
    "                Stationshoehe = result[1]\n",
    "                y = result[0]\n",
    "\n",
    "                result = get_value(y, line, 1)\n",
    "                geoBreite = result[1]\n",
    "                y = result[0]\n",
    "\n",
    "                result = get_value(y, line, 1)\n",
    "                geoLaenge = result[1]\n",
    "                y = result[0]\n",
    "\n",
    "                result = get_value(y, line, 3)  # station names can contain spaces\n",
    "                Stationsname = result[1]\n",
    "                y = result[0]\n",
    "\n",
    "                result = get_value(y, line, 1)\n",
    "                Bundesland = result[1]\n",
    "                y = result[0]\n",
    "\n",
    "                station = Station(Stations_id, Stationshoehe, geoBreite, geoLaenge, Stationsname, Bundesland)\n",
    "                stations[Stations_id] = station\n",
    "\n",
    "            i += 1\n",
    "    return stations\n",
    "\n",
    "\n",
    "print(station_file)\n",
    "stations = read_station_file()\n",
    "print(stations[\"44\"].name)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "81bbb42e-3bd9-4b29-a6e3-11e1d1593307",
   "metadata": {},
   "source": [
    "To get at the measurements in the files, the archives have to be unpacked.\n",
    "This can take some time. The station currently being imported is displayed throughout."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "27966795-ee46-4af1-b63c-0f728333ec79",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Import finished                         \r"
     ]
    }
   ],
   "source": [
    "def import_data(df):\n",
    "    client = InfluxDBClient(url=influx_url, token=token, org=org)\n",
    "    write_api = client.write_api(write_options=SYNCHRONOUS)\n",
    "\n",
    "    error = 0\n",
    "\n",
    "    for index, row in df.iterrows():\n",
    "        measurement_time = datetime.strptime(str(int(row.iloc[1])), \"%Y%m%d%H%M\")\n",
    "\n",
    "        try:\n",
    "            station = stations[str(row.iloc[0])].name\n",
    "        except KeyError:\n",
    "            print(\"Station unknown\", end='\\r')\n",
    "        else:\n",
    "            try:\n",
    "                p = Point(station)\n",
    "\n",
    "                #if row.iloc[3] != -999: p.field(\"PP_10\", row.iloc[3])\n",
    "                p.field(\"PP_10\", row.iloc[3])\n",
    "                p.field(\"TT_10\", row.iloc[4])\n",
    "                p.field(\"TM5_10\", row.iloc[5])\n",
    "                p.field(\"RF_10\", row.iloc[6])\n",
    "                p.field(\"TD_10\", row.iloc[7])\n",
    "\n",
    "                p.time(measurement_time, WritePrecision.S)\n",
    "                write_api.write(bucket=bucket, record=p)\n",
    "                print(\"                                        \", end='\\r')\n",
    "                print(\"Importing station: \", station, end='\\r')\n",
    "            except Exception:\n",
    "                error += 1\n",
    "                if error == 1:  # only report the first failure\n",
    "                    print(\"Error importing station: \", station)\n",
    "    client.close()\n",
    "\n",
    "def read_dwd_file(file):\n",
    "    df = pd.read_csv(file, sep=';')\n",
    "    import_data(df)\n",
    "\n",
    "\n",
    "for filename in os.listdir(download_folder):\n",
    "    file_path = os.path.join(download_folder, filename)\n",
    "    if '.zip' in str(file_path):\n",
    "        with zipfile.ZipFile(file_path) as zf:\n",
    "            with zf.open(zf.namelist()[0]) as f:\n",
    "                read_dwd_file(f)\n",
    "\n",
    "print(\"                                        \", end='\\r')\n",
    "print(\"Import finished\", end='\\r')"
   ]
  },
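  {
   "cell_type": "markdown",
   "id": "3f2a9c4e-1b7d-4e2a-9c31-5a8e6f0d21ad",
   "metadata": {},
   "source": [
    "A hedged performance aside (not part of the original workflow): `write_api.write` also accepts a list of points, so the rows of one file could be collected first and written in a single call, which is typically much faster than one synchronous write per row:\n",
    "\n",
    "```python\n",
    "points = []\n",
    "for index, row in df.iterrows():\n",
    "    # ... build the Point p as above ...\n",
    "    points.append(p)\n",
    "write_api.write(bucket=bucket, record=points)\n",
    "```"
   ]
  },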
  {
   "cell_type": "markdown",
   "id": "e2112e23-4a2f-40cb-adf6-44e3caa7c6f7",
   "metadata": {},
   "source": [
    "# Processing the weather data\n",
    "Once the weather data has been imported, it can be processed."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8461cb29-7634-4b70-9f01-0356b5219046",
   "metadata": {},
   "source": [
    "### Maximum, minimum, and average values"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d70b0073-50e1-4042-a8c0-db4848729d4a",
   "metadata": {},
   "source": [
    "As an illustration, the daily maximum and minimum as well as the average over the last 24 hours are determined for Bad Lippspringe."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dacfcc8e-a74b-4067-8f55-ac4063294dec",
   "metadata": {},
   "source": [
    "First, the data from the last 24 hours has to be fetched from the database. To do this, a Flux query that filters for the desired data is executed via the query client."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "a3e48dfa-eeca-4a3c-a8c7-7b65c223b6c6",
   "metadata": {},
   "outputs": [],
   "source": [
    "client = InfluxDBClient(url=influx_url, token=token, org=org)\n",
    "\n",
    "query_api = client.query_api()\n",
    "query = 'from(bucket: \"' + bucket + '\")\\\n",
    "    |> range(start: -24h)\\\n",
    "    |> filter(fn: (r) => r[\"_measurement\"] == \"Lippspringe, Bad \")\\\n",
    "    |> filter(fn: (r) => r[\"_field\"] == \"TM5_10\")'\n",
    "result = query_api.query(org=org, query=query)"
   ]
  },
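  {
   "cell_type": "markdown",
   "id": "3f2a9c4e-1b7d-4e2a-9c31-5a8e6f0d21ae",
   "metadata": {},
   "source": [
    "A hedged aside (not part of the original analysis): instead of iterating over the records in Python, Flux can compute such aggregates server-side, e.g.\n",
    "\n",
    "```\n",
    "from(bucket: \"dwd_now\")\n",
    "    |> range(start: -24h)\n",
    "    |> filter(fn: (r) => r[\"_measurement\"] == \"Lippspringe, Bad \")\n",
    "    |> filter(fn: (r) => r[\"_field\"] == \"TM5_10\")\n",
    "    |> max()\n",
    "```\n",
    "\n",
    "with `min()` and `mean()` working the same way. The manual loop is kept here for illustration."
   ]
  },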
  {
   "cell_type": "markdown",
   "id": "711e5ab4-c5c0-4e03-bc49-fd2f2d0946ed",
   "metadata": {},
   "source": [
    "Next, a few variables are defined to obtain the maximum and minimum values.\n",
    "By default, the maximum starts at a very low value and the minimum at a very high one.\n",
    "\n",
    "In addition, a counter and a sum variable are defined for the average."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "053593fb-1fad-4280-b519-a163b89daa7f",
   "metadata": {},
   "outputs": [],
   "source": [
    "max = -254  # note: max, min and sum shadow the Python built-ins in the cells below\n",
    "min = 254\n",
    "\n",
    "i = 0\n",
    "sum = 0"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "94441229-f809-4942-908f-9c0397461245",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The daily maximum over the last 24h is: -254\n",
      "The daily minimum over the last 24h is: 254\n"
     ]
    }
   ],
   "source": [
    "for table in result:\n",
    "    for record in table.records:\n",
    "        value = record.get_value()\n",
    "        i = i + 1\n",
    "        sum = sum + value\n",
    "        if value > max:\n",
    "            max = value\n",
    "        if value < min:\n",
    "            min = value\n",
    "\n",
    "print(\"The daily maximum over the last 24h is: \" + str(max))\n",
    "print(\"The daily minimum over the last 24h is: \" + str(min))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8a9d2fd8-08ac-4b32-83b0-df03285dda93",
   "metadata": {},
   "source": [
    "To compute the average, the sum of all values, i.e. the variable sum, just has to be divided by the number of values \"i\"."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "6cf90047-ba7b-42fc-b012-aa9647d60191",
   "metadata": {},
   "outputs": [
    {
     "ename": "ZeroDivisionError",
     "evalue": "division by zero",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mZeroDivisionError\u001b[0m Traceback (most recent call last)",
      "Input \u001b[0;32mIn [18]\u001b[0m, in \u001b[0;36m<cell line: 1>\u001b[0;34m()\u001b[0m\n\u001b[0;32m----> 1\u001b[0m average \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43msum\u001b[39;49m\u001b[43m \u001b[49m\u001b[38;5;241;43m/\u001b[39;49m\u001b[43m \u001b[49m\u001b[43mi\u001b[49m\n\u001b[1;32m 2\u001b[0m \u001b[38;5;28mprint\u001b[39m(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mDie Durchschnittstemperatur der letzten 24h liegt bei: \u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;241m+\u001b[39m \u001b[38;5;28mstr\u001b[39m(average))\n",
      "\u001b[0;31mZeroDivisionError\u001b[0m: division by zero"
     ]
    }
   ],
   "source": [
    "average = sum / i  # raises ZeroDivisionError if no values were retrieved\n",
    "print(\"The average temperature over the last 24h is: \" + str(average))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "beac4081-54db-405a-a3c8-612918ee6f45",
   "metadata": {},
   "source": [
    "### Hottest weather station"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "476a3d5d-9ef9-4746-9993-e4b5af076883",
   "metadata": {},
   "source": [
    "First, all the necessary data has to be retrieved."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4cf4e14f-37f7-4d67-98dd-55c53a5235d7",
   "metadata": {},
   "outputs": [],
   "source": [
    "client = InfluxDBClient(url=influx_url, token=token, org=org)\n",
    "\n",
    "query_api = client.query_api()\n",
    "query = 'from(bucket: \"' + bucket + '\")\\\n",
    "    |> range(start: -24h)\\\n",
    "    |> filter(fn: (r) => r[\"_field\"] == \"TM5_10\")'\n",
    "result = query_api.query(org=org, query=query)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "033f9c72-859b-4d64-92d3-6def4aaaecf4",
   "metadata": {},
   "source": [
    "To determine the hottest weather station, several variables have to be defined first."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6fd12713-48ed-4624-8436-b3f3e23d7612",
   "metadata": {},
   "outputs": [],
   "source": [
    "station_name = \"\"        # name of the station currently being processed\n",
    "max_station_temp = -254  # maximum temperature of the hottest station\n",
    "max_station_name = \"\"    # name of the hottest station"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "add9133e-09df-4773-9913-9f617beeee5f",
   "metadata": {},
   "source": [
    "The next loop iterates over the rows of the tables. In the process, the maximum value is determined for each station."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c5be1f84-d6a0-42cb-9ce8-38d5de155a53",
   "metadata": {},
   "outputs": [],
   "source": [
    "for table in result:\n",
    "    max = -254  # maximum of the station currently being processed\n",
    "    for record in table.records:\n",
    "        station_name = record.get_measurement()  # fetch the station name\n",
    "        value = record.get_value()               # fetch the measured value\n",
    "        if value > max:\n",
    "            max = value\n",
    "    if max > max_station_temp:           # if the current station is hotter than the hottest one so far\n",
    "        max_station_temp = max           # store the new hottest value\n",
    "        max_station_name = station_name  # and also store the station's name\n",
    "\n",
    "print(\"The hottest station is \" + str(max_station_name) + \" with a temperature of \" + str(max_station_temp) + \" °C.\")"
   ]
  },
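  {
   "cell_type": "markdown",
   "id": "3f2a9c4e-1b7d-4e2a-9c31-5a8e6f0d21af",
   "metadata": {},
   "source": [
    "A hedged aside (not part of the original analysis): the same result can usually be obtained server-side with Flux, e.g.\n",
    "\n",
    "```\n",
    "from(bucket: \"dwd_now\")\n",
    "    |> range(start: -24h)\n",
    "    |> filter(fn: (r) => r[\"_field\"] == \"TM5_10\")\n",
    "    |> group()\n",
    "    |> max()\n",
    "```\n",
    "\n",
    "where `group()` merges all stations into one table so that `max()` returns the single hottest record, including its `_measurement`, i.e. the station name."
   ]
  },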
  {
   "cell_type": "markdown",
   "id": "95c26c18-6993-4245-b8f8-58fce2377179",
   "metadata": {},
   "source": [
    "Determining the lowest temperature works very similarly."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3736b28c-c6b0-4d94-a750-6598d28dc316",
   "metadata": {},
   "outputs": [],
   "source": [
    "min_station_temp = 254  # minimum temperature of the coldest station\n",
    "min_station_name = \"\"   # name of the coldest station\n",
    "\n",
    "for table in result:\n",
    "    min = 254\n",
    "    for record in table.records:\n",
    "        station_name = record.get_measurement()\n",
    "        value = record.get_value()\n",
    "        if value < min:\n",
    "            min = value\n",
    "    if min < min_station_temp:\n",
    "        min_station_temp = min\n",
    "        min_station_name = station_name\n",
    "\n",
    "print(\"The coldest station is \" + str(min_station_name) + \" with a temperature of \" + str(min_station_temp) + \" °C.\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}