diff --git a/.github/workflows/publish.yml b/.github/workflows/publish.yml index c9c610f..060ad39 100644 --- a/.github/workflows/publish.yml +++ b/.github/workflows/publish.yml @@ -17,11 +17,15 @@ jobs: - name: Set up Quarto uses: quarto-dev/quarto-actions/setup@v2 with: - version: pre-release + version: 1.5.57 - uses: r-lib/actions/setup-r@v2 with: use-public-rspm: true + + - name: Avoid bundling Shinylive packages + run: echo "SHINYLIVE_WASM_PACKAGES=0" >> $GITHUB_ENV + - uses: r-lib/actions/setup-r-dependencies@v2 with: extra-packages: | @@ -31,6 +35,8 @@ jobs: any::downlit any::xml2 any::shinyMobile + BristolMyersSquibb/blockr + any::pracma cache-version: 2 - name: Render Quarto Project @@ -49,7 +55,7 @@ jobs: uses: lycheeverse/lychee-action@v1 with: fail: true - args: "docs --exclude-loopback --insecure --exclude-mail --exclude-path docs/site_libs --cache --max-cache-age 1d" + args: "docs --exclude-loopback --insecure --accept 200,429 --exclude-mail --exclude-path docs/site_libs --cache --max-cache-age 1d" - name: Read lychee's out.md if: failure() diff --git a/DESCRIPTION b/DESCRIPTION index 41c6a3c..eeeda03 100644 --- a/DESCRIPTION +++ b/DESCRIPTION @@ -12,6 +12,7 @@ Imports: downlit, xml2, rmarkdown, - shinylive + shinylive, + seasonal Remotes: posit-dev/r-shinylive diff --git a/_extensions/quarto-ext/shinylive/shinylive.lua b/_extensions/quarto-ext/shinylive/shinylive.lua index 6890a7a..a8ce817 100644 --- a/_extensions/quarto-ext/shinylive/shinylive.lua +++ b/_extensions/quarto-ext/shinylive/shinylive.lua @@ -458,6 +458,13 @@ return { el.attr.classes = pandoc.List() el.attr.classes:insert("shinylive-r") end + + el.text = + "#| '!! shinylive warning !!': |\n".. + "#| shinylive does not work in self-contained HTML documents.\n" .. + "#| Please set `embed-resources: false` in your metadata.\n" .. + el.text + return el end } diff --git a/_freeze/posts/2024-09-09-zurich-roadcycling-wc-2024/index/execute-results/html.json b/_freeze/posts/2024-09-09-zurich-roadcycling-wc-2024/index/execute-results/html.json index 2e14c67..68105a3 100644 --- a/_freeze/posts/2024-09-09-zurich-roadcycling-wc-2024/index/execute-results/html.json +++ b/_freeze/posts/2024-09-09-zurich-roadcycling-wc-2024/index/execute-results/html.json @@ -1,8 +1,8 @@ { - "hash": "724f884a775ed2912cb4a714baa53228", + "hash": "32e35bb5a033905f8461c43952783588", "result": { "engine": "knitr", - "markdown": "---\nlayout: post\ntitle: \"2024 road and para-cycling road world championships: preliminaRy analysis\"\nimage: logo.png\nauthor: David Granjon\ndate: '2024-09-10'\ncategories:\n - sport\n - R\nformat: \n html:\n code-fold: 'show'\nfilters:\n - shinylive\n---\n\n\n\n\n![](logo.png){width=25% fig-align=\"center\"}\n\n\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nlibrary(gpx)\nlibrary(dplyr)\nlibrary(leaflet)\nlibrary(ggplot2)\nlibrary(patchwork)\nlibrary(rayshader)\nlibrary(ggrgl)\nlibrary(gtsummary)\n\nlibrary(rgl)\noptions(rgl.useNULL = TRUE)\nsetupKnitr(autoprint = TRUE)\n```\n:::\n\n\n\n\nFrom Sept 21 to Sept 29, Zurich will welcome the 2024 road and para-cycling road world [championships](https://zurich2024.com/en/). To mark the occasion, my friends and I went to do the 2 first loops (\"only\" 140km, 1700m elevation) of the Elite Mens circuit that will start from [Winterthur](https://zurich2024.com/en/rennstrecken/winterthur-zurich/) on Sept 29. 273km and 4470m of pure pleasure! I am not sure whether riders will have time to enjoy the view. 
At least I hope they have a better weather than us.\n\n## Circuit overview\n\n\n### Get the GPX file\n\nThe road circuit is available as __GPX__ [format](https://zurich2024.com/wp-content/uploads/2024/08/GPX-22-Winterthur-Zurich-1.gpx), which can be imported by any route planner like Komoot or Strava ... or with R :).\n\nThere are various way to read such format in R, as shown in this other [article](https://www.appsilon.com/post/r-gpx-files). For this blog post, we leverage the `gpx` [package](https://www.rdocumentation.org/packages/tmaptools/versions/2.0/topics/read_GPX):\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nzch_gpx <- read_gpx(\"GPX-22-Winterthur-Zurich-1.gpx\")\nglimpse(zch_gpx)\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\nList of 3\n $ routes :List of 1\n ..$ :'data.frame':\t10687 obs. of 5 variables:\n .. ..$ Elevation : num [1:10687] 438 438 438 438 438 ...\n .. ..$ Time : POSIXct[1:10687], format: \"2023-11-03 06:08:27\" \"2023-11-03 06:08:29\" ...\n .. ..$ Latitude : num [1:10687] 47.5 47.5 47.5 47.5 47.5 ...\n .. ..$ Longitude : num [1:10687] 8.72 8.72 8.72 8.72 8.72 ...\n .. ..$ extensions: logi [1:10687] NA NA NA NA NA NA ...\n $ tracks :List of 1\n ..$ :'data.frame':\t0 obs. of 4 variables:\n .. ..$ Elevation: logi(0) \n .. ..$ Time : logi(0) \n .. ..$ Latitude : logi(0) \n .. ..$ Longitude: logi(0) \n $ waypoints:'data.frame':\t9 obs. of 6 variables:\n ..$ Elevation : num [1:9] 464 411 411 411 411 ...\n ..$ Time : POSIXct[1:9], format: NA NA ...\n ..$ Latitude : num [1:9] 47.5 47.4 47.4 47.4 47.4 ...\n ..$ Longitude : num [1:9] 8.76 8.55 8.55 8.55 8.55 ...\n ..$ Name : chr [1:9] \"km 0\" \"Info\" \"Info\" \"Info\" ...\n ..$ Description: chr [1:9] NA NA NA NA ...\n```\n\n\n:::\n:::\n\n\n\n\nWe obtain a list containing 3 dataframes, namely `routes`, `tracks` and `waypoints`.\n\n### Visualize the route\n\nIn the following, we can visualize these data on an __interactive map__. To do so, I chose the `leaflet` package. First, we pass the data to `leaflet()`, then we select a __map provider__ with `addTiles()`. I like to use the a rather light one as I want the user to focus on the route trace and not on any single mountain or village. Therefore, I went for the `CartoDB.Positron` tiles, which you can test [here](https://leaflet-extras.github.io/leaflet-providers/preview/). The trace is injected with `addPolylines`, passing the `Latitude` and `Longitude` columns of our dataset, as well as few styling parameters such as color, line weight and opacity.\n\nThen, we add the starting point and end point of the race available in `zch_gpx$waypoints`. Note that since the last loop goes 7 times around the finish line, the GPS coordinates are duplicated so we only extract `zch_gpx$waypoints[1, ]` and `zch_gpx$waypoints[2, ]`. Those data are given to the `addCircleMarkers()` function, which allows to pass extra information like popups or labels.\nFinally, I wanted to highlight the 4 most significant climbs of this tour:\n\n - Buch am Irchel: 4.83km at 4.2%.\n - Kyburg: 1.28km at 10.3%.\n - Binz: 3.7km at 4.4%.\n - Witikon: 2.63km at 5.3%.\n\nI first had to locate the exact coordinates of each climb (the marker is put at the top). That's the reason why you can see a few JavaScript lines at the end of the script. 
This is a helper passed to `htmlwidgets::onRender()`, which allowed me to click on the map and get the coordinates in an alert window.\n\n```js\nfunction(x, el, data) {\n var map = this;\n map.on('click', function(e) {\n var coord = e.latlng;\n var lat = coord.lat;\n var lng = coord.lng;\n alert('You clicked the map at latitude: ' + lat + ' and longitude: ' + lng);\n });\n}\n```\n\nI then copied the results and passed them to `addMarkers()`. I faced some challenges while trying to get the markers render well when zooming in and out. Be careful to fix the X and Y anchors and specify the size of the icon you use:\n\n```r\nicon = list(\n iconUrl = \"https://www.vanuatubeachbar.com/wp-content/uploadleaflet-maps-marker-icons/mountains.png\",\n iconWidth = 32,\n iconHeight = 37,\n iconAnchorX = 0,\n iconAnchorY = 0\n)\n```\n\nThe above setting ensures that at any level of zoom, the icon stays on the trace.\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nleaflet(zch_gpx$routes[[1]]) |>\n addTiles(\n urlTemplate = \"https://{s}.basemaps.cartocdn.com/light_all/{z}/{x}/{y}{r}.png\",\n attribution = '© OpenStreetMap contributors © CARTO',\n options = tileOptions(\n subdomains = \"abcd\",\n\t maxZoom = 20\n )\n ) |>\n addPolylines(lat = ~Latitude, lng = ~Longitude, color = \"#000000\", opacity = 0.8, weight = 3) |>\n addCircleMarkers(data = zch_gpx$waypoints[1, ], lat = ~Latitude, lng = ~Longitude, color = \"#3eaf15\", opacity = 0.8, weight = 5, radius = 10, label = \"Start of race\") |>\n addCircleMarkers(data = zch_gpx$waypoints[2, ], lat = ~Latitude, lng = ~Longitude, color = \"#e73939\", opacity = 0.8, weight = 5, radius = 10, label = \"End of race\") |>\n addMarkers(\n lng = 8.64389380440116,\n lat = 47.5413932128899,\n icon = list(\n iconUrl = \"https://www.vanuatubeachbar.com/wp-content/uploads/leaflet-maps-marker-icons/mountains.png\",\n iconWidth = 32,\n iconHeight = 37,\n iconAnchorX = 0,\n iconAnchorY = 0\n ),\n label = \"Buch am Irchel: 4.83km at 4.2% **\"\n ) |>\n addMarkers(\n lng = 8.743660245090725,\n lat = 47.45665840019784,\n icon = list(\n iconUrl = \"https://www.vanuatubeachbar.com/wp-content/uploads/leaflet-maps-marker-icons/mountains.png\",\n iconWidth = 32,\n iconHeight = 37,\n iconAnchorX = 0,\n iconAnchorY = 0\n ),\n label = \"Kyburg: 1.28km at 10.3% ****\"\n ) |>\n addMarkers(\n lng = 8.624014738015832,\n lat = 47.351512429613024,\n icon = list(\n iconUrl = \"https://www.vanuatubeachbar.com/wp-content/uploads/leaflet-maps-marker-icons/mountains.png\",\n iconWidth = 32,\n iconHeight = 37,\n iconAnchorX = 0,\n iconAnchorY = 0\n ),\n label = \"Maur-Binz: 3.7km at 4.4% **\"\n ) |>\n addMarkers(\n lng = 8.607488349080088,\n lat = 47.36219723777833,\n icon = list(\n iconUrl = \"https://www.vanuatubeachbar.com/wp-content/uploads/leaflet-maps-marker-icons/mountains.png\",\n iconWidth = 32,\n iconHeight = 37,\n iconAnchorX = 0,\n iconAnchorY = 0\n ),\n label = \"ZurichbergStrasse/Witikon: 2.63km at 5.3% **\"\n ) |>\n htmlwidgets::onRender(\n \"function(x, el, data) {\n var map = this;\n map.on('click', function(e) {\n var coord = e.latlng;\n var lat = coord.lat;\n var lng = coord.lng;\n console.log('You clicked the map at latitude: ' + lat + ' and longitude: ' + lng);\n });\n }\" \n )\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n\n```\n\n:::\n:::\n\n\n\n\nWhile the main climbs aren't particularly difficult, except Kyburg, repeating them 7 times after more than 200km will certainly be challenging. Besides, we can't judge a climb by its average gradient alone: a climb may combine a rather flat section with very steep ramps, making it harder than a steady gradient would suggest. That's the case for the Buch am Irchel climb.\n\n
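To put those average grades in perspective, here is a quick back-of-the-envelope sketch (my own addition, using the figures from the climb list above): multiplying each climb's distance by its average grade approximates the total elevation gained.

```r
# Approximate gain implied by each climb's length and average grade
climbs <- data.frame(
  name  = c("Buch am Irchel", "Kyburg", "Binz", "Witikon"),
  km    = c(4.83, 1.28, 3.7, 2.63),
  grade = c(4.2, 10.3, 4.4, 5.3)
)
# gain (m) = distance (m) * grade (%) / 100
transform(climbs, gain_m = round(km * 1000 * grade / 100))
```

Kyburg gains roughly 130m, but packs it into 1.3km, which is why it is rated the hardest of the four.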

Buch am Irchel, Berg Am Irchel, Switzerland (Distance: 4.8 km, Elevation: 201 m, Avg. Grade: 4.3 %)
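To see this profile variability directly in the data, here is a minimal sketch (my own, not part of the original analysis). It assumes the `zch_gpx` object read earlier; the haversine step distance and the 20-point smoothing window are arbitrary choices to tame GPS noise:

```r
route <- zch_gpx$routes[[1]]

# Haversine distance (m) between consecutive GPS points
lat <- route$Latitude * pi / 180
lon <- route$Longitude * pi / 180
a <- sin(diff(lat) / 2)^2 +
  cos(head(lat, -1)) * cos(tail(lat, -1)) * sin(diff(lon) / 2)^2
step_m <- 2 * 6371000 * asin(sqrt(a))

# Per-step grade (%); floor the step at 1 m to avoid spikes when nearly stationary
grade <- 100 * diff(route$Elevation) / pmax(step_m, 1)

# Moving average over 20 points to reveal sustained ramps
smooth_grade <- stats::filter(grade, rep(1 / 20, 20), sides = 2)
summary(as.numeric(smooth_grade))
```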
\n\nAs a side note, the current GPX file indicates a duration of 8 hours, which gives roughly 35km/h average speed, substantially lower than what the pro will do during the race.\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nduration <- max(zch_gpx$routes[[1]]$Time) - min(zch_gpx$routes[[1]]$Time)\navg_speed <- 275 / as.numeric(duration)\n```\n:::\n\n\n\n\n### What about the elevation profile?\n\nThe previous map does not say much about the __elevation profile__. The cumulated positive elevation is obtained by summing the __elevation difference__ between 2 __consecutive__ time points, only taking __positive__ results:\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\ngain <- 0\ni <- 1\nn_iter <- nrow(zch_gpx$routes[[1]]) - 1\nwhile (i <= n_iter) {\n current_elevation <- zch_gpx$routes[[1]][i, \"Elevation\"]\n new_elevation <- zch_gpx$routes[[1]][i + 1, \"Elevation\"]\n diff <- new_elevation - current_elevation\n if (diff > 0) gain <- gain + diff\n i <- i + 1\n}\n```\n:::\n\n\n\n\nNote that the website gain is officially 4470m whereas ours is 4492m. This difference might be explained by the usage of different __smoothing algorithms__ for the [elevation](https://support.strava.com/hc/en-us/articles/216919447-Elevation). Funnily, we all had different bike computers and none of us had the same elevation result at the end of the ride.\n\nWe split the trace into 2 parts. The first loop takes place around Winterthur, north of Zurich. Then, a transition leads to the __city loop__, which is repeated 7 times.\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nrace_route <- zch_gpx$routes[[1]] |>\n filter(Time <= \"2023-11-03 09:13:52\")\ncity_circuit <- zch_gpx$routes[[1]] |>\n filter(Time > \"2023-11-03 09:13:52\")\nggplot() +\n geom_area(data = race_route, aes(x = Time, y = Elevation), fill = \"darkblue\") +\n geom_area(data = city_circuit, aes(x = Time, y = Elevation), fill = \"darkred\") +\n labs(\n title = \"Zurich UCI 2024 Elevation profile\",\n subtitle = \"men elite race\",\n caption = sprintf(\"Cumulated elevation: + %sm\", round(gain))\n ) +\n ylab(\"Elevation (m)\") +\n theme(\n axis.title.x = element_blank(),\n axis.text.x = element_blank(),\n axis.ticks.x = element_blank()\n )\n```\n\n::: {.cell-output-display}\n![](index_files/figure-html/map-elevation-1.png){width=672}\n:::\n:::\n\n\n\n\nAs you can see, altough the race never goes above 700m, we manage to reach 4500m elevation gain.\n\n### A 3D elevation profile with rayshader\n\nWe could cumulate both information about elevation and x and y coordinates to get the __3D profile__ with [`rayshader`](https://www.rayshader.com/). The `make_3d_plot` function first creates a `ggplot` object using coordinates and `color_col` as color aesthetic. We set `color_col` to `Elevation` and hide the x and y axis information (as they won't be very useful). Could you guess where `(47.5, 8.4)` is? Probably not :). 
This plot object is passed to `plot_gg`, to proceed to the 3D conversion.\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nmake_3d_plot <- function(dat, color_col, legend_title, scale = 150, show_legend = TRUE) {\n tmp_3d_plot <- ggplot(dat) + \n geom_point(aes(x = Longitude, y = Latitude, color = .data[[color_col]])) +\n scale_color_continuous(type = \"viridis\", limits = c(0, max(dat[[color_col]])), name = legend_title) +\n theme(\n axis.title.x = element_blank(),\n axis.text.x = element_blank(),\n axis.ticks.x = element_blank(),\n axis.title.y = element_blank(),\n axis.text.y = element_blank(),\n axis.ticks.y = element_blank(),\n legend.position = if (!show_legend) \"none\"\n )\n plot_gg(tmp_3d_plot, width = 3.5, multicore = TRUE, windowsize = c(1600, 1000), sunangle = 225, zoom = 0.40, phi = 15, theta = 80, scale = scale)\nrender_snapshot()\n}\n\nmake_3d_plot(zch_gpx$routes[[1]], \"Elevation\", \"Elevation (m)\", show_legend = FALSE)\n```\n\n::: {.cell-output-display}\n![](index_files/figure-html/3d-profil-1.png){width=672}\n:::\n:::\n\n\n\n\nWouldn't it be nice to be able to move the plot around and have different angles? We can do this with the help of [`ggrgl`](https://coolbutuseless.github.io/package/ggrgl/articles/geom-path-3d.html), particularly the `geom_path_3d()`. We extract some Zurich's canton cities and project them on the map with `geom_point()` and `geom_text()` to add annotations. Since this plot does not render with quarto, we included an image after the code.\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# https://simplemaps.com/data/ch-cities\nch_cities <- readr::read_csv(\"ch.csv\") \nzh_cities <- ch_cities |> filter(\n city %in% c(\"Zürich\", \"Winterthur\", \"Binz\", \"Uster\", \"Dübendorf\", \"Küsnacht\")\n)\np <- ggplot(zch_gpx$routes[[1]]) +\n geom_path_3d(\n aes(Longitude, Latitude, z = Elevation),\n extrude = TRUE, extrude_edge_colour = 'grey20', extrude_face_fill = 'grey80',\n extrude_edge_alpha = 0.2) +\n geom_text(data = zh_cities, aes(lng, lat, label = city)) + \n geom_point(data = zh_cities, aes(lng, lat), colour = 'red') + \n theme_ggrgl() + \n labs(\n title = \"Elevation 3D profile\",\n subtitle = \"World UCI road men elite 2024, Zurich\"\n )\ndevoutrgl::rgldev(fov = 30, view_angle = -30)\np\n```\n:::\n\n\n\n\n\"rgl\n\nOn the left, you can notice the steepest climb (Kyburg), that connects the two loops. I highly recommend you to play around locally so you can try out different angles and explore\nthe different parts.\n\nOverall, 4490m for 275km is definitely not the most hilly ride for professional athletes, compared to the amateur Alpen Brevet Platinium, which offers 275km for 8907m elevation, just a tiny bit higher than __Mount Everest__. Here again, it all depends on the average speed at which this race will go. I personally expect a value between 40-42km/h, depending on the weather conditions (rain, wind, ...). Let's see ...\n\n## The ride\n\n### FIT TO CSV\n\nIn the below section, we analyse few logs of my ride, which are extracted from the my bike GPS `fit` file. We first convert this file to a format that R can read, for instance `csv`. 
I used this [website](https://gotoes.org/strava/convert_fit_files_to_csv.php), but you can also find cli alternatives like [here](https://developer.garmin.com/fit/fitcsvtool/).\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# I found this R package but could not make it work\n# Given that it is 5 years old.\n#remotes::install_github(\"muschellij2/fit2csv\")\nres <- readr::read_csv(\"2024-09-08-063850.csv\")\nhead(res)\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\n# A tibble: 6 × 123\n GOTOES_CSV timestamp position_lat position_long altitude heart_rate\n \n1 NA 2024-09-08 00:38:50 47.4 8.55 438. NA \n2 NA 2024-09-08 00:38:51 47.4 8.55 438. NA \n3 NA 2024-09-08 00:38:52 47.4 8.55 438. NA \n4 NA 2024-09-08 00:38:53 47.4 8.55 438. NA \n5 NA 2024-09-08 00:38:54 47.4 8.55 438. NA \n6 NA 2024-09-08 00:38:55 47.4 8.55 438. NA \n# ℹ 117 more variables: cadence , distance , speed ,\n# power , compressed_speed_distance , grade ,\n# resistance , time_from_course , cycle_length ,\n# temperature , ...17 , ...18 , ...19 , speed_1s ,\n# cycles , total_cycles , ...23 , ...24 , ...25 ,\n# ...26 , ...27 , ...28 , ...29 , ...30 ,\n# compressed_accumulated_power , accumulated_power , …\n```\n\n\n:::\n:::\n\n\n\n\nWe select only few interesting columns for the analysis and also remove the 43 minutes coffee break we took in the middle of the ride in Kyburg's [castle](https://www.myswitzerland.com/en-ch/experiences/kyburg-castle-museum/):\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nres <- res |>\n tibble::rowid_to_column() |>\n mutate(\n Latitude = position_lat,\n Longitude = position_long,\n distance = distance / 1000,\n timestamp = case_when(rowid >= 9017 ~ timestamp - 43 * 60, .default = timestamp)\n ) |>\n select(timestamp, cadence, distance, speed, grade, power, temperature, calories, altitude, Latitude, Longitude)\n```\n:::\n\n\n\n\n### Data summary\n\nBelow are some __continuous__ variable summary using `gtsummary`. Notice the maximum gradient which was __18.2%__! The overall ride has a 1.1% grade, which means there is more climbings than downhills.\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nres |> \n tbl_summary(include = c(speed, cadence, grade, power), type = all_continuous() ~ \"continuous2\",\n statistic = all_continuous() ~ c(\"{mean}\", \"{min}\", \"{max}\"), \n missing = \"no\",\n label = c(speed ~ \"Speed (km/h)\", cadence ~ \"Cadence (RPM)\", grade ~ \"Grade (%)\", power ~ \"Power (Watts)\")\n ) |>\n modify_header(label ~ \"**Variable**\") |>\n modify_caption(\"**Table 1. Ride summary**\") |>\n bold_labels()\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
**Table 1. Ride summary** (N = 17,356)

| Variable | Mean | Minimum | Maximum |
|---------------|------|---------|---------|
| Speed (km/h) | 27 | 0 | 68 |
| Cadence (RPM) | 61 | 0 | 110 |
| Grade (%) | 1.1 | -14.4 | 18.2 |
| Power (Watts) | 151 | 0 | 701 |
\n```\n\n:::\n:::\n\n\n\n\n### Power analysis\n\n#### Background\n\n__Power__ measures how much __work__ is done at a given time (in our case, on the bike). It is expressed in __Watts__ (W. `1W = 1J/s`). Power is expressed as follows:\n\n```r\nP = Strength x velocity\n```\n\nThere are 2 ways to rise the power. At low __velocity__ by putting more __strength__ or increase the velocity while applying the same strength.\n\nIn cycling, we also calculate the __Power/Body Weight__ ratio, as from physiological point of view, the more muscles, the more theoritical power. This is important in the climbs, where, because of the gravity, the weight becomes more important as the gradient increases. Therefore, taking cyclist 1 (bodyweight + bike 60kg) and cyclist 2 (bodyweight + bike 90kg) side by side on the same climb with similar bikes, cyclist 2 has to produce more power to climb at the same speed as cyclist 1.\n\nTherefore, a 58kg pro cyclist climber and 100kg pro track cyclist may have similar power ratio for a given duration, even though the former will likely be better at longer efforts. Talking about power without considering the effort __duration__ does not make much sense. World class women cyclists can sustain > 19W/kg during 5s (1360 for a 70kg athlete), men cyclists can sustain 24 W/kg during 5 seconds (2160W output for 90kg).\n\nWe won't have time to cover all the theory, but keep in mind that knowing your __threshold power__ (FTP) is critical for successful training. This is the power you can theoretically sustain for 1h. Based on this, one can establish __power zones__ to plan the training. For profesional riders, FTP are respectively > 5W/kg for women and > 5.8 W/kg for men. You can find more [here](https://www.highnorth.co.uk/articles/power-profiling-cycling).\n\n#### Results\n\nTo proceed, we create a plot showing the __power__ as a function of the __distance__. We also add the elevation profile in the background with `geom_area()` with a rather transparent alpha setting, so the user can focus on the power data. We add some `geom_smooth()` to see the relation between the power and distance ()`power ~ distance`) and display the mean power on an horizontal line with `geom_hline()`. On the second plot, we want to show the power __distribution__ and leverage `geom_histogram`, the idea being to compare the mean power to the threshold power.\n\nThe power chart shows that my power is decreasing over time, not a surprise. There is an effect of the __fatigue__ but also the __weather__ conditions, as the last part of the ride was in the city and under heavy rain and we had to adjust the speed not to take too much risks. Besides, when looking at the power distribution, we notice that the average power is significantly below my threshold power (FTP), which is my theoretical maximum power for 1h. 
For a 5h ride, this makes sense as one wants to save energy to last as long as possible.\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nmake_time_plot <- function(dat, col, show_elevation = TRUE, elevation_scale = 1) {\n p <- ggplot(dat, aes(x = distance, y = .data[[col]])) +\n geom_line() +\n geom_smooth(method = \"lm\") +\n geom_hline(yintercept = mean(dat[[col]]), linetype = \"dashed\", color = \"darkred\")\n\n if (show_elevation) p + geom_area(aes(x = distance, y = altitude / elevation_scale), alpha = 0.15) else p\n}\n\nmake_distrib_plot <- function(dat, col) {\n ggplot(dat, aes(x = .data[[col]])) +\n geom_histogram() +\n geom_vline(xintercept = 250, linetype = \"dashed\", color = \"darkgreen\") +\n geom_vline(xintercept = mean(dat[[col]]), linetype = \"dashed\", color = \"darkred\")\n}\n\npower_time <- make_time_plot(res, \"power\") +\n annotate(\n \"text\",\n x = 10,\n y = 400,\n label = \"Average power\",\n fontface = \"bold\",\n color = \"darkred\",\n size = 4.5\n ) +\n ggtitle(\"Power over time\") +\n xlab(\"Distance (km)\") +\n ylab(\"Power (Watts)\")\n\npower_distrib <- make_distrib_plot(res, \"power\") +\n annotate(\n \"text\",\n x = 310,\n y = 2500,\n label = \"Threshold power (FTP)\",\n fontface = \"bold\",\n color = \"darkgreen\",\n size = 2.5\n ) +\n annotate(\n \"text\",\n x = mean(res$power) + 20 - 60,\n y = 2500,\n label = \"Average power\",\n fontface = \"bold\",\n color = \"darkred\",\n size = 2.5\n ) +\n theme(\n axis.title.y = element_blank(),\n axis.text.y = element_blank(),\n axis.ticks.y = element_blank()\n ) +\n ggtitle(\"Power distribution\") +\n xlab(\"Power (Watts)\")\n\npower_time / power_distrib + plot_annotation(\n title = \"Power data\",\n subtitle = \"Elevation data shown in the background\",\n caption = \"More about power: https://support.strava.com/hc/en-us/articles/216918457-Power\"\n)\n```\n\n::: {.cell-output-display}\n![](index_files/figure-html/power-time-serie-1.png){width=672}\n:::\n:::\n\n\n\n\n### Speed\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nmake_time_plot(res, \"speed\", elevation_scale = 10) +\n ylab(\"Speed (km/h)\") +\n xlab(\"Distance (km)\")\n```\n\n::: {.cell-output-display}\n![](index_files/figure-html/speed-ts-1.png){width=672}\n:::\n:::\n\n\n\n\nThe ride was covered at 27km/h average speed on an open road with wind and rain, definitely not the best conditions.\n\nInterestingly, I found this nice [article](https://www.gribble.org/cycling/power_v_speed.html) about the relation between power and speed. Overall, the simulator predicts 150W to maintain an average speed of 27km/h with a 0.5% gradient coefficient, not far from what we have here. 
It's rather challenging to account for the wind, as it can sometimes help or makes things more challenging.\n\n### Calories\n\nDuring that ride, I consumed about 2549 calories, which corresponds to the average daily energy needs for an adult man.\n\n## Conclusion\n\nThis was a lot of fun to ride part of this upcoming event, even more to analyse the underlying data.\n", + "markdown": "---\nlayout: post\ntitle: \"2024 road and para-cycling road world championships: preliminaRy analysis\"\nimage: logo.png\nauthor: David Granjon\ndate: '2024-09-10'\ncategories:\n - sport\n - R\nformat: \n html:\n code-fold: 'show'\nfilters:\n - shinylive\n---\n\n\n\n\n\n![](logo.png){width=25% fig-align=\"center\"}\n\n\n\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nlibrary(gpx)\nlibrary(dplyr)\nlibrary(leaflet)\nlibrary(ggplot2)\nlibrary(patchwork)\nlibrary(rayshader)\nlibrary(ggrgl)\nlibrary(gtsummary)\n\nlibrary(rgl)\noptions(rgl.useNULL = TRUE)\nsetupKnitr(autoprint = TRUE)\n```\n:::\n\n\n\n\n\nFrom Sept 21 to Sept 29, Zurich will welcome the 2024 road and para-cycling road world [championships](https://zurich2024.com/en/). To mark the occasion, my friends and I went to do the 2 first loops (\"only\" 140km, 1700m elevation) of the Elite Mens circuit that will start from [Winterthur](https://zurich2024.com/en/rennstrecken/winterthur-zurich/) on Sept 29. 273km and 4470m of pure pleasure! I am not sure whether riders will have time to enjoy the view. At least I hope they have a better weather than us.\n\n## Circuit overview\n\n\n### Get the GPX file\n\nThe road circuit is available as __GPX__ [format](https://zurich2024.com/wp-content/uploads/2024/08/GPX-22-Winterthur-Zurich-1.gpx), which can be imported by any route planner like Komoot or Strava ... or with R :).\n\nThere are various way to read such format in R, as shown in this other [article](https://www.appsilon.com/post/r-gpx-files). For this blog post, we leverage the `gpx` [package](https://www.rdocumentation.org/packages/tmaptools/versions/2.0/topics/read_GPX):\n\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nzch_gpx <- read_gpx(\"GPX-22-Winterthur-Zurich-1.gpx\")\nglimpse(zch_gpx)\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\nList of 3\n $ routes :List of 1\n ..$ :'data.frame':\t10687 obs. of 5 variables:\n .. ..$ Elevation : num [1:10687] 438 438 438 438 438 ...\n .. ..$ Time : POSIXct[1:10687], format: \"2023-11-03 06:08:27\" \"2023-11-03 06:08:29\" ...\n .. ..$ Latitude : num [1:10687] 47.5 47.5 47.5 47.5 47.5 ...\n .. ..$ Longitude : num [1:10687] 8.72 8.72 8.72 8.72 8.72 ...\n .. ..$ extensions: logi [1:10687] NA NA NA NA NA NA ...\n $ tracks :List of 1\n ..$ :'data.frame':\t0 obs. of 4 variables:\n .. ..$ Elevation: logi(0) \n .. ..$ Time : logi(0) \n .. ..$ Latitude : logi(0) \n .. ..$ Longitude: logi(0) \n $ waypoints:'data.frame':\t9 obs. of 6 variables:\n ..$ Elevation : num [1:9] 464 411 411 411 411 ...\n ..$ Time : POSIXct[1:9], format: NA NA ...\n ..$ Latitude : num [1:9] 47.5 47.4 47.4 47.4 47.4 ...\n ..$ Longitude : num [1:9] 8.76 8.55 8.55 8.55 8.55 ...\n ..$ Name : chr [1:9] \"km 0\" \"Info\" \"Info\" \"Info\" ...\n ..$ Description: chr [1:9] NA NA NA NA ...\n```\n\n\n:::\n:::\n\n\n\n\n\nWe obtain a list containing 3 dataframes, namely `routes`, `tracks` and `waypoints`.\n\n### Visualize the route\n\nIn the following, we can visualize these data on an __interactive map__. To do so, I chose the `leaflet` package. First, we pass the data to `leaflet()`, then we select a __map provider__ with `addTiles()`. 
I like to use the a rather light one as I want the user to focus on the route trace and not on any single mountain or village. Therefore, I went for the `CartoDB.Positron` tiles, which you can test [here](https://leaflet-extras.github.io/leaflet-providers/preview/). The trace is injected with `addPolylines`, passing the `Latitude` and `Longitude` columns of our dataset, as well as few styling parameters such as color, line weight and opacity.\n\nThen, we add the starting point and end point of the race available in `zch_gpx$waypoints`. Note that since the last loop goes 7 times around the finish line, the GPS coordinates are duplicated so we only extract `zch_gpx$waypoints[1, ]` and `zch_gpx$waypoints[2, ]`. Those data are given to the `addCircleMarkers()` function, which allows to pass extra information like popups or labels.\nFinally, I wanted to highlight the 4 most significant climbs of this tour:\n\n - Buch am Irchel: 4.83km at 4.2%.\n - Kyburg: 1.28km at 10.3%.\n - Binz: 3.7km at 4.4%.\n - Witikon: 2.63km at 5.3%.\n\nI first had to locate the exact coordinates of each climb (the marker is put at the top). That's the reason why you can see a few JavaScript lines at the end of the script. This is a helper passed to `htmlwidgets::onRender()`, which allowed me to click on the map and get the coordinates in an alert window.\n\n```js\nfunction(x, el, data) {\n var map = this;\n map.on('click', function(e) {\n var coord = e.latlng;\n var lat = coord.lat;\n var lng = coord.lng;\n alert('You clicked the map at latitude: ' + lat + ' and longitude: ' + lng);\n });\n}\n```\n\nI then copied the results and passed them to `addMarkers()`. I faced some challenges while trying to get the markers render well when zooming in and out. Be careful to fix the X and Y anchors and specify the size of the icon you use:\n\n```r\nicon = list(\n iconUrl = \"https://www.vanuatubeachbar.com/wp-content/uploadleaflet-maps-marker-icons/mountains.png\",\n iconWidth = 32,\n iconHeight = 37,\n iconAnchorX = 0,\n iconAnchorY = 0\n)\n```\n\nThe above setting ensures that at any level of zoom, the icon stays on the trace.\n\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nleaflet(zch_gpx$routes[[1]]) |>\n addTiles(\n urlTemplate = \"https://{s}.basemaps.cartocdn.com/light_all/{z}/{x}/{y}{r}.png\",\n attribution = '© OpenStreetMap contributors © CARTO',\n options = tileOptions(\n subdomains = \"abcd\",\n\t maxZoom = 20\n )\n ) |>\n addPolylines(lat = ~Latitude, lng = ~Longitude, color = \"#000000\", opacity = 0.8, weight = 3) |>\n addCircleMarkers(data = zch_gpx$waypoints[1, ], lat = ~Latitude, lng = ~Longitude, color = \"#3eaf15\", opacity = 0.8, weight = 5, radius = 10, label = \"Start of race\") |>\n addCircleMarkers(data = zch_gpx$waypoints[2, ], lat = ~Latitude, lng = ~Longitude, color = \"#e73939\", opacity = 0.8, weight = 5, radius = 10, label = \"End of race\") |>\n addMarkers(\n lng = 8.64389380440116,\n lat = 47.5413932128899,\n icon = list(\n iconUrl = \"https://www.vanuatubeachbar.com/wp-content/uploads/leaflet-maps-marker-icons/mountains.png\",\n iconWidth = 32,\n iconHeight = 37,\n iconAnchorX = 0,\n iconAnchorY = 0\n ),\n label = \"Buch am Irchel: 4.83km at 4.2% **\"\n ) |>\n addMarkers(\n lng = 8.743660245090725,\n lat = 47.45665840019784,\n icon = list(\n iconUrl = \"https://www.vanuatubeachbar.com/wp-content/uploads/leaflet-maps-marker-icons/mountains.png\",\n iconWidth = 32,\n iconHeight = 37,\n iconAnchorX = 0,\n iconAnchorY = 0\n ),\n label = \"Kyburg: 1.28km at 10.3% ****\"\n ) |>\n addMarkers(\n lng = 
8.624014738015832,\n lat = 47.351512429613024,\n icon = list(\n iconUrl = \"https://www.vanuatubeachbar.com/wp-content/uploads/leaflet-maps-marker-icons/mountains.png\",\n iconWidth = 32,\n iconHeight = 37,\n iconAnchorX = 0,\n iconAnchorY = 0\n ),\n label = \"Maur-Binz: 3.7km at 4.4% **\"\n ) |>\n addMarkers(\n lng = 8.607488349080088,\n lat = 47.36219723777833,\n icon = list(\n iconUrl = \"https://www.vanuatubeachbar.com/wp-content/uploads/leaflet-maps-marker-icons/mountains.png\",\n iconWidth = 32,\n iconHeight = 37,\n iconAnchorX = 0,\n iconAnchorY = 0\n ),\n label = \"ZurichbergStrasse/Witikon: 2.63km at 5.3% **\"\n ) |>\n htmlwidgets::onRender(\n \"function(x, el, data) {\n var map = this;\n map.on('click', function(e) {\n var coord = e.latlng;\n var lat = coord.lat;\n var lng = coord.lng;\n console.log('You clicked the map at latitude: ' + lat + ' and longitude: ' + lng);\n });\n }\" \n )\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n\n```\n\n:::\n:::\n\n\n\n\n\nWhile the main climbs aren't particularly difficult, except Kyburg, repeating them 7 times after more than 200km will certainly be challenging. Besides, we can't judge a climb by its average gradient alone: a climb may combine a rather flat section with very steep ramps, making it harder than a steady gradient would suggest. That's the case for the Buch am Irchel climb.\n\n
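To put those average grades in perspective, here is a quick back-of-the-envelope sketch (my own addition, using the figures from the climb list above): multiplying each climb's distance by its average grade approximates the total elevation gained.

```r
# Approximate gain implied by each climb's length and average grade
climbs <- data.frame(
  name  = c("Buch am Irchel", "Kyburg", "Binz", "Witikon"),
  km    = c(4.83, 1.28, 3.7, 2.63),
  grade = c(4.2, 10.3, 4.4, 5.3)
)
# gain (m) = distance (m) * grade (%) / 100
transform(climbs, gain_m = round(km * 1000 * grade / 100))
```

Kyburg gains roughly 130m, but packs it into 1.3km, which is why it is rated the hardest of the four.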

Buch am Irchel, Berg Am Irchel, Switzerland (Distance: 4.8 km, Elevation: 201 m, Avg. Grade: 4.3 %)
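To see this profile variability directly in the data, here is a minimal sketch (my own, not part of the original analysis). It assumes the `zch_gpx` object read earlier; the haversine step distance and the 20-point smoothing window are arbitrary choices to tame GPS noise:

```r
route <- zch_gpx$routes[[1]]

# Haversine distance (m) between consecutive GPS points
lat <- route$Latitude * pi / 180
lon <- route$Longitude * pi / 180
a <- sin(diff(lat) / 2)^2 +
  cos(head(lat, -1)) * cos(tail(lat, -1)) * sin(diff(lon) / 2)^2
step_m <- 2 * 6371000 * asin(sqrt(a))

# Per-step grade (%); floor the step at 1 m to avoid spikes when nearly stationary
grade <- 100 * diff(route$Elevation) / pmax(step_m, 1)

# Moving average over 20 points to reveal sustained ramps
smooth_grade <- stats::filter(grade, rep(1 / 20, 20), sides = 2)
summary(as.numeric(smooth_grade))
```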
\n\nAs a side note, the current GPX file indicates a duration of 8 hours, which gives roughly 35km/h average speed, substantially lower than what the pro will do during the race.\n\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nduration <- max(zch_gpx$routes[[1]]$Time) - min(zch_gpx$routes[[1]]$Time)\navg_speed <- 275 / as.numeric(duration)\n```\n:::\n\n\n\n\n\n### What about the elevation profile?\n\nThe previous map does not say much about the __elevation profile__. The cumulated positive elevation is obtained by summing the __elevation difference__ between 2 __consecutive__ time points, only taking __positive__ results:\n\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\ngain <- 0\ni <- 1\nn_iter <- nrow(zch_gpx$routes[[1]]) - 1\nwhile (i <= n_iter) {\n current_elevation <- zch_gpx$routes[[1]][i, \"Elevation\"]\n new_elevation <- zch_gpx$routes[[1]][i + 1, \"Elevation\"]\n diff <- new_elevation - current_elevation\n if (diff > 0) gain <- gain + diff\n i <- i + 1\n}\n```\n:::\n\n\n\n\n\nNote that the website gain is officially 4470m whereas ours is 4492m. This difference might be explained by the usage of different __smoothing algorithms__ for the [elevation](https://larahamilton.com/strava-elevation/). Funnily, we all had different bike computers and none of us had the same elevation result at the end of the ride.\n\nWe split the trace into 2 parts. The first loop takes place around Winterthur, north of Zurich. Then, a transition leads to the __city loop__, which is repeated 7 times.\n\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nrace_route <- zch_gpx$routes[[1]] |>\n filter(Time <= \"2023-11-03 09:13:52\")\ncity_circuit <- zch_gpx$routes[[1]] |>\n filter(Time > \"2023-11-03 09:13:52\")\nggplot() +\n geom_area(data = race_route, aes(x = Time, y = Elevation), fill = \"darkblue\") +\n geom_area(data = city_circuit, aes(x = Time, y = Elevation), fill = \"darkred\") +\n labs(\n title = \"Zurich UCI 2024 Elevation profile\",\n subtitle = \"men elite race\",\n caption = sprintf(\"Cumulated elevation: + %sm\", round(gain))\n ) +\n ylab(\"Elevation (m)\") +\n theme(\n axis.title.x = element_blank(),\n axis.text.x = element_blank(),\n axis.ticks.x = element_blank()\n )\n```\n\n::: {.cell-output-display}\n![](index_files/figure-html/map-elevation-1.png){width=672}\n:::\n:::\n\n\n\n\n\nAs you can see, altough the race never goes above 700m, we manage to reach 4500m elevation gain.\n\n### A 3D elevation profile with rayshader\n\nWe could cumulate both information about elevation and x and y coordinates to get the __3D profile__ with [`rayshader`](https://www.rayshader.com/). The `make_3d_plot` function first creates a `ggplot` object using coordinates and `color_col` as color aesthetic. We set `color_col` to `Elevation` and hide the x and y axis information (as they won't be very useful). Could you guess where `(47.5, 8.4)` is? Probably not :). 
This plot object is passed to `plot_gg`, to proceed to the 3D conversion.\n\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nmake_3d_plot <- function(dat, color_col, legend_title, scale = 150, show_legend = TRUE) {\n tmp_3d_plot <- ggplot(dat) + \n geom_point(aes(x = Longitude, y = Latitude, color = .data[[color_col]])) +\n scale_color_continuous(type = \"viridis\", limits = c(0, max(dat[[color_col]])), name = legend_title) +\n theme(\n axis.title.x = element_blank(),\n axis.text.x = element_blank(),\n axis.ticks.x = element_blank(),\n axis.title.y = element_blank(),\n axis.text.y = element_blank(),\n axis.ticks.y = element_blank(),\n legend.position = if (!show_legend) \"none\"\n )\n plot_gg(tmp_3d_plot, width = 3.5, multicore = TRUE, windowsize = c(1600, 1000), sunangle = 225, zoom = 0.40, phi = 15, theta = 80, scale = scale)\nrender_snapshot()\n}\n\nmake_3d_plot(zch_gpx$routes[[1]], \"Elevation\", \"Elevation (m)\", show_legend = FALSE)\n```\n\n::: {.cell-output-display}\n![](index_files/figure-html/3d-profil-1.png){width=672}\n:::\n:::\n\n\n\n\n\nWouldn't it be nice to be able to move the plot around and have different angles? We can do this with the help of [`ggrgl`](https://coolbutuseless.github.io/package/ggrgl/articles/geom-path-3d.html), particularly the `geom_path_3d()`. We extract some Zurich's canton cities and project them on the map with `geom_point()` and `geom_text()` to add annotations. Since this plot does not render with quarto, we included an image after the code.\n\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# https://simplemaps.com/data/ch-cities\nch_cities <- readr::read_csv(\"ch.csv\") \nzh_cities <- ch_cities |> filter(\n city %in% c(\"Zürich\", \"Winterthur\", \"Binz\", \"Uster\", \"Dübendorf\", \"Küsnacht\")\n)\np <- ggplot(zch_gpx$routes[[1]]) +\n geom_path_3d(\n aes(Longitude, Latitude, z = Elevation),\n extrude = TRUE, extrude_edge_colour = 'grey20', extrude_face_fill = 'grey80',\n extrude_edge_alpha = 0.2) +\n geom_text(data = zh_cities, aes(lng, lat, label = city)) + \n geom_point(data = zh_cities, aes(lng, lat), colour = 'red') + \n theme_ggrgl() + \n labs(\n title = \"Elevation 3D profile\",\n subtitle = \"World UCI road men elite 2024, Zurich\"\n )\ndevoutrgl::rgldev(fov = 30, view_angle = -30)\np\n```\n:::\n\n\n\n\n\n\"rgl\n\nOn the left, you can notice the steepest climb (Kyburg), that connects the two loops. I highly recommend you to play around locally so you can try out different angles and explore\nthe different parts.\n\nOverall, 4490m for 275km is definitely not the most hilly ride for professional athletes, compared to the amateur Alpen Brevet Platinium, which offers 275km for 8907m elevation, just a tiny bit higher than __Mount Everest__. Here again, it all depends on the average speed at which this race will go. I personally expect a value between 40-42km/h, depending on the weather conditions (rain, wind, ...). Let's see ...\n\n## The ride\n\n### FIT TO CSV\n\nIn the below section, we analyse few logs of my ride, which are extracted from the my bike GPS `fit` file. We first convert this file to a format that R can read, for instance `csv`. 
I used this [website](https://gotoes.org/strava/convert_fit_files_to_csv.php), but you can also find cli alternatives like [here](https://developer.garmin.com/fit/fitcsvtool/).\n\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# I found this R package but could not make it work\n# Given that it is 5 years old.\n#remotes::install_github(\"muschellij2/fit2csv\")\nres <- readr::read_csv(\"2024-09-08-063850.csv\")\nhead(res)\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\n# A tibble: 6 × 123\n GOTOES_CSV timestamp position_lat position_long altitude heart_rate\n \n1 NA 2024-09-08 00:38:50 47.4 8.55 438. NA \n2 NA 2024-09-08 00:38:51 47.4 8.55 438. NA \n3 NA 2024-09-08 00:38:52 47.4 8.55 438. NA \n4 NA 2024-09-08 00:38:53 47.4 8.55 438. NA \n5 NA 2024-09-08 00:38:54 47.4 8.55 438. NA \n6 NA 2024-09-08 00:38:55 47.4 8.55 438. NA \n# ℹ 117 more variables: cadence , distance , speed ,\n# power , compressed_speed_distance , grade ,\n# resistance , time_from_course , cycle_length ,\n# temperature , ...17 , ...18 , ...19 , speed_1s ,\n# cycles , total_cycles , ...23 , ...24 , ...25 ,\n# ...26 , ...27 , ...28 , ...29 , ...30 ,\n# compressed_accumulated_power , accumulated_power , …\n```\n\n\n:::\n:::\n\n\n\n\n\nWe select only few interesting columns for the analysis and also remove the 43 minutes coffee break we took in the middle of the ride in Kyburg's [castle](https://www.myswitzerland.com/en-ch/experiences/kyburg-castle-museum/):\n\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nres <- res |>\n tibble::rowid_to_column() |>\n mutate(\n Latitude = position_lat,\n Longitude = position_long,\n distance = distance / 1000,\n timestamp = case_when(rowid >= 9017 ~ timestamp - 43 * 60, .default = timestamp)\n ) |>\n select(timestamp, cadence, distance, speed, grade, power, temperature, calories, altitude, Latitude, Longitude)\n```\n:::\n\n\n\n\n\n### Data summary\n\nBelow are some __continuous__ variable summary using `gtsummary`. Notice the maximum gradient which was __18.2%__! The overall ride has a 1.1% grade, which means there is more climbings than downhills.\n\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nres |> \n tbl_summary(include = c(speed, cadence, grade, power), type = all_continuous() ~ \"continuous2\",\n statistic = all_continuous() ~ c(\"{mean}\", \"{min}\", \"{max}\"), \n missing = \"no\",\n label = c(speed ~ \"Speed (km/h)\", cadence ~ \"Cadence (RPM)\", grade ~ \"Grade (%)\", power ~ \"Power (Watts)\")\n ) |>\n modify_header(label ~ \"**Variable**\") |>\n modify_caption(\"**Table 1. Ride summary**\") |>\n bold_labels()\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
**Table 1. Ride summary** (N = 17,356)

| Variable | Mean | Minimum | Maximum |
|---------------|------|---------|---------|
| Speed (km/h) | 27 | 0 | 68 |
| Cadence (RPM) | 61 | 0 | 110 |
| Grade (%) | 1.1 | -14.4 | 18.2 |
| Power (Watts) | 151 | 0 | 701 |
\n```\n\n:::\n:::\n\n\n\n\n\n### Power analysis\n\n#### Background\n\n__Power__ measures how much __work__ is done at a given time (in our case, on the bike). It is expressed in __Watts__ (W. `1W = 1J/s`). Power is expressed as follows:\n\n```r\nP = Strength x velocity\n```\n\nThere are 2 ways to rise the power. At low __velocity__ by putting more __strength__ or increase the velocity while applying the same strength.\n\nIn cycling, we also calculate the __Power/Body Weight__ ratio, as from physiological point of view, the more muscles, the more theoritical power. This is important in the climbs, where, because of the gravity, the weight becomes more important as the gradient increases. Therefore, taking cyclist 1 (bodyweight + bike 60kg) and cyclist 2 (bodyweight + bike 90kg) side by side on the same climb with similar bikes, cyclist 2 has to produce more power to climb at the same speed as cyclist 1.\n\nTherefore, a 58kg pro cyclist climber and 100kg pro track cyclist may have similar power ratio for a given duration, even though the former will likely be better at longer efforts. Talking about power without considering the effort __duration__ does not make much sense. World class women cyclists can sustain > 19W/kg during 5s (1360 for a 70kg athlete), men cyclists can sustain 24 W/kg during 5 seconds (2160W output for 90kg).\n\nWe won't have time to cover all the theory, but keep in mind that knowing your __threshold power__ (FTP) is critical for successful training. This is the power you can theoretically sustain for 1h. Based on this, one can establish __power zones__ to plan the training. For profesional riders, FTP are respectively > 5W/kg for women and > 5.8 W/kg for men. You can find more [here](https://www.highnorth.co.uk/articles/power-profiling-cycling).\n\n#### Results\n\nTo proceed, we create a plot showing the __power__ as a function of the __distance__. We also add the elevation profile in the background with `geom_area()` with a rather transparent alpha setting, so the user can focus on the power data. We add some `geom_smooth()` to see the relation between the power and distance ()`power ~ distance`) and display the mean power on an horizontal line with `geom_hline()`. On the second plot, we want to show the power __distribution__ and leverage `geom_histogram`, the idea being to compare the mean power to the threshold power.\n\nThe power chart shows that my power is decreasing over time, not a surprise. There is an effect of the __fatigue__ but also the __weather__ conditions, as the last part of the ride was in the city and under heavy rain and we had to adjust the speed not to take too much risks. Besides, when looking at the power distribution, we notice that the average power is significantly below my threshold power (FTP), which is my theoretical maximum power for 1h. 
For a 5h ride, this makes sense as one wants to save energy to last as long as possible.\n\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nmake_time_plot <- function(dat, col, show_elevation = TRUE, elevation_scale = 1) {\n p <- ggplot(dat, aes(x = distance, y = .data[[col]])) +\n geom_line() +\n geom_smooth(method = \"lm\") +\n geom_hline(yintercept = mean(dat[[col]]), linetype = \"dashed\", color = \"darkred\")\n\n if (show_elevation) p + geom_area(aes(x = distance, y = altitude / elevation_scale), alpha = 0.15) else p\n}\n\nmake_distrib_plot <- function(dat, col) {\n ggplot(dat, aes(x = .data[[col]])) +\n geom_histogram() +\n geom_vline(xintercept = 250, linetype = \"dashed\", color = \"darkgreen\") +\n geom_vline(xintercept = mean(dat[[col]]), linetype = \"dashed\", color = \"darkred\")\n}\n\npower_time <- make_time_plot(res, \"power\") +\n annotate(\n \"text\",\n x = 10,\n y = 400,\n label = \"Average power\",\n fontface = \"bold\",\n color = \"darkred\",\n size = 4.5\n ) +\n ggtitle(\"Power over time\") +\n xlab(\"Distance (km)\") +\n ylab(\"Power (Watts)\")\n\npower_distrib <- make_distrib_plot(res, \"power\") +\n annotate(\n \"text\",\n x = 310,\n y = 2500,\n label = \"Threshold power (FTP)\",\n fontface = \"bold\",\n color = \"darkgreen\",\n size = 2.5\n ) +\n annotate(\n \"text\",\n x = mean(res$power) + 20 - 60,\n y = 2500,\n label = \"Average power\",\n fontface = \"bold\",\n color = \"darkred\",\n size = 2.5\n ) +\n theme(\n axis.title.y = element_blank(),\n axis.text.y = element_blank(),\n axis.ticks.y = element_blank()\n ) +\n ggtitle(\"Power distribution\") +\n xlab(\"Power (Watts)\")\n\npower_time / power_distrib + plot_annotation(\n title = \"Power data\",\n subtitle = \"Elevation data shown in the background\",\n caption = \"More about power: https://support.strava.com/hc/en-us/articles/216918457-Power\"\n)\n```\n\n::: {.cell-output-display}\n![](index_files/figure-html/power-time-serie-1.png){width=672}\n:::\n:::\n\n\n\n\n\n### Speed\n\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nmake_time_plot(res, \"speed\", elevation_scale = 10) +\n ylab(\"Speed (km/h)\") +\n xlab(\"Distance (km)\")\n```\n\n::: {.cell-output-display}\n![](index_files/figure-html/speed-ts-1.png){width=672}\n:::\n:::\n\n\n\n\n\nThe ride was covered at 27km/h average speed on an open road with wind and rain, definitely not the best conditions.\n\nInterestingly, I found this nice [article](https://www.gribble.org/cycling/power_v_speed.html) about the relation between power and speed. Overall, the simulator predicts 150W to maintain an average speed of 27km/h with a 0.5% gradient coefficient, not far from what we have here. 
It's rather challenging to account for the wind, as it can sometimes help or makes things more challenging.\n\n### Calories\n\nDuring that ride, I consumed about 2549 calories, which corresponds to the average daily energy needs for an adult man.\n\n## Conclusion\n\nThis was a lot of fun to ride part of this upcoming event, even more to analyse the underlying data.\n", "supporting": [ "index_files" ], diff --git a/_freeze/posts/2024-09-16-blockr/index/execute-results/html.json b/_freeze/posts/2024-09-16-blockr/index/execute-results/html.json new file mode 100644 index 0000000..405ffa6 --- /dev/null +++ b/_freeze/posts/2024-09-16-blockr/index/execute-results/html.json @@ -0,0 +1,15 @@ +{ + "hash": "609c4615b2a6c19aa6ed84d6da4b5dca", + "result": { + "engine": "knitr", + "markdown": "---\nlayout: post\ntitle: \"Introducing blockr: a no-code dashboard builder for R\"\nimage: https://avatars.githubusercontent.com/u/145758851?s=400&u=1545a34095e8e84f5cb2b292b1e900df59ba7239&v=4\nauthor: David Granjon\ndate: '2024-09-16'\ncategories:\n - R\nformat: \n html:\n code-fold: 'show'\nfilters:\n - shinylive\n---\n\n\n\n\n![](https://avatars.githubusercontent.com/u/145758851?s=400&u=1545a34095e8e84f5cb2b292b1e900df59ba7239&v=4){width=25% fig-align=\"center\"}\n\nSince 2023, [BristolMyersSquibb](https://www.bms.com/), the Y [company](https://the-y-company.com/) and [cynkra](https://cynkra.com) have teamed up to develop a novel __no-code__ solution for R.\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nlibrary(blockr)\n```\n\n::: {.cell-output .cell-output-stderr}\n\n```\n\nAttaching package: 'blockr'\n```\n\n\n:::\n\n::: {.cell-output .cell-output-stderr}\n\n```\nThe following object is masked from 'package:graphics':\n\n layout\n```\n\n\n:::\n\n```{.r .cell-code}\nlibrary(pracma)\n```\n:::\n\n\n\n\n## Introduction\n\nblockr is an R package designed to democratize __data analysis__ by providing a __flexible__, __intuitive__, and __code-free__ approach to building data pipelines. It has 2 main user targets:\n\n1. On the one hand, it empowers __non technical__ users to create insightful data workflows using __pre-built__ blocks that can be easily connected, all without writing a single line of code.\n2. On the other hand, it provides developers with a set of tools to seamlessly create new blocks, thereby enhancing the entire framework and fostering __collaboration__ within organizations teams.\n\nblockr is data __agnostic__, meaning it can work with any kind of dataset, that is pharmaceutical data or sport analytics data. It builds on top of [shiny](https://shiny.posit.co/) to ensure real time feedback to any data change. Finally, it allows to export code to create __reproducible__ data analysis.\n\n\n## Getting started\n\n### As a simple user\n\nAs a simple user, you're not expected to write any single line of code to use blockr. You can use the below kitchen sink to get started. This example is based on the palmer penguins data and running a single stack with 3 blocks: the first block to select the data, another one to create the plot and then add the points to it.\n\nblockr has a its own __validation__ system. For instance, using the below example, you can try to press return on the first block select box (penguins is the selected default). You'll notice an immediate feedback message. A global message is displayed in the block upper middle part: \"1 error(s) found in this block\". You get more detailed mesages next to the faulty input(s): \"selected value(s) not among provided choices\". 
You can repeat the same experience with the last plot layer block, by emptying the color and shape select inputs. Error messages can accumulate.\n\nYou can dynamically add blocks to a current __stack__, that gathers a set of related blocks. You can think a stack as a data analysis __recipe__ as in cooking, where blocks are instructions. To add a new block, you can click on the `+` icon on the stack top right corner. This opens a sidebar on the left side, where one may search for blocks that are compatible with the current state of the pipeline. With an empty stack, only entry point blocks are suggested, so you can import data. Then, after clicking on the block, the suggestion list changes so you can, for instance, filter data or select only a subset of columns, and much more.\n\n:::{.column-page}\n```{shinylive-r}\n#| standalone: true\n#| components: [viewer]\n#| column: screen-inset-shaded\n#| viewerHeight: 800\nwebr::install(\"blockr\", repos = c(\"https://bristolmyerssquibb.github.io/webr-repos\", \"https://repo.r-wasm.org\"))\n\nlibrary(blockr)\nlibrary(palmerpenguins)\nlibrary(ggplot2)\n\nnew_ggplot_block <- function(col_x = character(), col_y = character(), ...) {\n\n data_cols <- function(data) colnames(data)\n\n new_block(\n fields = list(\n x = new_select_field(col_x, data_cols, type = \"name\"),\n y = new_select_field(col_y, data_cols, type = \"name\")\n ),\n expr = quote(\n ggplot(mapping = aes(x = .(x), y = .(y)))\n ),\n class = c(\"ggplot_block\", \"plot_block\"),\n ...\n )\n}\n\nnew_geompoint_block <- function(color = character(), shape = character(), ...) {\n\n data_cols <- function(data) colnames(data$data)\n\n new_block(\n fields = list(\n color = new_select_field(color, data_cols, type = \"name\"),\n shape = new_select_field(shape, data_cols, type = \"name\")\n ),\n expr = quote(\n geom_point(aes(color = .(color), shape = .(shape)), size = 2)\n ),\n class = c(\"geompoint_block\", \"plot_layer_block\", \"plot_block\"),\n ...\n )\n}\n\nstack <- new_stack(\n data_block = new_dataset_block(\"penguins\", \"palmerpenguins\"),\n plot_block = new_ggplot_block(\"flipper_length_mm\", \"body_mass_g\"),\n layer_block = new_geompoint_block(\"species\", \"species\")\n)\nserve_stack(stack)\n```\n:::\n\n#### Toward more complex analysis\n\nLet's consider this [dataset](https://www.kaggle.com/datasets/heesoo37/120-years-of-olympic-history-athletes-and-results?select=athlete_events.csv), which contains 120 years of olympics athletes data until Rio in 2016. In the below kitchen sink, we first add an upload block:\n\n1. Download the [dataset](https://www.kaggle.com/datasets/heesoo37/120-years-of-olympic-history-athletes-and-results?select=athlete_events.csv) file locally.\n2. CLick on `Add stack`.\n3. Click on the stack `+` button and search for `browser`, then select the `new_filesbrowser_block`.\n4. Uncollapse the stack by click on the top right arrow icon. This makes the upload block file input visible.\n5. Click on `File select` and select the downloaded file at step 1 (`athlete_events.csv`).\n6. As we obtain a csv file, we must parse it with a `new_csv_block`. Repeat step 3 to add the `new_csv_block`. The table is `271116` rows and `15` columns.\n7. Add a `new_filter_block` and select `Sex` as column and then `F` in the values input. We leave the comparison to `==` and click on the `Run` button. Notice we now have 74522 rows.\n8. Add a `new_mutate_block` with the following expression: `birth_year = Year - Age` (this gives us an approximate birth year). 
Click on submit.\n\nFrom now on, we leave the first stack as is and will reuse it in other stacks. We want to display the average height distribution for female athletes. Let's do it below.\n\n9. Create a new stack by clicking on `Add stack`.\n10. Add it a `new_result_block`. This allows to import the data from the first stack (and potentially any stack from the dashboard). If you don't see any data, select another stack name from the dropdown menu.\n11. Add a `new_ggplot_block`, leave `x` as default function and select `Height` as variable in the columns input.\n12. Add a `new_geomhistogram_block`. Now we have our distribution plot.\n\nAlternatively, you could remove the 2 plot blocks and add a `new_summarize_block` using `mean` as function and `Height` as column (result: 168 cm).\n\nIn the following, we create a look-up table to be able to retrieve the athlete names based on their `ID`.\n\n13. Create a new stack.\n14. Add a result block to import data from the very first stack.\n15. Add a `new_select_block` and only select `ID`, `Name`, `birth_year`, `Team` and `Sport` as columns.\n\nOur goal is now to find which athlete did 2 or more different sports.\n\n16. Create a new stack.\n17. Add a result block to import data from the very first stack.\n18. Add a `new_filter_block` , select `Medal` as column, `!=` as comparison operator and leave the value empty. Click on run, which will only get athletes with medals.\n19. Add a `new_group_by_block`, grouping by `ID` (as some athletes have the same name).\n20. Add a `new_summarize_block` by choising the function `n_distinct` applied on the `Sport` columns.\n21. Add a `new_filter_block` , select `N_DISTINCT` as column, `>=` as comparison operator and set the value to 2. Click on run. This gives us the athletes that are doing 2 sports or more.\n22. Add a `new_join_block`. Select `left_join` as join function, select the third stack (lookup table) as join table and `ID` as column.\n23. Add a `new_arrange_block` for the `birth_year` column.\n\nAs a conclusion, Hjrdis Viktoria Tpel (1904) was the first recorded athlete to compete in 2 different sports, swimming and diving for Sweden. Lauryn Chenet Williams (1984) is the latest for US with Athletics and Bobsleigh. It's actually quite amazing to see people competing in two quite unrelated sports like swimming and handbain the case of Roswitha Krause.\n\n:::{.column-screen-inset}\n```{shinylive-r}\n#| standalone: true\n#| components: [viewer]\n#| column: screen-inset-shaded\n#| viewerHeight: 800\nwebr::install(\"blockr\", repos = c(\"https://bristolmyerssquibb.github.io/webr-repos\", \"https://repo.r-wasm.org\"))\nwebr::install(\"blockr.ggplot2\", repos = c(\"https://bristolmyerssquibb.github.io/webr-repos\", \"https://repo.r-wasm.org\"))\n\nlibrary(blockr)\nlibrary(blockr.ggplot2)\n\noptions(shiny.maxRequestSize = 100*1024^2)\ndo.call(set_workspace, args = list(title = \"My workspace\"))\nserve_workspace(clear = FALSE)\n```\n:::\n\nAs an end-user, you are not supposed to write code. As such, if you think anything is missing, you can open an issue [here](https://github.com/BristolMyersSquibb/blockr/issues), or ask any developer you are working with to create new blocks. This leads us to the second part of this blog post ... How to use blockr as a developers?\n\n### As a developer\n\nHow to install it:\n\n```r\npak::pak(\"BristolMyersSquibb/blockr\")\n```\n\nblockr can't provide any single data manipulation or visualization block. That's the reason why we made it easily __extensible__. 
You can get an introduction to blockr for developers [here](https://bristolmyerssquibb.github.io/blockr/articles/blockr.html#blockr-for-developers).\n\nIn the following, we create an ordinary differential equations solver block using the pracma package. We choose the Lorenz [attractor](https://en.wikipedia.org/wiki/Lorenz_system). With R, equations may be written as:\n\n```r\nlorenz <- function(t, y, parms) {\n c(\n X = parms[1] * y[1] + y[2] * y[3],\n Y = parms[2] * (y[2] - y[3]),\n Z = -y[1] * y[2] + parms[3] * y[2] - y[3]\n )\n}\n```\n\nwhere `t` is the time, `y` a vector of solutions and `parms` the various parameters. If you are familiar with [deSolve](https://cran.r-project.org/web/packages/deSolve/index.html), equations are defined with similar functions. For this blog post, we selected pracma as deSolve does not run in shinylive, so you would not be able to see the embedded demonstration.\n\n### Add interactivity with the __fields__\n\nWe want to add interactivity for the 3 different parameters. Hence, we create our new block function with 3 __fields__ inside a list. Since the expected values are numbers, we leverage the `new_numeric_field`. Parameters are only explicitly shown for the first field:\n\n```r\nnew_ode_block <- function(...) {\n fields <- list(\n a = new_numeric_field(value = -8 / 3, min = -10, max = 20),\n b = new_numeric_field(-10, -50, 100),\n c = new_numeric_field(28, 1, 100)\n )\n # TBD\n # ...\n}\n```\n\nAs you may imagine, these fields are subsequently translated into shiny inputs, that is `numericInput` in our example. If you face a situation where you need to implement a custom field, not included in blockr, you can read this [vignette](https://bristolmyerssquibb.github.io/blockr/articles/new-field.html).\n\n\n### Create the block expression\nAs a next step, we __instantiate__ our block with the `new_block` blockr __constructor__:\n\n```r\nnew_block(\n fields = fields,\n expr = quote(),\n ...,\n class = ,\n submit = FALSE\n)\n```\n\nA block is composed of fields, a quoted __expression__ involving fields (to delay the evaluation), some __classes__ which control the block behavior, and extra parameters passed with `...`. Finally, `submit` allows delaying the block evaluation until the user clicks a submit button (`FALSE` by default). This prevents triggering unwanted intensive computations.\n\nIn our example, the expression calls the `ode45` function. Notice the use of `substitute` to inject the `lorenz` function within the expression. This is necessary since `lorenz` is defined outside of the expression, and using `quote` would fail. Fields are invoked with `.(field_name)`, a rather strange notation, required by `bquote` to process the expression. It is not mandatory to understand this underlying technical detail, but this convention must be respected. You may also notice that some parameters like the initial conditions `y0` or time values are hardcoded. We leave it to the reader to transform them into fields, as an exercise:\n\n```r\nnew_block(\n fields = fields,\n expr = substitute(\n as.data.frame(\n ode45(\n fun,\n y0 = c(X = 1, Y = 1, Z = 1),\n t0 = 0,\n tfinal = 100,\n parms = c(.(a), .(b), .(c))\n )\n ),\n list(fun = lorenz)\n )\n # TBD\n)\n```\n\n### Add the right classes\n\nWe give our block 2 classes, namely `ode_block` and `data_block`:\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nnew_ode_block <- function(...) 
{\n fields <- list(\n a = new_numeric_field(-8 / 3, -10, 20),\n b = new_numeric_field(-10, -50, 100),\n c = new_numeric_field(28, 1, 100)\n )\n\n new_block(\n fields = fields,\n expr = substitute(\n as.data.frame(\n ode45(\n fun,\n y0 = c(X = 1, Y = 1, Z = 1),\n t0 = 0,\n tfinal = 100,\n parms = c(.(a), .(b), .(c))\n )\n ),\n list(fun = lorenz)\n ),\n ...,\n class = c(\"ode_block\", \"data_block\")\n )\n}\n```\n:::\n\n\n\n\nAs explained earlier, they are required to control the block behavior, as blockr is built with [S3](https://adv-r.hadley.nz/s3.html). For instance, `data_block` objects have a specific __evaluation__ method to calculate the expression:\n\n```r\nevaluate_block.data_block <- function (x, ...) \n{\n stopifnot(...length() == 0L)\n eval(generate_code(x), new.env())\n}\n```\n\nwhere `generate_code` processes the block code. __Data__ blocks are considered entry point blocks, as opposed to __transformation__ blocks, which operate on data. Therefore, you may easily understand that the evaluation method for a transform block requires passing the data from the previous block with `%>%`:\n\n```r\nevaluate_block.block <- function (x, data, ...) \n{\n stopifnot(...length() == 0L)\n eval(substitute(data %>% expr, list(expr = generate_code(x))), list(data = data))\n}\n```\n\nIf you want to build a plot block and plot layer blocks, you have to design a specific evaluation method that accounts for the `+` operator required by ggplot2. To learn more about how to create a plot block, you can read this [article](https://bristolmyerssquibb.github.io/blockr/articles/plot-block.html).\n\n### Demo\n\n:::{.column-screen-inset}\n```{shinylive-r}\n#| standalone: true\n#| components: [viewer, editor]\n#| column: screen-inset-shaded\n#| viewerHeight: 800\nwebr::install(\"blockr\", repos = c(\"https://bristolmyerssquibb.github.io/webr-repos\", \"https://repo.r-wasm.org\"))\nwebr::install(\"blockr.ggplot2\", repos = c(\"https://bristolmyerssquibb.github.io/webr-repos\", \"https://repo.r-wasm.org\"))\n\nlibrary(blockr)\nlibrary(pracma)\nlibrary(blockr.ggplot2)\n\nlorenz <- function(t, y, parms) {\n c(\n X = parms[1] * y[1] + y[2] * y[3],\n Y = parms[2] * (y[2] - y[3]),\n Z = -y[1] * y[2] + parms[3] * y[2] - y[3]\n )\n}\n\nnew_ode_block <- function(...) {\n fields <- list(\n a = new_numeric_field(-8 / 3, -10, 20),\n b = new_numeric_field(-10, -50, 100),\n c = new_numeric_field(28, 1, 100)\n )\n\n new_block(\n fields = fields,\n expr = substitute(\n as.data.frame(\n ode45(\n fun,\n y0 = c(X = 1, Y = 1, Z = 1),\n t0 = 0,\n tfinal = 100,\n parms = c(.(a), .(b), .(c))\n )\n ),\n list(fun = lorenz)\n ),\n ...,\n class = c(\"ode_block\", \"data_block\")\n )\n}\n\nstack <- new_stack(\n new_ode_block,\n new_ggplot_block(\n func = c(\"x\", \"y\"),\n default_columns = c(\"y.1\", \"y.2\")\n ),\n new_geompoint_block\n)\nserve_stack(stack)\n```\n:::\n\n### Packaging new blocks: the registry\n\nIn the above example, we define the block on the fly. However, another outstanding feature of blockr is the __registry__, which you can see as a __supermarket__ of blocks. 
From the R side, the registry is an __environment__ that can be extended by developers who bring their own block packages:\n\n\n\n\n```{mermaid}\n%%| mermaid-format: svg\n%%| mermaid-theme: default\nflowchart LR\n subgraph blockr_custom[your_block_package]\n new_block1[New block 1]\n new_block2[New block 2]\n end\n blockr_custom--> |register| registry\n subgraph registry[Registry]\n subgraph select_reg[Select block]\n reg_name[Name: select block]\n reg_descr[Description: select columns in a table]\n reg_classes[Classes: select_block, transform_block]\n reg_input[Input: data.frame]\n reg_output[Output: data.frame]\n reg_package[Package: blockr]\n end\n subgraph filter_reg[Filter block]\n end\n filter_reg --x |unregister| trash['fa:fa-trash']\n end\n```\n\n\n\n\nTo get an overview of all available blocks within the blockr core package, we call `get_registry`:\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nget_registry()\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\n ctor description category\n1 arrange_block Arrange columns transform\n2 csv_block Read a csv dataset parser\n3 dataset_block Choose a dataset from a package data\n4 filesbrowser_block Select files on the server file system data\n5 filter_block filter rows in a table transform\n6 group_by_block Group by columns transform\n7 head_block Select n first rows of dataset transform\n8 join_block Join 2 datasets transform\n9 json_block Read a json dataset parser\n10 mutate_block Mutate block transform\n11 rds_block Read a rds dataset parser\n12 result_block Shows result of another stack as data source data\n13 select_block select columns in a table transform\n14 summarize_block summarize data groups transform\n15 upload_block Upload files from location data\n16 xpt_block Read a xpt dataset parser\n classes input output\n1 arrange_block, transform_block, block data.frame data.frame\n2 csv_block, parser_block, transform_block, block string data.frame\n3 dataset_block, data_block, block data.frame\n4 filesbrowser_block, data_block, block string\n5 filter_block, transform_block, block data.frame data.frame\n6 group_by_block, transform_block, block data.frame data.frame\n7 head_block, transform_block, block data.frame data.frame\n8 join_block, transform_block, block data.frame data.frame\n9 json_block, parser_block, transform_block, block string data.frame\n10 mutate_block, transform_block, block data.frame data.frame\n11 rds_block, parser_block, transform_block, block string data.frame\n12 result_block, data_block, block data.frame\n13 select_block, transform_block, block data.frame data.frame\n14 summarize_block, transform_block, block data.frame data.frame\n15 upload_block, data_block, block string\n16 xpt_block, parser_block, transform_block, block string data.frame\n package\n1 blockr\n2 blockr\n3 blockr\n4 blockr\n5 blockr\n6 blockr\n7 blockr\n8 blockr\n9 blockr\n10 blockr\n11 blockr\n12 blockr\n13 blockr\n14 blockr\n15 blockr\n16 blockr\n```\n\n\n:::\n:::\n\n\n\n\nThis function returns a dataframe containing information about blocks such as their constructors, like `new_ode_block`, the description, the category (data, transform, plot ... 
this is user defined), classes, accepted input, returned output and package.\n\nTo register a block we call `register_block` (or `register_blocks` for multiple blocks):\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nregister_my_blocks <- function() {\n register_block(\n constructor = new_ode_block,\n name = \"ode block\",\n description = \"Computes the Lorenz attractor solutions\",\n classes = c(\"ode_block\", \"data_block\"),\n input = NA_character_,\n output = \"data.frame\",\n package = \"\",\n category = \"data\"\n )\n # You can register any other blocks here ...\n}\n```\n:::\n\n\n\n\nwhere the empty `package` string must be replaced by your real package name.\n\nWithin a `zzz.R` script, you can ensure any block is registered when the package loads by using a __hook__:\n\n```r\n.onLoad <- function(libname, pkgname) {\n register_my_blocks()\n invisible(NULL)\n}\n```\n\nAfter registration, you can check that the registry is updated by looking for the ode block:\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nregister_my_blocks()\nreg <- get_registry()\nreg[reg$package == \"\", ]\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\n ctor description category\n11 ode_block Computes the Lorenz attractor solutions data\n classes input output package\n11 ode_block, data_block, block data.frame \n```\n\n\n:::\n:::\n",
+ "supporting": [],
+ "filters": [
+ "rmarkdown/pagebreak.lua"
+ ],
+ "includes": {},
+ "engineDependencies": {},
+ "preserve": {},
+ "postProcess": true
+ }
+}
\ No newline at end of file
diff --git a/posts/2020-11-02-jitsi-load-balanced/index.qmd b/posts/2020-11-02-jitsi-load-balanced/index.qmd
index c05c449..d80b6d4 100644
--- a/posts/2020-11-02-jitsi-load-balanced/index.qmd
+++ b/posts/2020-11-02-jitsi-load-balanced/index.qmd
@@ -28,7 +28,7 @@ At cynkra, while we have been running our own Jitsi instance quite happily for s
## The Challenge
cynkra actively supports the local [Zurich R User Group](https://www.meetup.com/Zurich-R-User-Group/).
-For one of their [recent meetings](https://www.meetup.com/Zurich-R-User-Group/events/273910283/), about 100 people RSVP'ed.
+For one of their [recent meetings](https://www.meetup.com/Zurich-R-User-Group), about 100 people RSVP'ed.
When browsing the load capabilities of a single Jitsi instance, we found that the stock setup begins to experience some challenges at around 35 people and fails at around 70 people.
The limiting factor appears to be the "videobridge".

diff --git a/posts/2024-09-09-zurich-roadcycling-wc-2024/index.qmd b/posts/2024-09-09-zurich-roadcycling-wc-2024/index.qmd
index 6e46870..7a163be 100644
--- a/posts/2024-09-09-zurich-roadcycling-wc-2024/index.qmd
+++ b/posts/2024-09-09-zurich-roadcycling-wc-2024/index.qmd
@@ -195,7 +195,7 @@ while (i <= n_iter) {
}
```

-Note that the website gain is officially 4470m whereas ours is `r round(gain)`m. This difference might be explained by the usage of different __smoothing algorithms__ for the [elevation](https://support.strava.com/hc/en-us/articles/216919447-Elevation). Funnily, we all had different bike computers and none of us had the same elevation result at the end of the ride.
+Note that the website gain is officially 4470m whereas ours is `r round(gain)`m. This difference might be explained by the usage of different __smoothing algorithms__ for the [elevation](https://larahamilton.com/strava-elevation/). Funnily, we all had different bike computers and none of us had the same elevation result at the end of the ride.

We split the trace into 2 parts. 
The first loop takes place around Winterthur, north of Zurich. Then, a transition leads to the __city loop__, which is repeated 7 times.
diff --git a/posts/2024-09-16-blockr/index.qmd b/posts/2024-09-16-blockr/index.qmd
new file mode 100644
index 0000000..ebfa3da
--- /dev/null
+++ b/posts/2024-09-16-blockr/index.qmd
@@ -0,0 +1,426 @@
+---
+layout: post
+title: "Introducing blockr: a no-code dashboard builder for R"
+image: https://avatars.githubusercontent.com/u/145758851?s=400&u=1545a34095e8e84f5cb2b292b1e900df59ba7239&v=4
+author: David Granjon
+date: '2024-09-16'
+categories:
+  - R
+format:
+  html:
+    code-fold: 'show'
+filters:
+  - shinylive
+---
+
+![](https://avatars.githubusercontent.com/u/145758851?s=400&u=1545a34095e8e84f5cb2b292b1e900df59ba7239&v=4){width=25% fig-align="center"}
+
+Since 2023, [BristolMyersSquibb](https://www.bms.com/), the Y [company](https://the-y-company.com/) and [cynkra](https://cynkra.com) have teamed up to develop a novel __no-code__ solution for R.
+
+```{r, setup}
+library(blockr)
+library(pracma)
+```
+
+## Introduction
+
+blockr is an R package designed to democratize __data analysis__ by providing a __flexible__, __intuitive__, and __code-free__ approach to building data pipelines. It targets 2 main user groups:
+
+1. On the one hand, it empowers __non-technical__ users to create insightful data workflows using __pre-built__ blocks that can be easily connected, all without writing a single line of code.
+2. On the other hand, it provides developers with a set of tools to seamlessly create new blocks, thereby enhancing the entire framework and fostering __collaboration__ within organization teams.
+
+blockr is data __agnostic__, meaning it can work with any kind of dataset, be it pharmaceutical data or sports analytics data. It builds on top of [shiny](https://shiny.posit.co/) to provide real-time feedback on any data change. Finally, it allows exporting code to create __reproducible__ data analyses.
+
+
+## Getting started
+
+### As a simple user
+
+As a simple user, you're not expected to write a single line of code to use blockr. You can use the below kitchen sink to get started. This example is based on the palmer penguins data and runs a single stack with 3 blocks: a first block to select the data, another one to create the plot, and a last one to add the points to it.
+
+blockr has its own __validation__ system. For instance, using the below example, you can try to press return on the first block select box (penguins is the selected default). You'll notice an immediate feedback message. A global message is displayed in the block's upper middle part: "1 error(s) found in this block". You get more detailed messages next to the faulty input(s): "selected value(s) not among provided choices". You can repeat the same experiment with the last plot layer block, by emptying the color and shape select inputs. Error messages can accumulate.
+
+You can dynamically add blocks to the current __stack__, which gathers a set of related blocks. You can think of a stack as a data analysis __recipe__ as in cooking, where blocks are instructions. To add a new block, you can click on the `+` icon in the stack's top right corner. This opens a sidebar on the left side, where one may search for blocks that are compatible with the current state of the pipeline. With an empty stack, only entry point blocks are suggested, so you can import data. 
Then, after clicking on the block, the suggestion list changes so you can, for instance, filter data or select only a subset of columns, and much more.
+
+:::{.column-page}
+```{shinylive-r}
+#| standalone: true
+#| components: [viewer]
+#| column: screen-inset-shaded
+#| viewerHeight: 800
+webr::install("blockr", repos = c("https://bristolmyerssquibb.github.io/webr-repos", "https://repo.r-wasm.org"))
+
+library(blockr)
+library(palmerpenguins)
+library(ggplot2)
+
+new_ggplot_block <- function(col_x = character(), col_y = character(), ...) {
+
+  data_cols <- function(data) colnames(data)
+
+  new_block(
+    fields = list(
+      x = new_select_field(col_x, data_cols, type = "name"),
+      y = new_select_field(col_y, data_cols, type = "name")
+    ),
+    expr = quote(
+      ggplot(mapping = aes(x = .(x), y = .(y)))
+    ),
+    class = c("ggplot_block", "plot_block"),
+    ...
+  )
+}
+
+new_geompoint_block <- function(color = character(), shape = character(), ...) {
+
+  data_cols <- function(data) colnames(data$data)
+
+  new_block(
+    fields = list(
+      color = new_select_field(color, data_cols, type = "name"),
+      shape = new_select_field(shape, data_cols, type = "name")
+    ),
+    expr = quote(
+      geom_point(aes(color = .(color), shape = .(shape)), size = 2)
+    ),
+    class = c("geompoint_block", "plot_layer_block", "plot_block"),
+    ...
+  )
+}
+
+stack <- new_stack(
+  data_block = new_dataset_block("penguins", "palmerpenguins"),
+  plot_block = new_ggplot_block("flipper_length_mm", "body_mass_g"),
+  layer_block = new_geompoint_block("species", "species")
+)
+serve_stack(stack)
+```
+:::
+
+#### Toward more complex analysis
+
+Let's consider this [dataset](https://www.kaggle.com/datasets/heesoo37/120-years-of-olympic-history-athletes-and-results?select=athlete_events.csv), which contains 120 years of Olympic athletes data until Rio in 2016. In the below kitchen sink, we first add an upload block:
+
+1. Download the [dataset](https://www.kaggle.com/datasets/heesoo37/120-years-of-olympic-history-athletes-and-results?select=athlete_events.csv) file locally.
+2. Click on `Add stack`.
+3. Click on the stack `+` button and search for `browser`, then select the `new_filesbrowser_block`.
+4. Expand the stack by clicking on the top right arrow icon. This makes the upload block file input visible.
+5. Click on `File select` and select the file downloaded at step 1 (`athlete_events.csv`).
+6. As we obtain a csv file, we must parse it with a `new_csv_block`. Repeat step 3 to add the `new_csv_block`. The table has `271116` rows and `15` columns.
+7. Add a `new_filter_block` and select `Sex` as column and then `F` in the values input. We leave the comparison as `==` and click on the `Run` button. Notice we now have 74522 rows.
+8. Add a `new_mutate_block` with the following expression: `birth_year = Year - Age` (this gives us an approximate birth year). Click on submit.
+
+From now on, we leave the first stack as is and will reuse it in other stacks. We want to display the average height distribution for female athletes. Let's do it below.
+
+9. Create a new stack by clicking on `Add stack`.
+10. Add a `new_result_block` to it. This allows importing the data from the first stack (and potentially any stack from the dashboard). If you don't see any data, select another stack name from the dropdown menu.
+11. Add a `new_ggplot_block`, leave `x` as default function and select `Height` as variable in the columns input.
+12. Add a `new_geomhistogram_block`. Now we have our distribution plot (a rough code equivalent is sketched below).
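+
+Under the hood, each stack corresponds to plain R code (blockr can export it for reproducibility). As a rough sketch, the first two stacks amount to something like the following dplyr/ggplot2 pipeline. This is not the literal exported code, just an equivalent written by hand; the file path is hypothetical and the column names are those of the Kaggle csv:
+
+```r
+# Hedged sketch of the code the two stacks above roughly correspond to;
+# assumes athlete_events.csv was downloaded to the working directory.
+library(dplyr)
+library(ggplot2)
+
+athletes <- read.csv("athlete_events.csv")
+
+female <- athletes |>
+  filter(Sex == "F") |>             # step 7: keep female athletes
+  mutate(birth_year = Year - Age)   # step 8: approximate birth year
+
+# steps 11-12: height distribution of female athletes
+ggplot(female, aes(x = Height)) +
+  geom_histogram()
+```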
+
+Alternatively, you could remove the 2 plot blocks and add a `new_summarize_block` using `mean` as function and `Height` as column (result: 168 cm).
+
+In the following, we create a look-up table to be able to retrieve the athlete names based on their `ID`.
+
+13. Create a new stack.
+14. Add a result block to import data from the very first stack.
+15. Add a `new_select_block` and only select `ID`, `Name`, `birth_year`, `Team` and `Sport` as columns.
+
+Our goal is now to find which athletes did 2 or more different sports.
+
+16. Create a new stack.
+17. Add a result block to import data from the very first stack.
+18. Add a `new_filter_block`, select `Medal` as column, `!=` as comparison operator and leave the value empty. Click on run, which keeps only athletes with medals.
+19. Add a `new_group_by_block`, grouping by `ID` (as some athletes have the same name).
+20. Add a `new_summarize_block`, choosing the function `n_distinct` applied to the `Sport` column.
+21. Add a `new_filter_block`, select `N_DISTINCT` as column, `>=` as comparison operator and set the value to 2. Click on run. This gives us the athletes doing 2 or more sports.
+22. Add a `new_join_block`. Select `left_join` as join function, select the third stack (lookup table) as join table and `ID` as column.
+23. Add a `new_arrange_block` for the `birth_year` column.
+
+In conclusion, Hjördis Viktoria Töpel (1904) was the first recorded athlete to compete in 2 different sports, swimming and diving for Sweden. Lauryn Chenet Williams (1984) is the most recent for the US, with Athletics and Bobsleigh. It's actually quite amazing to see people competing in two quite unrelated sports, like swimming and handball in the case of Roswitha Krause.
+
+:::{.column-screen-inset}
+```{shinylive-r}
+#| standalone: true
+#| components: [viewer]
+#| column: screen-inset-shaded
+#| viewerHeight: 800
+webr::install("blockr", repos = c("https://bristolmyerssquibb.github.io/webr-repos", "https://repo.r-wasm.org"))
+webr::install("blockr.ggplot2", repos = c("https://bristolmyerssquibb.github.io/webr-repos", "https://repo.r-wasm.org"))
+
+library(blockr)
+library(blockr.ggplot2)
+
+options(shiny.maxRequestSize = 100*1024^2)
+do.call(set_workspace, args = list(title = "My workspace"))
+serve_workspace(clear = FALSE)
+```
+:::
+
+As an end-user, you are not supposed to write code. As such, if you think anything is missing, you can open an issue [here](https://github.com/BristolMyersSquibb/blockr/issues), or ask any developer you are working with to create new blocks. This leads us to the second part of this blog post ... How to use blockr as a developer?
+
+### As a developer
+
+How to install it:
+
+```r
+pak::pak("BristolMyersSquibb/blockr")
+```
+
+blockr can't provide every single data manipulation or visualization block. That's the reason why we made it easily __extensible__. You can get an introduction to blockr for developers [here](https://bristolmyerssquibb.github.io/blockr/articles/blockr.html#blockr-for-developers).
+
+In the following, we create an ordinary differential equations solver block using the pracma package. We choose the Lorenz [attractor](https://en.wikipedia.org/wiki/Lorenz_system). With R, equations may be written as:
+
+```r
+lorenz <- function(t, y, parms) {
+  c(
+    X = parms[1] * y[1] + y[2] * y[3],
+    Y = parms[2] * (y[2] - y[3]),
+    Z = -y[1] * y[2] + parms[3] * y[2] - y[3]
+  )
+}
+```
+
+where `t` is the time, `y` a vector of solutions and `parms` the various parameters.
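+
+Before wrapping this in a block, you can sanity-check the system by calling the solver directly. This is a minimal sketch reusing the same `ode45` call that the block wraps later in this post, with the same parameter values (-8/3, -10, 28) used in the demo below:
+
+```r
+library(pracma)
+
+# Solve the Lorenz system on [0, 100], starting from (1, 1, 1);
+# as in the block expression below, parms is forwarded to lorenz().
+sol <- as.data.frame(
+  ode45(
+    lorenz,
+    y0 = c(X = 1, Y = 1, Z = 1),
+    t0 = 0,
+    tfinal = 100,
+    parms = c(-8 / 3, -10, 28)
+  )
+)
+head(sol)
+```
+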
+If you are familiar with [deSolve](https://cran.r-project.org/web/packages/deSolve/index.html), equations are defined with similar functions. For this blog post, we selected pracma as deSolve does not run in shinylive, so you would not be able to see the embedded demonstration.
+
+### Add interactivity with the __fields__
+
+We want to add interactivity for the 3 different parameters. Hence, we create our new block function with 3 __fields__ inside a list. Since the expected values are numbers, we leverage the `new_numeric_field`. Parameters are only explicitly shown for the first field:
+
+```r
+new_ode_block <- function(...) {
+  fields <- list(
+    a = new_numeric_field(value = -8 / 3, min = -10, max = 20),
+    b = new_numeric_field(-10, -50, 100),
+    c = new_numeric_field(28, 1, 100)
+  )
+  # TBD
+  # ...
+}
+```
+
+As you may imagine, these fields are subsequently translated into shiny inputs, that is `numericInput` in our example. If you face a situation where you need to implement a custom field, not included in blockr, you can read this [vignette](https://bristolmyerssquibb.github.io/blockr/articles/new-field.html).
+
+
+### Create the block expression
+As a next step, we __instantiate__ our block with the `new_block` blockr __constructor__:
+
+```r
+new_block(
+  fields = fields,
+  expr = quote(),
+  ...,
+  class = ,
+  submit = FALSE
+)
+```
+
+A block is composed of fields, a quoted __expression__ involving fields (to delay the evaluation), some __classes__ which control the block behavior, and extra parameters passed with `...`. Finally, `submit` allows delaying the block evaluation until the user clicks a submit button (`FALSE` by default). This prevents triggering unwanted intensive computations.
+
+In our example, the expression calls the `ode45` function. Notice the use of `substitute` to inject the `lorenz` function within the expression. This is necessary since `lorenz` is defined outside of the expression, and using `quote` would fail. Fields are invoked with `.(field_name)`, a rather strange notation, required by `bquote` to process the expression. It is not mandatory to understand this underlying technical detail, but this convention must be respected. You may also notice that some parameters like the initial conditions `y0` or time values are hardcoded. We leave it to the reader to transform them into fields, as an exercise:
+
+```r
+new_block(
+  fields = fields,
+  expr = substitute(
+    as.data.frame(
+      ode45(
+        fun,
+        y0 = c(X = 1, Y = 1, Z = 1),
+        t0 = 0,
+        tfinal = 100,
+        parms = c(.(a), .(b), .(c))
+      )
+    ),
+    list(fun = lorenz)
+  )
+  # TBD
+)
+```
+
+### Add the right classes
+
+We give our block 2 classes, namely `ode_block` and `data_block`:
+
+```{r}
+new_ode_block <- function(...) {
+  fields <- list(
+    a = new_numeric_field(-8 / 3, -10, 20),
+    b = new_numeric_field(-10, -50, 100),
+    c = new_numeric_field(28, 1, 100)
+  )
+
+  new_block(
+    fields = fields,
+    expr = substitute(
+      as.data.frame(
+        ode45(
+          fun,
+          y0 = c(X = 1, Y = 1, Z = 1),
+          t0 = 0,
+          tfinal = 100,
+          parms = c(.(a), .(b), .(c))
+        )
+      ),
+      list(fun = lorenz)
+    ),
+    ...,
+    class = c("ode_block", "data_block")
+  )
+}
+```
+
+As explained earlier, they are required to control the block behavior, as blockr is built with [S3](https://adv-r.hadley.nz/s3.html). For instance, `data_block` objects have a specific __evaluation__ method to calculate the expression:
+
+```r
+evaluate_block.data_block <- function (x, ...) 
+{
+  stopifnot(...length() == 0L)
+  eval(generate_code(x), new.env())
+}
+```
+
+where `generate_code` processes the block code. __Data__ blocks are considered entry point blocks, as opposed to __transformation__ blocks, which operate on data. Therefore, you may easily understand that the evaluation method for a transform block requires passing the data from the previous block with `%>%`:
+
+```r
+evaluate_block.block <- function (x, data, ...)
+{
+  stopifnot(...length() == 0L)
+  eval(substitute(data %>% expr, list(expr = generate_code(x))), list(data = data))
+}
+```
+
+If you want to build a plot block and plot layer blocks, you have to design a specific evaluation method that accounts for the `+` operator required by ggplot2. To learn more about how to create a plot block, you can read this [article](https://bristolmyerssquibb.github.io/blockr/articles/plot-block.html).
+
+### Demo
+
+:::{.column-screen-inset}
+```{shinylive-r}
+#| standalone: true
+#| components: [viewer, editor]
+#| column: screen-inset-shaded
+#| viewerHeight: 800
+webr::install("blockr", repos = c("https://bristolmyerssquibb.github.io/webr-repos", "https://repo.r-wasm.org"))
+webr::install("blockr.ggplot2", repos = c("https://bristolmyerssquibb.github.io/webr-repos", "https://repo.r-wasm.org"))
+
+library(blockr)
+library(pracma)
+library(blockr.ggplot2)
+
+lorenz <- function(t, y, parms) {
+  c(
+    X = parms[1] * y[1] + y[2] * y[3],
+    Y = parms[2] * (y[2] - y[3]),
+    Z = -y[1] * y[2] + parms[3] * y[2] - y[3]
+  )
+}
+
+new_ode_block <- function(...) {
+  fields <- list(
+    a = new_numeric_field(-8 / 3, -10, 20),
+    b = new_numeric_field(-10, -50, 100),
+    c = new_numeric_field(28, 1, 100)
+  )
+
+  new_block(
+    fields = fields,
+    expr = substitute(
+      as.data.frame(
+        ode45(
+          fun,
+          y0 = c(X = 1, Y = 1, Z = 1),
+          t0 = 0,
+          tfinal = 100,
+          parms = c(.(a), .(b), .(c))
+        )
+      ),
+      list(fun = lorenz)
+    ),
+    ...,
+    class = c("ode_block", "data_block")
+  )
+}
+
+stack <- new_stack(
+  new_ode_block,
+  new_ggplot_block(
+    func = c("x", "y"),
+    default_columns = c("y.1", "y.2")
+  ),
+  new_geompoint_block
+)
+serve_stack(stack)
+```
+:::
+
+### Packaging new blocks: the registry
+
+In the above example, we define the block on the fly. However, another outstanding feature of blockr is the __registry__, which you can see as a __supermarket__ of blocks. From the R side, the registry is an __environment__ that can be extended by developers who bring their own block packages:
+
+```{mermaid}
+%%| mermaid-format: svg
+%%| mermaid-theme: default
+flowchart LR
+  subgraph blockr_custom[your_block_package]
+    new_block1[New block 1]
+    new_block2[New block 2]
+  end
+  blockr_custom--> |register| registry
+  subgraph registry[Registry]
+    subgraph select_reg[Select block]
+      reg_name[Name: select block]
+      reg_descr[Description: select columns in a table]
+      reg_classes[Classes: select_block, transform_block]
+      reg_input[Input: data.frame]
+      reg_output[Output: data.frame]
+      reg_package[Package: blockr]
+    end
+    subgraph filter_reg[Filter block]
+    end
+    filter_reg --x |unregister| trash['fa:fa-trash']
+  end
+```
+
+To get an overview of all available blocks within the blockr core package, we call `get_registry`:
+
+```{r}
+get_registry()
+```
+
+This function returns a dataframe containing information about blocks such as their constructors, like `new_ode_block`, the description, the category (data, transform, plot ... this is user defined), classes, accepted input, returned output and package.
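+
+Since `get_registry()` returns a plain dataframe, you can query it like any other table. As a small sketch, here is how you could list only the entry point (data) blocks, using the `category`, `ctor` and `description` columns shown in the output above:
+
+```r
+# Keep entry point blocks only; column names as returned by get_registry().
+reg <- get_registry()
+reg[reg$category == "data", c("ctor", "description")]
+```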
+
+To register a block we call `register_block` (or `register_blocks` for multiple blocks):
+
+```{r}
+register_my_blocks <- function() {
+  register_block(
+    constructor = new_ode_block,
+    name = "ode block",
+    description = "Computes the Lorenz attractor solutions",
+    classes = c("ode_block", "data_block"),
+    input = NA_character_,
+    output = "data.frame",
+    package = "",
+    category = "data"
+  )
+  # You can register any other blocks here ...
+}
+```
+
+where the empty `package` string must be replaced by your real package name.
+
+Within a `zzz.R` script, you can ensure any block is registered when the package loads by using a __hook__:
+
+```r
+.onLoad <- function(libname, pkgname) {
+  register_my_blocks()
+  invisible(NULL)
+}
+```
+
+After registration, you can check that the registry is updated by looking for the ode block:
+
+```{r}
+register_my_blocks()
+reg <- get_registry()
+reg[reg$package == "", ]
+```
diff --git a/posts/2024-10-21-seasonal-1.10/index.qmd b/posts/2024-10-21-seasonal-1.10/index.qmd
new file mode 100644
index 0000000..5cc245a
--- /dev/null
+++ b/posts/2024-10-21-seasonal-1.10/index.qmd
@@ -0,0 +1,89 @@
+---
+date: 2024-10-21
+layout: post
+image: seasonalview.png
+author: Christoph Sax
+title: 'seasonal 1.10: R-interface to X-13ARIMA-SEATS'
+categories:
+- R
+- time-series
+badges:
+- label: CRAN
+  bg: bg-warning
+---
+
+We are happy to announce that the latest [CRAN version](https://cran.r-project.org/package=seasonal) of *seasonal* fixes several bugs and makes it easier to read specialized output from X-13ARIMA-SEATS. See [here](https://github.com/christophsax/seasonal/blob/main/NEWS.md#110) for a complete list of changes. In addition, the accompanying *seasonalview* package has been [updated](https://cran.r-project.org/package=seasonalview) and finally gets rid of some annoying warning messages.
+
+
+## What is seasonal adjustment?
+
+Time series data often display recurring seasonal patterns throughout the year. For instance, unemployment rates in the United States typically rise from January to March and again in June and July. By applying seasonal adjustment, analysts can identify and remove these predictable annual patterns, allowing for clearer interpretation of fundamental changes in the data. The R package *seasonal* provides a powerful and user-friendly way to perform this task in R, leveraging the X-13ARIMA-SEATS procedure developed by the [U.S. Census Bureau](https://www.census.gov/data/software/x13as.X-13ARIMA-SEATS.html).
+
+
+## Getting started in R
+
+To get started, use the `seas` function on any time series:
+
+```{r}
+library(seasonal)
+m <- seas(AirPassengers)
+summary(m)
+```
+
+By default, `seas` calls the automatic procedures of X-13ARIMA-SEATS to perform a seasonal adjustment that works well in most circumstances.
+
+To extract the final series, use the `final()` function:
+
+```{r, eval = FALSE}
+final(m)
+```
+
+To plot the results, use the `plot()` function:
+
+```{r}
+plot(m)
+```
+
+## Input and output
+
+In *seasonal*, it is possible to use the complete syntax of
+X-13ARIMA-SEATS. The X-13ARIMA-SEATS syntax uses *specs* and *arguments*, with each spec
+optionally containing some arguments. For
+example, in order to set the 'variables' argument of the 'regression' spec equal
+to `td` and `ao1955.jan`, the input to `seas` can be specified like this:
+
+```{r, eval = FALSE}
+m <- seas(AirPassengers, regression.variables = c("td", "ao1955.jan"))
+```
+
+*seasonal* has a flexible mechanism to read data from X-13ARIMA-SEATS. With the
+`series` function, it is possible to extract any output that can be
+generated by X-13ARIMA-SEATS. For example, the following command returns the
+forecasts of the ARIMA model as a time series:
+
+```{r, eval = FALSE}
+m <- seas(AirPassengers)
+series(m, "forecast.forecasts")
+```
+
+
+## Graphical user interface
+
+The *seasonalview* package provides a graphical user interface to explore the results of the seasonal adjustment. Use the `view()` function to open it:
+
+```{r, eval = FALSE}
+view(m)
+```
+
+![Seasonalview GUI](seasonalview.png)
+
+This interface allows for interactive exploration and adjustment of seasonal models, making it easier to fine-tune your adjustments and visualize the results.
+
+
+## More information
+
+For a more detailed introduction, check our [article in the *Journal of Statistical Software*](https://doi.org/10.18637/jss.v087.i11) or visit [the package website](http://www.seasonal.website). The package website also allows you to upload your own data and explore the results interactively.
+
+You can report bugs, ask questions, or contribute to the development on our [GitHub repository](https://github.com/christophsax/seasonal). Thanks for using *seasonal*!
+
+
diff --git a/posts/2024-10-21-seasonal-1.10/seasonalview.png b/posts/2024-10-21-seasonal-1.10/seasonalview.png
new file mode 100644
index 0000000..5e34c0a
Binary files /dev/null and b/posts/2024-10-21-seasonal-1.10/seasonalview.png differ