📝 Conditional Surveys depending on trip characteristics #1129

Merged · 4 commits · Mar 27, 2024

Conversation

JGreenlee
Collaborator

We plan to support a conditional approach to surveys: depending on characteristics of the trip / timelineEntry, it may prompt for one survey or another. This involves rearranging UserInputButton so that the survey name is not hardcoded as "TripConfirmSurvey". Instead, survey names and button labels will be pulled from the config. Each survey that could show conditionally has a showsIf field, which is evaluated at runtime in conditionalSurveys.ts. The first survey whose condition matches is the one that is used for that timelineEntry.

This is a draft, and I don't expect it to work as-is, but it should serve as an outline of the planned approach.

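As a rough illustration of the planned selection logic, here is a minimal sketch; the names below are placeholders, not the actual identifiers in conditionalSurveys.ts:

```ts
// Sketch only: illustrative names, not the exact code in conditionalSurveys.ts
type SurveyButtonConfig = { surveyName: string; showsIf: string };

// Evaluate a `showsIf` expression with the timelineEntry's fields (and any helpers) as the scope
const scopedEval = (script: string, scope: Record<string, any>) =>
  Function(...Object.keys(scope), `return ${script}`)(...Object.values(scope));

// The first survey whose condition evaluates to true is the one used for this timelineEntry
function surveyForTimelineEntry(buttons: SurveyButtonConfig[], tlEntry: Record<string, any>) {
  return buttons.find((b) => scopedEval(b.showsIf, tlEntry) === true);
}
```
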
@JGreenlee
Collaborator Author

After discussing in e-mission/nrel-openpath-deploy-configs#69, I am proposing that the config structure will look like...

```jsonc
...
  "survey_info": {
    "surveys": {
      "NonEVTripSurvey": {
        "formPath": "<URL>/gpg-non-ev-trip-v1.xml",
        "version": 1,
        "compatibleWith": 1,
        "dataKey": "manual/trip_user_input",
        "labelTemplate": { "en": "Answered" }
      },
      "EVRoamingTripSurvey": {
        "formPath": "<URL>/gpg-ev-roaming-trip-v1.xml",
        "version": 1,
        "compatibleWith": 1,
        "dataKey": "manual/trip_user_input",
        "labelTemplate": { "en": "Answered" }
      },
      "EVReturnTripSurvey": {
        "formPath": "<URL>/gpg-ev-return-trip-v1.xml",
        "version": 1,
        "compatibleWith": 1,
        "dataKey": "manual/trip_user_input",
        "labelTemplate": { "en": "Answered" }
      },
      "UserProfileSurvey": {
        "formPath": "<URL>/gpg-onboarding-v1.xml",
        "version": 1,
        "compatibleWith": 1,
        "dataKey": "manual/demographic_survey",
        "labelTemplate": { "en": "Answered" }
      }
    },
    "buttons": {
      "trip-label": [
        {
          "surveyName": "NonEVTripSurvey",
          "not-filled-in-label": { "en": "Gas Car Survey" },
          "showsIf": "!bt_sensed_mode.startsWith('e_car')"
        },
        {
          "surveyName": "EVRoamingTripSurvey",
          "not-filled-in-label": { "en": "EV Survey" },
          "showsIf": "bt_sensed_mode.startsWith('e_car') && !pointIsWithinBounds(end_loc.coordinates, [[39.719, -105.118], [39.717, -105.115]])"
        },
        {
          "surveyName": "EVReturnTripSurvey",
          "not-filled-in-label": { "en": "EV Survey" },
          "showsIf": "bt_sensed_mode.startsWith('e_car') && pointIsWithinBounds(end_loc.coordinates, [[39.719, -105.118], [39.717, -105.115]])"
        }
      ]
    },
    "trip-labels": "ENKETO"
  },
...
```

...for a program that has a conditional trip survey: either NonEVTripSurvey, EVRoamingTripSurvey, or EVReturnTripSurvey, depending on the Bluetooth-sensed mode and end location of the trip.

Given that the configs are JSON and we can only give the "showsIf" conditions as strings, it will be annoying if they get too lengthy.
To keep them short, the timelineEntry's properties will be exposed directly in the evaluation scope, which lets us reference trip properties directly (e.g. "end_loc" instead of "tlEntry.end_loc") and keeps the conditions a bit more concise and readable.
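
For illustration, a minimal sketch (with hypothetical trip values) of how spreading the timelineEntry into the evaluation scope enables that direct referencing:

```ts
// Sketch only: hypothetical values; helper functions like pointIsWithinBounds would be
// spread into the same scope alongside the timelineEntry's own fields.
const tlEntry = { duration: 2400, end_loc: { type: 'Point', coordinates: [-105.117, 39.718] } };
const scope = { ...tlEntry };
// the expression can say `duration` directly instead of `tlEntry.duration`
Function(...Object.keys(scope), 'return duration >= 1800')(...Object.values(scope)); // => true
```
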
@JGreenlee
Collaborator Author

@jiji14 for visibility

We want to use scopedEval to evaluate boolean expressions. It executes the code but doesn't return the result unless we include 'return ...' in the body.
With this quick change, it returns the result of the evaluation and works on our inline conditionals.
I thought coordinates would be given with latitude first, but the GeoJSON spec has longitude first: https://www.rfc-editor.org/rfc/rfc7946#section-3.1.1

Flipped the logic for this function and added a bit of JSDoc just so it's clear
@JGreenlee
Collaborator Author

JGreenlee commented Mar 1, 2024

I had some extra time today, so I tested out this branch.

First, I tried it with the existing dfc-fermata config just to rule out regressions. It seemed normal.

Then I made local modifications to the config. For the purpose of testing, I used trip duration as a qualifier and my home as a qualifying location.

I wanted something like this:

  • If the trip was 30 minutes (1800 seconds) or more, show me the Gas Car survey.
  • If the trip was under 30 minutes and ended at home, show me the EV Return Trip survey.
  • If the trip was under 30 minutes and not ended at home, show me the EV Roaming Trip survey.

On first try, the UserInputButton did not show! It was because I made a mistake with scopedEval.

```ts
/**
 * @description Executes a JS expression `script` in a restricted `scope`
 * @example scopedEval('console.log(foo)', { foo: 'bar' })
 */
const scopedEval = (script: string, scope: { [k: string]: any }) =>
  Function(...Object.keys(scope), script)(...Object.values(scope));
```

It needs to return the result of the expression; otherwise, it always returns undefined.

```ts
/**
 * @description Executes a JS expression `script` in a restricted `scope`
 * @example scopedEval('console.log(foo)', { foo: 'bar' })
 */
const scopedEval = (script: string, scope: { [k: string]: any }) =>
  Function(...Object.keys(scope), `return ${script}`)(...Object.values(scope));
```

Now it evaluates conditional expressions correctly.
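
For example (hypothetical scope values, assuming the fixed `scopedEval` above):

```ts
scopedEval('duration >= 1800', { duration: 2400 }); // => true
scopedEval('duration >= 1800', { duration: 900 });  // => false (the pre-fix version returned undefined for both)
```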


It is working for trip duration. I see Gas Car surveys for trips >30min and EV surveys for <30min.
However, I am seeing the "EV Roaming" survey for all trips, even the ones at home. Why?


Ah, because of the coordinates format.

The trip object has:

```js
{
  ...
  end_loc: {
    coordinates: [-84.xxxxxx, 39.xxxxxx],
    type: "Point"
  }
}
```

-84 is the longitude and 39 is the latitude. I thought it would be latitude first, but longitude-first is the GeoJSON specification.

So I will modify the pointIsWithinBounds function to accept [lon, lat]. Here is what that looks like now:

```ts
/**
  @description Returns true if the given point is within the given bounds.
    Coordinates are in [longitude, latitude] order, since that is the GeoJSON spec.
  @param pt point to check as [lon, lat]
  @param bounds NW and SE corners as [[lon, lat], [lon, lat]] 
  @returns true if pt is within bounds
  */
  pointIsWithinBounds: (pt: Position, bounds: Position[]) => {
    // pt's lon must be east of, or greater than, NW's lon; and west of, or less than, SE's lon
    const lonInRange = pt[0] > bounds[0][0] && pt[0] < bounds[1][0];
    // pt's lat must be south of, or less than, NW's lat; and north of, or greater than, SE's lat
    const latInRange = pt[1] < bounds[0][1] && pt[1] > bounds[1][1];
    return latInRange && lonInRange;
  },
```
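
As a quick sanity check (hypothetical coordinates, calling it the way a `showsIf` expression would, where the function is available directly in the evaluation scope):

```ts
// bounds are the NW corner first, then the SE corner, both as [lon, lat]
pointIsWithinBounds([-84.500, 39.100], [[-84.510, 39.110], [-84.490, 39.090]]); // => true
pointIsWithinBounds([-84.600, 39.100], [[-84.510, 39.110], [-84.490, 39.090]]); // => false (west of the bounds)
```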

I also updated my home coordinates in the config. This is what it looked like in the end:

```jsonc
{
  ...
  "survey_info": {
    "surveys": {
      "UserProfileSurvey": {
        "formPath": "https://raw.githubusercontent.com/JGreenlee/nrel-openpath-deploy-configs/fermata-demo/survey_resources/dfc-fermata/fermata-onboarding-v0.xml",
        "version": 1,
        "compatibleWith": 1,
        "dataKey": "manual/demographic_survey",
        "labelTemplate": { "en": "Answered" }
      },
      "DfcEvReturnTrip": {
        "formPath": "https://raw.githubusercontent.com/JGreenlee/nrel-openpath-deploy-configs/fermata-demo/survey_resources/dfc-fermata/dfc-ev-return-trip-v0.xml",
        "version": 1,
        "compatibleWith": 1,
        "dataKey": "manual/trip_user_input",
        "labelTemplate": { "en": "Answered" }
      },
      "DfcEvRoamingTrip": {
        "formPath": "https://raw.githubusercontent.com/JGreenlee/nrel-openpath-deploy-configs/fermata-demo/survey_resources/dfc-fermata/dfc-ev-roaming-trip-v0.xml",
        "version": 1,
        "compatibleWith": 1,
        "dataKey": "manual/trip_user_input",
        "labelTemplate": { "en": "Answered" }
      },
      "DfcGasTrip": {
        "formPath": "https://raw.githubusercontent.com/JGreenlee/nrel-openpath-deploy-configs/fermata-demo/survey_resources/dfc-fermata/dfc-gas-trip-v0.xml",
        "version": 1,
        "compatibleWith": 1,
        "dataKey": "manual/trip_user_input",
        "labelTemplate": { "en": "Answered" }
      }
    },
    "buttons": {
      "trip-label": [
        {
          "surveyName": "DfcGasTrip",
          "not-filled-in-label": { "en": "Gas Car Survey" },
          "showsIf": "duration >= 1800"
        },
        {
          "surveyName": "DfcEvRoamingTrip",
          "not-filled-in-label": { "en": "EV Survey" },
          "showsIf": "duration < 1800 && !pointIsWithinBounds(end_loc.coordinates, [[-84.xxx, 39.xxx], [-84.xxx, 39.xxx]])"
        },
        {
          "surveyName": "DfcEvReturnTrip",
          "not-filled-in-label": { "en": "EV Survey" },
          "showsIf": "duration < 1800 && pointIsWithinBounds(end_loc.coordinates, [[-84.xxx, 39.xxx], [-84.xxx, 39.xxx]])"
        }
      ]
    },
    "trip-labels": "ENKETO"
  },
  ...
}
```

Now it is working great! 😁

@JGreenlee
Collaborator Author

trips >30 min have gas car survey (screenshot)

homebound trips have return survey (screenshot)

non-home trips have roaming survey (screenshot)

@JGreenlee JGreenlee marked this pull request as ready for review March 12, 2024 18:55

codecov bot commented Mar 12, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 78.37%. Comparing base (c7e9244) to head (e0e2d75).
Report is 58 commits behind head on master.

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master    #1129      +/-   ##
==========================================
+ Coverage   77.55%   78.37%   +0.82%
==========================================
  Files          28       28
  Lines        1702     1702
  Branches      367      366       -1
==========================================
+ Hits         1320     1334      +14
+ Misses        382      368      -14
```

| Flag | Coverage Δ |
| ---- | ---------- |
| unit | 78.37% <ø> (+0.82%) ⬆️ |

Flags with carried forward coverage won't be shown.

see 4 files with indirect coverage changes

@JGreenlee
Collaborator Author

JGreenlee commented Mar 12, 2024

@jiji14 Can you review this before I pass it on for Shankari to review? Let me know if you have suggestions or questions

@jiji14
Contributor

jiji14 commented Mar 13, 2024

@JGreenlee Thank you for the clear organization! Your documentation and comments are very helpful, and I can understand everything well. I just have a few small questions. I was wondering whether we need to handle the edge case where pt (end_loc.coordinates) is a null value.

```ts
  pointIsWithinBounds: (pt: Position, bounds: Position[]) => {
  },
```

Additionally, I'm curious about the inspiration behind the scopedEval function; it seems quite clever!

@JGreenlee
Collaborator Author

> I was wondering whether we need to handle the edge case where pt (end_loc.coordinates) is a null value.

Good thinking!

I did include a try/catch block around the call to scopedEval, so if end_loc or end_loc.coordinates is undefined, null, or empty, the error will be caught and displayError will be called (which triggers a user-facing popup).

But I do wonder if it would be better to explicitly throw the error from the pointIsWithinBounds function.
Need input from shankari on this.
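
For context, a rough sketch of that guard; this is a sketch only, and the wrapper name, the fallback return value, and the displayError argument shape are assumptions rather than the exact code in conditionalSurveys.ts:

```ts
function safeShowsIf(
  showsIf: string,
  scope: Record<string, any>,
  displayError: (err: unknown, title?: string) => void, // triggers the user-facing popup; signature assumed
): boolean {
  try {
    // throws if the expression references something missing, e.g. a null end_loc.coordinates
    return Function(...Object.keys(scope), `return ${showsIf}`)(...Object.values(scope)) === true;
  } catch (err) {
    displayError(err, 'Error evaluating survey condition');
    return false; // assumption: treat an erroring condition as "does not show"
  }
}
```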

> Additionally, I'm curious about the inspiration behind the scopedEval function; it seems quite clever!

Here's the discussion between shankari and me: e-mission/nrel-openpath-deploy-configs#69 (comment)
This was before I made the code changes, while we were still in the planning stage.

@jiji14 (Contributor) left a comment


Thanks for your comment! Approved it :D

@shankari (Contributor) left a comment


This is still a bit experimental, so I am merging this. I fully anticipate that we will fine-tune it later as we experiment on staging.

@JGreenlee @Abby-Wheelis Also, for the record, we don't have the primary mode stored in the confirmed or composite trips. As I said, we use it in the public dashboard, but we determine it as part of the scaffolding code.
https://github.com/e-mission/em-public-dashboard/blob/434dec58186bad70877df3fd43b599738807a43b/viz_scripts/scaffolding.py#L205

        expanded_ct["primary_mode_non_other"] = participant_ct_df.cleaned_section_summary.apply(lambda md: max(md["distance"], key=md["distance"].get))
        expanded_ct.primary_mode_non_other.replace({"ON_FOOT": "WALKING"}, inplace=True)
        valid_sensed_modes = ["WALKING", "BICYCLING", "IN_VEHICLE", "AIR_OR_HSR", "UNKNOWN"]
        expanded_ct["primary_mode"] = expanded_ct.primary_mode_non_other.apply(lambda pm: "OTHER" if pm not in valid_sensed_modes else pm)

Comment on lines +36 to +39

```ts
if (!tripLabelConfig) {
  // config doesn't specify; use default
  return ['TripConfirmSurvey', t('diary.choose-survey')];
}
```
Contributor


Would it ever make sense to never have a trip-level survey? Similar to the argument for the null match, maybe we want to only show place surveys and not trip surveys? I see that this is what we do now (which is why the time use displays the TripConfirmSurvey although there is no button configured for it), but we might want to revisit that later.

@shankari shankari merged commit d0ed16b into e-mission:master Mar 27, 2024
9 checks passed
JGreenlee added a commit to JGreenlee/nrel-openpath-deploy-configs that referenced this pull request Mar 28, 2024
This will allow us to test:
- conditional surveys (e-mission/e-mission-phone#1129, merged)
- configurable dashboard (e-mission/e-mission-docs#1055, still in progress)
JGreenlee added a commit to JGreenlee/e-mission-common that referenced this pull request Apr 9, 2024
This module allows us to create a 'summary' of how many survey responses have been completed vs. prompted, given a list of trips.
get_surveys_summary returns a dict in the form of:
```
{
  'SurveyNameA': {
    'answered': 4,
    'unanswered': 3,
    'mismatched': 0
  },
  'SurveyNameB': {
    ...
    ...
  }
}
```
These totals represent the number of trips for which a survey was answered / unanswered / mismatched. "Mismatched" means the prompted survey and answered survey are not the same (this could happen if the config was updated causing a different survey to be assigned to the trip).

Calculating this requires us to determine which surveys are prompted for which trips, using the 'showsIf' expressions in the app config. This is handled in conditional_surveys.py, which is basically a Python implementation of conditionalSurveys.ts (added in e-mission/e-mission-phone#1129)
Additional discussion around evaluating the expressions in Python: e-mission/e-mission-docs#1051 (comment)