
Default new ds project 188559300 #650

Merged
merged 35 commits into from
Jan 11, 2025
Commits
0213eab
[188532827]: default to materializing expressions on creation
gergness Nov 6, 2024
047afe6
[188532827]: update failing unit tests
gergness Nov 6, 2024
a724b53
[188532827]: Add test of new default derived behavior
gergness Nov 6, 2024
7e47070
[188532827]: deprecate setting derivation after variable is created
gergness Nov 8, 2024
704f74c
[188532827]: modernize redaction
gergness Nov 8, 2024
2913eca
[188532827]: update fixtures for crunch vignette
gergness Nov 8, 2024
9dfae49
[188532827]: fix bad test option
gergness Nov 8, 2024
18f0a50
[188532827]: fix broken integration tests
gergness Nov 8, 2024
e3cbd66
[188532827]: update README
gergness Nov 8, 2024
43b139e
[188532827]: update vignettes
gergness Nov 8, 2024
1a34fa3
[188532827]: fix circle ci integration tests?
gergness Nov 8, 2024
8ea3c60
update news
gergness Nov 8, 2024
8ede3d6
[188559300]: require project when creating a dataset
gergness Nov 12, 2024
dc6c5f3
[188559300]: Update existing unit tests
gergness Nov 12, 2024
84ce460
[188559300]: newExampleDataset also can set project
gergness Nov 12, 2024
5fcdde1
[188559300]: new tests
gergness Nov 13, 2024
ec754fa
[188559300]: more docs
gergness Nov 13, 2024
4b8c7f0
[188559300]: fixes for integration tests?
gergness Nov 13, 2024
08f5056
[188559300]: add ability to get project folder of a dataset
gergness Nov 13, 2024
6232062
[188559300]: put forks in parent's project folder by default
gergness Nov 13, 2024
777bcb1
[188559300]: loading a dataset gets the same flexibility for project …
gergness Nov 13, 2024
f99b7b0
[188559300]: amend tests about personal folder
gergness Nov 13, 2024
9fc972e
[188559300]: integration test fixes?
gergness Nov 13, 2024
7cf9741
[188559300]: Remove integration tests (redundant to zoom/blob/master/…
gergness Nov 13, 2024
3b41c82
[188559300]: Use existing function
gergness Nov 18, 2024
cc9ca1b
[188559300]: load/delete dataset don't use search by name API any more
gergness Nov 18, 2024
5dcd840
[188559300]: Update tests for new project behavior
gergness Nov 18, 2024
cd7ad62
[188559300]: We don't use by_name any more
gergness Nov 18, 2024
e25ae69
[188559300]: update vignette code
gergness Nov 18, 2024
6d73914
[188559300]: update vignette fixtures
gergness Nov 21, 2024
1509e66
[188559300]: update example code
gergness Nov 21, 2024
4c3980c
[188559300]: fix warnings about `structure(NULL` when setupCrunchAuth…
gergness Nov 22, 2024
cac70e7
[188559300]: update NEWS
gergness Nov 22, 2024
31de9f7
[188559300]: update default to create derived variables again
gergness Jan 9, 2025
30c98e1
[188559300]: update broken vignette
gergness Jan 10, 2025
1 change: 0 additions & 1 deletion NAMESPACE
@@ -577,7 +577,6 @@ importFrom(crayon,italic)
importFrom(crayon,make_style)
importFrom(crayon,red)
importFrom(crayon,underline)
importFrom(curl,curl_escape)
importFrom(curl,curl_version)
importFrom(grDevices,col2rgb)
importFrom(grDevices,colors)
20 changes: 19 additions & 1 deletion NEWS.md
@@ -1,4 +1,22 @@
# crunch 1.30.4 (Development Version)
# crunch 1.31.0 (Development Version)
* Variables can now be created as materialized by default instead of derived,
by setting environment variable `R_CRUNCH_DEFAULT_DERIVED` or option
`crunch.default.derived` to `FALSE`. See `?toVariable` for more information (#648).

* The concept of a personal folder is being removed from the API imminently. This has
a few implications for rcrunch:

* All datasets must be created in a project (e.g. via the `project` argument of `newDataset()`)

* Dataset forks will be created in the same folder as their parent

* Because loading datasets by name doesn't work for datasets in projects, loading a
dataset by name now requires specifying the full project path.

* To make things easier, it is possible to set a default project path with environment
variable `R_CRUNCH_DEFAULT_PROJECT` or option `crunch.default.project`. This will be used
as the default project folder when creating and loading datasets. Forks will still be put
next to their parents.
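The two new defaults described in this NEWS entry can be sketched as session configuration like so (option names come from this release; the project path and the choice of values are made-up examples, not recommendations):

```r
# Hypothetical session setup for the new rcrunch defaults.
options(
    crunch.default.project = "My Team/Tracking Studies", # default folder for newDataset()/loadDataset()
    crunch.default.derived = FALSE                       # materialize new variables on creation
)

# Equivalently, via environment variables set before the session starts:
# R_CRUNCH_DEFAULT_PROJECT="My Team/Tracking Studies"
# R_CRUNCH_DEFAULT_DERIVED="FALSE"
```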

# crunch 1.30.3
* Fix typo which relied on partial argument matching when using the variable catalog cache
20 changes: 18 additions & 2 deletions R/R-to-variable.R
@@ -42,8 +42,20 @@ setGeneric("toVariable", function(x, ...) standardGeneric("toVariable"))

#' @rdname toVariable
#' @export
setMethod("toVariable", "CrunchVarOrExpr", function(x, ...) {
structure(list(derivation = zcl(x), ...), class = "VariableDefinition")
#' @param derived Logical, when `FALSE` indicates a variable should be
#' materialized on creation (saved as data, which can have performance benefits
#' in certain situations) and when `TRUE` indicates it should remain derived
#' (saved as an expression that can update along with the underlying data).
#' Defaults to `TRUE` unless `envOrOption('crunch.default.derived')` has been set.
setMethod("toVariable", "CrunchVarOrExpr", function(
x,
...,
derived = derivedVariableDefault()
) {
structure(
list(derivation = zcl(x), ..., derived = derived),
class = "VariableDefinition"
)
})

#' @rdname toVariable
@@ -219,3 +231,7 @@ categoriesFromLevels <- function(level_vect) {
list(id = i, name = level_vect[i], numeric_value = i, missing = FALSE)
}), list(.no.data)))
}

derivedVariableDefault <- function() {
envOrOption("crunch.default.derived", TRUE, expect_lgl = TRUE)
}
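A minimal sketch of the new `derived` argument in use (the dataset, aliases, and expression here are hypothetical; `ds` is assumed to be an already-loaded `CrunchDataset`):

```r
# Build a variable from an expression but materialize it on creation,
# rather than storing it as a live derivation (per the new argument above).
ds$income_log <- toVariable(
    log(ds$income),
    name = "Log income",
    alias = "income_log",
    derived = FALSE
)

# Or flip the default for the whole session:
options(crunch.default.derived = FALSE)
```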
4 changes: 2 additions & 2 deletions R/append-dataset.R
@@ -36,8 +36,8 @@
#' variables, appended to it.
#' @examples
#' \dontrun{
#' ds <- loadDataset("Survey, 2016")
#' new_wave <- loadDataset("Survey, 2017")
#' ds <- loadDataset("Survey, 2016", project = "client 1")
#' new_wave <- loadDataset("Survey, 2017", project = "client 1")
#' ds <- appendDataset(ds, new_wave)
#' }
#' @export
2 changes: 1 addition & 1 deletion R/archive-and-publish.R
@@ -17,7 +17,7 @@
#' is.published<- publish
#' @examples
#' \dontrun{
#' ds <- loadDataset("mtcars")
#' ds <- loadDataset("mtcars", project = "current")
#' is.draft(ds) # FALSE
#' is.published(ds) # TRUE
#' identical(is.draft(ds), !is.published(ds))
2 changes: 1 addition & 1 deletion R/as-data-frame.R
@@ -129,7 +129,7 @@
paste0(unique(names(ds_out)[dup_csv_names]), collapse = ", ")
)
}
return(csvToDataFrame(ds_out, x, parsing_info, array_strategy, categorical.mode = attr(x, "mode")))

}

csvColInfo <- function(ds, verbose = TRUE) {
@@ -138,7 +138,7 @@
flattened_meta <- flattenVariableMetadata(meta)

orig_aliases <- aliases(flattened_meta)
parent_aliases <- vapply(flattened_meta, function(x) x$parent_alias %||% NA_character_, character(1))

qualified_aliases <- ifelse(
is.na(parent_aliases),
orig_aliases,
@@ -159,7 +159,7 @@
if (verbose) {
msg_rows <- out$cond_qualified_alias != out$orig_alias
if (any(msg_rows)) {
alias_info <- paste0(out$orig_alias[msg_rows], " -> ", out$cond_qualified_alias[msg_rows])

message(
"Some column names are qualified because there were duplicate aliases ",
"in dataset:\n", paste0(alias_info, collapse = ", ")
@@ -179,7 +179,7 @@
array_strategy <- match.arg(array_strategy)
meta <- attr(parsing_info, "meta")
## CrunchDataFrames contain both server variables and local variables.
var_order <- if (inherits(cr_data, "CrunchDataFrame")) names(cr_data) else aliases(allVariables(cr_data))

## Iterate over the names of cr_data to preserve the desired order.
## Nest everything in an extra layer of lists because one layer is removed
out <- unlist(lapply(var_order, function(a) {
@@ -195,12 +195,12 @@
} else {
cp <- columnParser("categorical")
}
subvar_info <- parsing_info[!is.na(parsing_info$parent_alias) & parsing_info$parent_alias == alias(v), ]

cols <- csv_df[, subvar_info$qualified_alias]
if (array_strategy == "alias"){
return(structure(lapply(cols, cp, v, categorical.mode), .Names = subvar_info$cond_qualified_alias))

} else if (array_strategy == "qualified_alias") {
return(structure(lapply(cols, cp, v, categorical.mode), .Names = subvar_info$qualified_alias))

} else { # array_strategy==packed
# Extra list layer to hold the array variable's alias
return(structure(
@@ -217,7 +217,7 @@
}
} else {
type <- type(v)
cp <- switch(type, "numeric" = numericCsvParser, "text" = textCsvParser, columnParser(type))

return(structure(list(cp(csv_df[[a]], v, categorical.mode)), .Names = a))
}
}), recursive = FALSE)
@@ -261,7 +261,7 @@
#' provided to the function, and each row represents a entity.
#' @examples
#' \dontrun{
#' ds <- loadDataset("iris")
#' ds <- loadDataset("iris", project = "ACME")
#' vars <- variables(ds)
#' var_df <- as.data.frame(vars, keys = TRUE)
#' # With row names
3 changes: 3 additions & 0 deletions R/auth.R
@@ -133,10 +133,13 @@ setupCrunchAuth <- function(id) {
if (is.null(key)) {
halt("Could not find key in `envOrOption('", paste0("crunch.api.key.", id), "')`")
}
# Allowed to be NULL
default_project <- envOrOption(paste0("crunch.default.project.", id))

set_crunch_opts(
crunch.api = api,
crunch.api.key = key,
crunch.default.project = default_project,
.source = paste0("setupCrunchAuth('", id, "')")
)
}
1 change: 1 addition & 0 deletions R/conditional-transform.R
@@ -67,6 +67,7 @@ conditionalTransform <- function(..., data, else_condition = NA, type = NULL,
formulas <- dot_formulas
}
var_def <- Filter(Negate(is_formula), dots)
if (!"derived" %in% names(var_def)) var_def$derived <- derivedVariableDefault()

if (length(formulas) == 0) {
halt(
4 changes: 2 additions & 2 deletions R/cut.R
@@ -37,7 +37,7 @@
#' it as a derived variable on the server.
#' @examples
#' \dontrun{
#' ds <- loadDataset("mtcars")
#' ds <- loadDataset("mtcars", project = "p1")
#' ds$cat_var <- cut(ds$mpg,
#' breaks = c(10, 15, 20),
#' labels = c("small", "medium"), name = "Fuel efficiency"
@@ -198,7 +198,7 @@ generateNumCutLabels <- function(dig.lab, breaks, nb, right, include.lowest) {
#' it as a derived variable on the server.
#' @examples
#' \dontrun{
#' ds <- loadDataset("example")
#' ds <- loadDataset("example", project = "client 1")
#' ds$month_cat <- cut(ds$date, breaks = "month", name = "monthly")
#' ds$four_weeks_cat <- cut(ds$date, breaks = "4 weeks", name = "four week categorical date")
#'
9 changes: 9 additions & 0 deletions R/dataset.R
@@ -560,3 +560,12 @@ setDashboardURL <- function(x, value) {
#' @rdname dashboard
#' @export
"dashboard<-" <- setDashboardURL


setMethod("rootFolder", "CrunchDataset", function(x) {
halt(
"Can't find root folder of a dataset. To find the root variable folder use ",
"`rootFolder(allVariables(ds))` or to find the root project folder use ",
"`rootFolder(folder(ds))`"
)
})
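The error message in the new method above points users at two working alternatives. A quick sketch of those intended calls (`ds` is an illustrative, already-loaded dataset):

```r
# rootFolder() on a dataset itself now errors with guidance; instead use:
rootFolder(allVariables(ds)) # root of the dataset's variable folder tree
rootFolder(folder(ds))       # root of the project folder tree containing the dataset
```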
7 changes: 5 additions & 2 deletions R/delete.R
@@ -161,11 +161,14 @@ setMethod("delete", "ANY", function(x, ...) {
#' `CrunchDataset`. Unless `x` is a parsed folder path, it can only be of
#' length 1--for your protection, this function is not vectorized.
#' @param ... additional parameters passed to [delete()]
#' @param project `ProjectFolder` entity, character name (path) to a project.
#' Defaults to the project set in `envOrOption('crunch.default.project')`
#' or "./" (the project root), if the default is not set.
#' @return (Invisibly) the API response from deleting the dataset
#' @seealso [delete()]; [cd()] for details of parsing and walking dataset
#' folder/project paths.
#' @export
deleteDataset <- function(x, ...) {
deleteDataset <- function(x, ..., project = defaultCrunchProject("./")) {
if (is.dataset(x)) {
return(delete(x, ...))
}
@@ -178,7 +181,7 @@
}
} else {
# Assume it is a path or name
found <- lookupDataset(x)
found <- lookupDataset(x, project = project)
if (length(found) != 1) {
halt(
dQuote(x), " identifies ", length(found),
10 changes: 6 additions & 4 deletions R/folders.R
@@ -32,7 +32,7 @@
#' `mkdir()` does not do
#' @examples
#' \dontrun{
#' ds <- loadDataset("Example survey")
#' ds <- loadDataset("Example survey", project = "Studies")
#' ds <- mv(ds, c("gender", "age", "educ"), "Demographics")
#' ds <- mkdir(ds, "Key Performance Indicators/Brand X")
#' # These can also be chained together
@@ -129,7 +129,7 @@ setName <- function(object, nm) {
#' directory in your local file system, which `cd()` does not do
#' @examples
#' \dontrun{
#' ds <- loadDataset("Example survey")
#' ds <- loadDataset("Example survey", project = "Studies")
#' demo <- cd(ds, "Demographics")
#' names(demo)
#' # Or with %>%
@@ -185,7 +185,7 @@ cd <- function(x, path, create = FALSE) {
#' from your local file system, which `rmdir()` does not do
#' @examples
#' \dontrun{
#' ds <- loadDataset("Example survey")
#' ds <- loadDataset("Example survey", project = "Studies")
#' rmdir(ds, "Demographics")
#' # Or with %>%
#' require(magrittr)
@@ -216,7 +216,7 @@ rmdir <- function(x, path) {
#' @export
#' @examples
#' \dontrun{
#' ds <- loadDataset("Example survey")
#' ds <- loadDataset("Example survey", project = "Studies")
#' folder(ds$income) <- "Demographics/Economic"
#' folder(ds$income)
#' ## [1] "Demographics" "Economic"
@@ -228,6 +228,8 @@ folder <- function(x) {
cls <- class(x)
} else if (is.variable(x)) {
cls <- "VariableFolder"
} else if (is.dataset(x)) {
cls <- "ProjectFolder"
} else {
halt("No folder for object of class ", class(x))
}
9 changes: 6 additions & 3 deletions R/fork-and-merge.R
@@ -24,13 +24,16 @@
#' @param draft logical: Should the dataset be a draft, visible only to
#' those with edit permissions? Default is `FALSE`.
#' @param ... Additional dataset metadata to provide to the fork
#' @param project A `ProjectFolder` object, string path that could be passed to [`cd()`]
#' relative to the root project, or a URL for a `ProjectFolder`. Defaults to the same
#' folder as the existing dataset.
#' @return The new fork, a `CrunchDataset`.
#' @seealso [mergeFork()]
#' @export
forkDataset <- function(dataset, name = defaultForkName(dataset), draft = FALSE, ...) {
forkDataset <- function(dataset, name = defaultForkName(dataset), draft = FALSE, ..., project = folder(dataset)) {

## TODO: add owner field, default to self(me())
fork_url <- crPOST(shojiURL(dataset, "catalogs", "forks"),
body = toJSON(wrapEntity(name = name, is_published = !draft, ...))
body = toJSON(wrapEntity(name = name, is_published = !draft, ..., project = resolveProjectURL(project)))
)
dropOnly(sessionURL("datasets"))
invisible(loadDatasetFromURL(fork_url))
@@ -73,7 +76,7 @@
#' @seealso [forkDataset()]
#' @examples
#' \dontrun{
#' ds <- loadDataset("My survey")
#' ds <- loadDataset("My survey", project = "Studies")
#' fork <- forkDataset(ds)
#' # Do stuff to fork
#' ds <- mergeFork(ds, fork)
69 changes: 41 additions & 28 deletions R/get-datasets.R
@@ -25,7 +25,7 @@
#' names()
#' # The assignment method lets you move a dataset to a project
#' proj <- cd(projects(), "Important Clients")
#' ds <- loadDataset("New important client survey")
#' ds <- loadDataset("New important client survey", project = "Studies")
#' datasets(proj) <- ds
#' }
datasets <- function(x = getAPIRoot()) {
@@ -116,42 +116,40 @@ listDatasets <- function(kind = c("active", "all", "archived"),
#' and analysis as if the dataset were fully resident on your computer, without
#' having to pull data locally.
#'
#' You can specify a dataset to load by its human-friendly "name", possibly also
#' by indicating a project (folder) to find it in. This makes code more
#' You can specify a dataset to load by its human-friendly "name", within
#' the project (folder) that contains it. This makes code more
#' readable, but it does mean that if the dataset is renamed or moved to a
#' different folder, your code may no longer work. The fastest, most reliable
#' way to use `loadDataset()` is to provide a URL to the dataset--the dataset's
#' URL will never change.
#'
#' @param dataset character, the name or path to a Crunch dataset to load, or a
#' @param dataset character, the path to a Crunch dataset to load, or a
#' dataset URL. If `dataset` is a path to a dataset in a project, the path will
#' be be parsed and walked, relative to `project` if specified, and the
#' function will look for the dataset inside that project. If no path is
#' specified and no `project` provided, the function will call a search API to
#' do an exact string match on dataset names.
#' be parsed and walked, relative to `project`, and the function will look
#' for the dataset inside that project. If `dataset` is just a string and `project`
#' is set to `NULL`, the function will assume that `dataset` is the dataset id.
#' @param kind character specifying whether to look in active, archived, or all
#' datasets. Default is "active", i.e. non-archived.
#' @param project `ProjectFolder` entity, character name (path) to a project, or
#' `NULL`, the default. If a Project entity or reference is supplied, either
#' here or as a path in `dataset`, the dataset lookup will be limited to that
#' project only.
#' @param project `ProjectFolder` entity, character name (path) to a project.
#' Defaults to the project set in `envOrOption('crunch.default.project')`
#' or "./" (the project root), if the default is not set.
#' @param refresh logical: should the function check the Crunch API for new
#' datasets? Default is `FALSE`.
#' @return An object of class `CrunchDataset`.
#'
#' @examples
#' \dontrun{
#' ds <- loadDataset("A special dataset", project = "Studies")
#' ds2 <- loadDataset("~/My dataset", project = "Studies")
#' ds3 <- loadDataset("My dataset", project = "~") # Same as ds2
#' ds <- loadDatasets("A special dataset", project = "Studies")
#' ds2 <- loadDatasets("~/My dataset", project = "Studies")
#' ds3 <- loadDataset("My dataset", project = projects()[["Studies"]]) # Same as ds2
#' ds4 <- loadDataset("https://app.crunch.io/api/datasets/bd3ad2/")
#' }
#' @export
#' @seealso See [cd()] for details of parsing and walking dataset folder/project
#' paths.
loadDataset <- function(dataset,
kind = c("active", "all", "archived"),
project = NULL,
project = defaultCrunchProject("."),
refresh = FALSE) {
if (inherits(dataset, "DatasetTuple")) {
return(entity(dataset))
Expand All @@ -174,6 +172,12 @@ loadDataset <- function(dataset,
archived = archived(found)
)
if (length(found) == 0) {
if (missing(project)) {
warn_once(
"Finding datasets by name without specifying a path is no longer supported.",
option = "find.dataset.no.project"
)
}
halt(dQuote(dataset), " not found")
}
## This odd selecting behavior handles the multiple matches case
@@ -239,27 +243,36 @@ lookupDataset <- function(x, project = NULL) {
# `project`
dspath <- parseFolderPath(x)
x <- tail(dspath, 1)

if (length(dspath) == 1 && is.null(project)) {
# If don't have a project, query by name
return(findDatasetsByName(x))
# This code path used to use the datasets by_name endpoint. However,
# as of 2024-11, that endpoint is no longer very useful because it only
# surfaces datasets that are in personal folders (going away very soon) and
# direct dataset shares (deprecated).
# So we use this path to load by dataset id, a nice convenience feature.
# To get here, a user had to explicitly set `project = NULL`, so they're
# presumably not here accidentally.
pseudo_shoji <- tryCatch({
ds_base_url <- absoluteURL("datasets/", envOrOption("crunch.api"))
ds_entity <- crGET(paste0(ds_base_url, "/", x))
# Need to make this pseudo DatasetCatalog to match old API call (because `loadDataset`
# will later pass it through `active()`/`archived()`)
structure(list(
self = ds_base_url,
index = setNames(list(ds_entity$body), ds_entity$self)
), class = "shoji")
}, error = function(...) NULL) # But if we don't find it just return empty catalog

return(DatasetCatalog(pseudo_shoji))
}

# Resolve `project`
if (is.null(project)) {
project <- projects()
} else if (!is.project(project)) {
## Project name, URL, or index
project <- projects()[[project]]
}
if (is.null(project)) {
## Means a project was specified (like by name) but it didn't exist
halt(
"Project ", deparseAndFlatten(eval.parent(Call$project)),
" is not valid"
)
project <- ProjectFolder(crGET(resolveProjectURL(project)))
}

# If there is a path in `x`, walk it within `project`
if (length(dspath) > 1) {
project <- cd(project, dspath[-length(dspath)])
}
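The lookup behavior changed in this file can be summarized with a few hedged examples (the dataset names, id, and project paths below are invented for illustration):

```r
# Name resolved within an explicit project folder
ds1 <- loadDataset("Tracking poll", project = "Client A/2024")

# Full path form, walked relative to the default project (the root, "./")
ds2 <- loadDataset("Client A/2024/Tracking poll")

# With project = NULL, a bare string is now treated as a dataset id
ds3 <- loadDataset("bd3ad2", project = NULL)

# Loading by URL remains the most robust option, since URLs never change
ds4 <- loadDataset("https://app.crunch.io/api/datasets/bd3ad2/")
```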
6 changes: 5 additions & 1 deletion R/make-array.R
@@ -241,7 +241,11 @@ makeMRFromText <- function(var,

# hide the original variable
var <- hide(var)
return(VariableDefinition(derivation = derivation, name = name, ...))
# Add default derived behavior
args <- list(derivation = derivation, name = name, ...)
if (!"derived" %in% names(args)) args$derived <- derivedVariableDefault()

return(do.call(VariableDefinition, args))
}

#' Create subvariable derivation expressions
Expand Down
Loading
Loading