Commit
Merge branch 'dev-rpackage' into 'master'
v0.0.0.9004

See merge request eoc_foundation_wip/analysis-pipelines!11
naren1991 committed Dec 3, 2018
2 parents 5a6365e + 1d564d9 commit cbffda3
Showing 70 changed files with 1,163 additions and 733 deletions.
1 change: 1 addition & 0 deletions .Rbuildignore
@@ -1,2 +1,3 @@
^.*\.Rproj$
^\.Rproj\.user$
+ data-raw/
1 change: 1 addition & 0 deletions .gitignore
@@ -3,6 +3,7 @@
.RData
.Ruserdata
metastore_db/
+ inst/python/.ipynb_checkpoints/
.DS_Store
vignettes/metastore_db/
vignettes/*.RDS
16 changes: 8 additions & 8 deletions DESCRIPTION
@@ -1,20 +1,19 @@
Package: analysisPipelines
Type: Package
- Title: Compose interoperable analysis pipelines, and put them into production
- Version: 0.0.0.9003
+ Title: This Package Allows Data Scientists to Compose Interoperable Analysis Pipelines, and Put Them into Production
+ Version: 0.0.0.9004
Authors@R: c(
person("Naren","Srinivasan", email = "Naren.Srinivasan@mu-sigma.com", role = c("aut")),
person("Naren","Srinivasan", email = "naren1991@gmail.com", role = c("aut")),
person("Zubin Dowlaty","", email = "Zubin.Dowlaty@mu-sigma.com", role = c("ctb")),
person("Sanjay","", email = "Sanjay@mu-sigma.com", role = c("ctb")),
person("Neeratyoy","Mallik", email = "Neeratyoy.Mallik@mu-sigma.com", role = c("ctb")),
person("Anoop S","", email = "Anoop.S@mu-sigma.com", role = c("ctb")),
person("Mu Sigma, Inc.", email = "ird.experiencelab@mu-sigma.com", role = c("cre"))
)
- Description: The package aims at enabling data scientists to compose pipelines of analysis which consist of data manipulation, exploratory analysis & reporting, as well as modeling steps. It also aims to enable data scientists to use tools of their choice through an R interface, and compose interoperable pipelines between R, Spark, and Python. Credits to Mu Sigma for supporting the development of the package.
- Depends: R (>= 3.4.0), tibble, magrittr, data.table, pipeR, devtools
- Imports: ggplot2, dplyr, futile.logger, RCurl, proto
- Suggests: plotly, knitr, rmarkdown, SparkR, parallel, visNetwork, rjson, DT, shiny
- Remotes: github::cran/SparkR
+ Description: This package aims at enabling data scientists to compose pipelines of analysis which consist of data manipulation, exploratory analysis & reporting, as well as modeling steps. It also aims to enable data scientists to use tools of their choice through an R interface, and compose interoperable pipelines between R, Spark, and Python. Credits to Mu Sigma for supporting the development of the package.
+ Depends: R (>= 3.4.0), magrittr, pipeR, methods
+ Imports: ggplot2, dplyr, futile.logger, RCurl, proto, rlang, purrr, devtools
+ Suggests: plotly, knitr, rmarkdown, parallel, visNetwork, rjson, DT, shiny, R.devices, corrplot, reticulate
Encoding: UTF-8
License: Apache License 2.0
LazyLoad: yes
@@ -28,5 +27,6 @@ Collate:
'core-functions-meta-pipelines.R'
'core-streaming-functions.R'
'r-batch-eda-utilities.R'
+ 'r-helper-utilites-python.R'
'spark-structured-streaming-utilities.R'
'zzz.R'
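
As context for the reworked Description and dependency list, the sketch below illustrates the composition workflow it refers to: wrap a data frame in a pipeline object, chain exported functions with pipeR's `%>>%`, and materialize the results. This is not part of the diff; the constructor call and all argument names are assumptions based on the exports visible in this commit.

```r
library(analysisPipelines)

# Wrap a dataset in a pipeline object; the class name matches the package's
# core class, but the constructor's argument list here is an assumption
pipelineObj <- AnalysisPipeline(input = iris)

# Chain an exported EDA function via pipeR's %>>% and execute the pipeline;
# 'uniCol' is assumed to name the categorical column to plot
pipelineObj <- pipelineObj %>>%
  univarCatDistPlots(uniCol = "Species") %>>%
  generateOutput()

# Retrieve a step's result by its identifier (getOutputById is exported
# in the NAMESPACE below)
distPlot <- pipelineObj %>>% getOutputById("1")
```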
24 changes: 22 additions & 2 deletions NAMESPACE
@@ -15,13 +15,18 @@ export(exportAsMetaPipeline)
export(generateReport)
export(genericPipelineException)
export(getDatatype)
export(getFeaturesForPyClassification)
export(getInput)
export(getLoggerDetails)
export(getOutputById)
export(getPipeline)
export(getPipelinePrototype)
export(getRegistry)
export(getResponse)
export(getTargetForPyClassification)
export(getTerm)
export(ignoreCols)
export(isDependencyParam)
export(loadMetaPipeline)
export(loadPipeline)
export(loadPredefinedFunctionRegistry)
@@ -34,6 +39,7 @@ export(savePipeline)
export(saveRegistry)
export(setInput)
export(setLoggerDetails)
export(setPythonEnvir)
export(sparkRSessionCreateIfNotPresent)
export(univarCatDistPlots)
export(updateObject)
@@ -44,5 +50,19 @@ exportClasses(MetaAnalysisPipeline)
exportClasses(StreamingAnalysisPipeline)
exportMethods(checkSchemaMatch)
exportMethods(generateOutput)
exportMethods(initialize)
import(SparkR)
importFrom(graphics,image)
importFrom(magrittr,"%>%")
importFrom(methods,getClass)
importFrom(methods,new)
importFrom(methods,removeMethod)
importFrom(methods,setClassUnion)
importFrom(methods,setGeneric)
importFrom(methods,setOldClass)
importFrom(pipeR,"%>>%")
importFrom(rlang,.data)
importFrom(stats,as.formula)
importFrom(stats,lm)
importFrom(stats,reorder)
importFrom(stats,terms)
importFrom(utils,installed.packages)
importFrom(utils,read.csv)
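
Three of the new exports (`setPythonEnvir`, `getFeaturesForPyClassification`, `getTargetForPyClassification`) form the Python-interop surface added in this release alongside the new reticulate suggestion. A minimal sketch of how they might combine, with all argument names assumed rather than taken from this diff:

```r
library(analysisPipelines)

# Point reticulate at a Python environment; both argument names here
# are assumptions
setPythonEnvir(type = "conda", pathOrEnvirName = "base")

# Reshape an R data frame into feature and target inputs for a Python
# classifier (e.g. scikit-learn via reticulate); column names refer to iris
features <- getFeaturesForPyClassification(
  dataset = iris,
  featureNames = c("Sepal.Length", "Sepal.Width")
)
target <- getTargetForPyClassification(
  dataset = iris,
  targetVarName = "Species",
  positiveClass = "setosa"
)
```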
3 changes: 3 additions & 0 deletions R/analysisPipelines_package.R
@@ -3,6 +3,9 @@
#' The package aims at enabling data scientists to compose pipelines of analysis which consist of data manipulation,
#' exploratory analysis & reporting, as well as modeling steps. It also aims to enable data scientists to use tools
#' of their choice through an R interface, and compose interoperable pipelines between R, Spark, and Python.
+ #'
+ #' Important Note - This package uses 'SparkR' to interact with Spark, and automatically installs it
+ #' from a GitHub repository if it is not present, as 'SparkR' is not distributed on CRAN.
#' @docType package
#' @name analysisPipelines
NULL
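
Given the note above about 'SparkR' being installed from GitHub on demand, a Spark-backed pipeline would typically be bootstrapped with the exported helper below. This is a sketch: the master URL is a placeholder, and the forwarding of arguments to sparkR.session is an assumption.

```r
library(analysisPipelines)

# Start a SparkR session only if one is not already active; "local[2]" is
# a placeholder master URL for a two-core local Spark instance
sparkRSessionCreateIfNotPresent(master = "local[2]")
```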