---
title: "gofastr"
date: "`r format(Sys.time(), '%d %B, %Y')`"
output:
  md_document:
    toc: true
---
```{r, echo=FALSE}
desc <- suppressWarnings(readLines("DESCRIPTION"))
regex <- "(^Version:\\s+)(\\d+\\.\\d+\\.\\d+)"
loc <- grep(regex, desc)
ver <- gsub(regex, "\\2", desc[loc])
verbadge <- sprintf('<a href="https://img.shields.io/badge/Version-%s-orange.svg"><img src="https://img.shields.io/badge/Version-%s-orange.svg" alt="Version"/></a></p>', ver, ver)
verbadge <- ''
```
[![Project Status: Active - The project has reached a stable, usable state and is being actively developed.](http://www.repostatus.org/badges/latest/active.svg)](http://www.repostatus.org/#active)
[![Build Status](https://travis-ci.org/trinker/gofastr.svg?branch=master)](https://travis-ci.org/trinker/gofastr)
[![Coverage Status](https://coveralls.io/repos/trinker/gofastr/badge.svg?branch=master)](https://coveralls.io/r/trinker/gofastr?branch=master)
[![](http://cranlogs.r-pkg.org/badges/gofastr)](https://cran.r-project.org/package=gofastr)
`r verbadge`
```{r, echo=FALSE, message=FALSE}
library(knitr)
knit_hooks$set(htmlcap = function(before, options, envir) {
  if (!before) {
    paste('<p class="caption"><b><em>', options$htmlcap, "</em></b></p>", sep = "")
  }
})
knitr::opts_knit$set(self.contained = TRUE, cache = FALSE)
knitr::opts_chunk$set(fig.path = "tools/figure/")
```
![](tools/gofastr_logo/r_gofastr.png)
**gofastr** is designed to do one thing really well...make a `DocumentTermMatrix`. It harnesses the
power [**quanteda**](https://github.com/kbenoit/quanteda) (which in
turn wraps **data.table**, **stringi**, & **Matrix**) to quickly generate **tm** `DocumentTermMatrix` and `TermDocumentMatrix` data structures. There are two ways in which time is meaingingful to an analyst: (a) coding time, or the time spent writing code and (b) computational run time, or the time the computer takes to run the code. Ideally, we want to minimize both of these sources of time expenditures. The **gofaster** package is my attempt to reduce the time an analysts takes to turn raw text into an analysis ready data format and relies on **quanteda** to minimize the run time.
In my work I often get data in the form of large .csv files or SQL databases. Additionally, most of the higher-level analysis of text I undertake uses a `TermDocumentMatrix` or `DocumentTermMatrix` as the input data. Generally, the **tm** package's `Corpus` structure is an unnecessary step in building a usable data structure, one that requires additional coding and run time. **gofastr** skips this step and uses [**quanteda**](https://github.com/kbenoit/quanteda) to quickly make `DocumentTermMatrix` and `TermDocumentMatrix` structures that are fast to code up and fast for the computer to build.
# Function Usage
Functions typically fall into one of two task categories: matrix (1) *creation* & (2) *manipulation*. The main functions, task categories, & descriptions are summarized in the table below:
| Function | Category | Description |
|------------------------|--------------|------------------------------------------------------------------------|
| `q_tdm` & `q_tdm_stem` | creation | `TermDocumentMatrix` from string vector |
| `q_dtm` & `q_dtm_stem` | creation | `DocumentTermMatrix` from string vector |
| `remove_stopwords`     | manipulation | Remove stopwords and words below a minimum character length from `TermDocumentMatrix`/`DocumentTermMatrix` |
| `filter_words` | manipulation | Filter words from `TermDocumentMatrix`/`DocumentTermMatrix` |
| `filter_tf_idf` | manipulation | Filter low tf-idf words from `TermDocumentMatrix`/`DocumentTermMatrix` |
| `filter_documents` | manipulation | Filter documents from a `TermDocumentMatrix`/`DocumentTermMatrix` |
| `select_documents` | manipulation | Select documents from `TermDocumentMatrix`/`DocumentTermMatrix` |
| `sub_in_na` | manipulation | Sub missing (`NA`) for regex matches (default: non-content elements) |
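As a quick orientation before the full demonstration below, here is a minimal sketch chaining a *creation* function into a few *manipulation* functions (using the `presidential_debates_2012` data set that ships with the package):

```r
if (!require("pacman")) install.packages("pacman")
pacman::p_load(gofastr, magrittr)

## Create a DocumentTermMatrix, then clean it up
with(presidential_debates_2012, q_dtm(dialogue, paste(time, tot, sep = "_"))) %>%
    remove_stopwords() %>%    # drop stopwords, short words, & numbers
    filter_words(5) %>%       # drop words with counts below 5
    filter_documents()        # drop (near-)empty documents
```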
# Installation
To download the development version of **gofastr**:
Download the [zip ball](https://github.com/trinker/gofastr/zipball/master) or [tar ball](https://github.com/trinker/gofastr/tarball/master), decompress and run `R CMD INSTALL` on it, or use the **pacman** package to install the development version:
```r
if (!require("pacman")) install.packages("pacman")
pacman::p_load_gh("trinker/gofastr")
```
# Contact
You are welcome to:
* submit suggestions and bug-reports at: <https://github.com/trinker/gofastr/issues>
* send a pull request on: <https://github.com/trinker/gofastr/>
* compose a friendly e-mail to: <tyler.rinker@gmail.com>
# Demonstration
## Load Packages
```{r}
if (!require("pacman")) install.packages("pacman")
pacman::p_load(gofastr, tm, magrittr)
```
## DocumentTerm/TermDocument Matrices
```{r}
(w <- with(presidential_debates_2012, q_dtm(dialogue, paste(time, tot, sep = "_"))))
(x <- with(presidential_debates_2012, q_tdm(dialogue, paste(time, tot, sep = "_"))))
```
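The results are true **tm** objects (see Weighting below), so standard **tm** tooling applies to them directly. For example, a quick inspection sketch:

```r
class(w)                   # "DocumentTermMatrix" "simple_triplet_matrix"
tm::inspect(w[1:4, 1:8])   # peek at a small slice of the matrix
```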
## Stopwords
Stopwords are those words that we want to remove from the analysis because they provide little information gain. These words either occur so frequently across all documents or carry so little content (i.e., function words) that they are excluded. The `remove_stopwords` function allows the user to remove stopwords using three approaches/arguments:
1. `stopwords` - A vector of common + researcher-defined words (see the [**lexicon**](https://CRAN.R-project.org/package=lexicon) package)
2. `min.char`/`max.char` - Automatic removal of words less/greater than n characters in length
3. `denumber` - Removal of words that are numbers
By default `stopwords = tm::stopwords("english")`, `min.char = 3`, and `denumber = TRUE`.
```{r}
with(presidential_debates_2012, q_dtm(dialogue, paste(time, tot, sep = "_"))) %>%
remove_stopwords()
with(presidential_debates_2012, q_tdm(dialogue, paste(time, tot, sep = "_"))) %>%
remove_stopwords()
```
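The defaults can be overridden via the arguments listed above. A sketch (the added stopwords are merely illustrative):

```r
with(presidential_debates_2012, q_dtm(dialogue, paste(time, tot, sep = "_"))) %>%
    remove_stopwords(
        stopwords = c(tm::stopwords("english"), "governor", "president"),  # common + researcher-defined
        min.char  = 4,       # raise the minimum word length
        denumber  = TRUE     # drop words that are numbers
    )
```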
## Weighting
As the output from **gofastr**'s matrix creation functions is a true **tm** object, weighting is done in the standard way using **tm**'s built-in weighting functions, after the matrix has been created.
```{r}
with(presidential_debates_2012, q_dtm(dialogue, paste(time, tot, sep = "_"))) %>%
tm::weightTfIdf()
```
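Any other **tm** weighting function (e.g., `tm::weightBin` for presence/absence weighting) can be applied the same way:

```r
with(presidential_debates_2012, q_dtm(dialogue, paste(time, tot, sep = "_"))) %>%
    tm::weightBin()    # binary (presence/absence) weighting
```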
## Stemming
To stem words, use `q_dtm_stem` and `q_tdm_stem`, which rely on **SnowballC**'s stemmer under the hood.
```{r}
with(presidential_debates_2012, q_dtm_stem(dialogue, paste(time, tot, sep = "_"))) %>%
remove_stopwords()
```
## Manipulating via Words
### Filter Out Low Occurring Words
To filter out words with counts below a threshold we use `filter_words`.
```{r}
with(presidential_debates_2012, q_dtm(dialogue, paste(time, person, sep = "_"))) %>%
filter_words(5)
```
### Filter Out High/Low Frequency (low information) Words
To filter out words with high/low frequency across all documents (and thus low information), use `filter_tf_idf`. The default `min` uses the median of the *tf-idf* values, per Grün & Hornik's (2011) demonstration.
```{r}
with(presidential_debates_2012, q_dtm(dialogue, paste(time, person, sep = "_"))) %>%
filter_tf_idf()
```
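A numeric cutoff can also be supplied; the sketch below assumes `min` accepts a user-chosen threshold (the value here is purely illustrative):

```r
with(presidential_debates_2012, q_dtm(dialogue, paste(time, person, sep = "_"))) %>%
    filter_tf_idf(min = 0.1)   # assumed: keep terms whose tf-idf exceeds 0.1
```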
\*Grün, B. & Hornik, K. (2011). topicmodels: An R package for fitting topic models. *Journal of Statistical Software*, 40(13), 1-30. http://www.jstatsoft.org/article/view/v040i13/v40i13.pdf
## Manipulating via Documents
### Filter Out Low Occurring Documents
To filter out documents with word counts below a threshold, use `filter_documents`. Recall the warning from the weighting output above:
> `Warning message:`<br>`In tm::weightTfIdf(.) : empty document(s): time 1_88.1 time 2_52.1`
Here we use `filter_documents`' default (a document must have a row/column sum greater than 1) to eliminate the warning:
```{r}
with(presidential_debates_2012, q_dtm(dialogue, paste(time, tot, sep = "_"))) %>%
filter_documents() %>%
tm::weightTfIdf()
```
### Selecting Documents
To select only documents matching a regex, use the `select_documents` function. This is useful for selecting particular documents within the corpus; the second example below uses a negative look-ahead regex to keep every document *except* Romney's.
```{r}
with(presidential_debates_2012, q_dtm(dialogue, paste(time, person, sep = "_"))) %>%
select_documents('romney', ignore.case=TRUE)
with(presidential_debates_2012, q_dtm(dialogue, paste(time, person, sep = "_"))) %>%
select_documents('^(?!.*romney).*$', ignore.case = TRUE)
```
## Putting It Together
Of course we can chain matrix creation functions with several of the manipulation functions to quickly prepare data for analysis. Here I demonstrate preparing data for a topic model using **gofastr** and then running the analysis. Finally, I plot the results and use the **LDAvis** package to interact with them. Note that this is meant to demonstrate the types of analysis for which **gofastr** may be of use; the methods and parameters/hyperparameters were selected with little regard to optimizing the analysis.
```{r, fig.width=10, fig.height=10}
pacman::p_load(tm, topicmodels, dplyr, tidyr, gofastr, devtools, LDAvis, ggplot2)
## Source topicmodels2LDAvis function
devtools::source_url("https://gist.githubusercontent.com/trinker/477d7ae65ff6ca73cace/raw/79dbc9d64b17c3c8befde2436fdeb8ec2124b07b/topicmodels2LDAvis")
data(presidential_debates_2012)
## Generate Stopwords
stops <- c(
tm::stopwords("english"),
"governor", "president", "mister", "obama","romney"
) %>%
prep_stopwords()
## Create the DocumentTermMatrix
doc_term_mat <- presidential_debates_2012 %>%
with(q_dtm_stem(dialogue, paste(person, time, sep = "_"))) %>%
remove_stopwords(stops) %>%
filter_tf_idf() %>%
filter_words(4) %>%
filter_documents()
## Run the Model
lda_model <- topicmodels::LDA(doc_term_mat, 10, control = list(seed=100))
## Plot the Topics Per Person_Time
topics <- posterior(lda_model, doc_term_mat)$topics
topic_dat <- tibble::rownames_to_column(as.data.frame(topics), "Person_Time")
colnames(topic_dat)[-1] <- apply(terms(lda_model, 10), 2, paste, collapse = ", ")
gather(topic_dat, Topic, Proportion, -c(Person_Time)) %>%
separate(Person_Time, c("Person", "Time"), sep = "_") %>%
mutate(Person = factor(Person,
levels = c("OBAMA", "ROMNEY", "LEHRER", "SCHIEFFER", "CROWLEY", "QUESTION" ))
) %>%
ggplot(aes(weight=Proportion, x=Topic, fill=Topic)) +
geom_bar() +
coord_flip() +
facet_grid(Person~Time) +
guides(fill=FALSE) +
xlab("Proportion")
```
### LDAvis of Model
The output from **LDAvis** is not easily embedded within an R Markdown document, so the reader will need to run the code below to interact with the results.
```{r, eval=FALSE}
lda_model %>%
topicmodels2LDAvis() %>%
LDAvis::serVis()
```
## Comparing Timings
On a smallish data set of `r nrow(presidential_debates_2012)` rows, these are the time comparisons between **gofastr** and **tm** using `Sys.time`. Notice that **gofastr** runs faster (the creation of a corpus is expensive) and requires significantly less code.
```{r}
pacman::p_load(gofastr, tm)
pd <- as.data.frame(presidential_debates_2012, stringsAsFactors = FALSE)
## tm Timing
tic <- Sys.time()
rownames(pd) <- paste("docs", 1:nrow(pd))
pd[['groups']] <- with(pd, paste(time, tot, sep = "_"))
pd <- Corpus(DataframeSource(setNames(pd[, 5:6, drop=FALSE], c('text', 'doc_id'))))
(out <- DocumentTermMatrix(pd,
control = list(
tokenize=scan_tokenizer,
stopwords=TRUE,
removeNumbers = TRUE,
removePunctuation = TRUE,
wordLengths=c(3, Inf)
)
) )
difftime(Sys.time(), tic)
## gofastr Timing
tic <- Sys.time()
x <- with(presidential_debates_2012, q_dtm(dialogue, paste(time, tot, sep = "_")))
remove_stopwords(x)
difftime(Sys.time(), tic)
```
### With Stemming
```{r}
pacman::p_load(gofastr, tm)
pd <- as.data.frame(presidential_debates_2012, stringsAsFactors = FALSE)
## tm Timing
tic <- Sys.time()
rownames(pd) <- paste("docs", 1:nrow(pd))
pd[['groups']] <- with(pd, paste(time, tot, sep = "_"))
pd <- Corpus(DataframeSource(setNames(pd[, 5:6, drop=FALSE], c('text', 'doc_id'))))
pd <- tm_map(pd, stemDocument)
(out <- DocumentTermMatrix(pd,
control = list(
tokenize=scan_tokenizer,
stopwords=TRUE,
removeNumbers = TRUE,
removePunctuation = TRUE,
wordLengths=c(3, Inf)
)
) )
difftime(Sys.time(), tic)
## gofastr Timing
tic <- Sys.time()
x <- with(presidential_debates_2012, q_dtm_stem(dialogue, paste(time, tot, sep = "_")))
remove_stopwords(x, stem=TRUE)
difftime(Sys.time(), tic)
```
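For timings that are more stable than a single `Sys.time` delta, a dedicated benchmarking package could be used instead; a sketch assuming the **microbenchmark** package:

```r
pacman::p_load(gofastr, magrittr, microbenchmark)

microbenchmark::microbenchmark(
    gofastr = with(presidential_debates_2012,
        q_dtm(dialogue, paste(time, tot, sep = "_"))) %>%
        remove_stopwords(),
    times = 10    # repeat to average out run-to-run noise
)
```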