---
title: "Product sales prediction"
author: "Beatriz Estrella"
date: "26/10/2020"
output:
  pdf_document:
    toc: TRUE
    toc_depth: 3
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE, error = FALSE, message = FALSE, warning = FALSE, fig.width = 6, fig.height = 3)
options(tinytex.verbose = TRUE)
```
# 1. INTRODUCTION <a name="introduction"></a>
## 1.1. Project goal <a name="goal"></a>
The motivation for this project is to complete the *Choose your own!* project submission of the HarvardX Data Science: Capstone course (PH125.9x).
The project aims to predict the units sold of each specific product. The dataset was downloaded from [Kaggle](https://www.kaggle.com/jmmvutu/summer-products-and-sales-in-ecommerce-wish) and consists of data from the platform [wish](https://www.wish.com). *Wish* is a retailer that sells millions of products to customers in an e-commerce marketplace. The data covers the products listed under the tag "summer" during August 2020.
We want to see which features can predict the number of units a specific product will sell, and the accuracy we obtain with different machine learning algorithms.
This is a multinomial classification problem, as units sold will be treated as a categorical value due to the low number of distinct values this column takes in the dataset. Accordingly, we will use machine learning algorithms that can predict under this assumption.
Apart from the main dataset (stored as a data frame in the `main` variable), we have a support dataset that contains the main tag categories and a count of how many times each is used on the platform, from which we can deduce that the higher the count, the more relevant the tag (stored as a data frame in the `cat` variable).
Datasets, .R code and all relevant documentation regarding this project can be found online at [github.com/beatrizeg/Wish-Units-Solds](https://github.com/beatrizeg/Wish-Units-Solds).
```{r load Data and libraries, echo=FALSE}
if(!require(tidyverse)) install.packages("tidyverse", repos = "http://cran.us.r-project.org")
if(!require(caret)) install.packages("caret", repos = "http://cran.us.r-project.org")
if(!require(stringr)) install.packages("stringr", repos = "http://cran.us.r-project.org")
if(!require(purrr)) install.packages("purrr", repos = "http://cran.us.r-project.org")
if(!require(ggplot2)) install.packages("ggplot2", repos = "http://cran.us.r-project.org")
if(!require(corrplot)) install.packages("corrplot", repos = "http://cran.us.r-project.org")
if(!require(forcats)) install.packages("forcats", repos = "http://cran.us.r-project.org")
if(!require(rattle)) install.packages("rattle", repos = "http://cran.us.r-project.org")
if(!require(xgboost)) install.packages("xgboost", repos = "http://cran.us.r-project.org")
if(!require(klaR)) install.packages("klaR", repos = "http://cran.us.r-project.org")
if(!require(h2o)) install.packages("h2o", repos = "http://cran.us.r-project.org")
if(!require(knitr)) install.packages("knitr", repos = "http://cran.us.r-project.org")
if(!require(tictoc)) install.packages("tictoc", repos = "http://cran.us.r-project.org")
library(tidyverse)
library(stringr)
library(purrr)
library(caret)
library(ggplot2)
library(corrplot)
library(forcats)
library(rattle)
library(xgboost)
library(klaR)
library(knitr)
library(tictoc)
#loading databases
url_main <- "https://raw.githubusercontent.com/beatrizeg/Wish-Units-Solds/main/summer-products-with-rating-and-performance_2020-08.csv"
dest_file <- "./main.csv"
download.file(url_main, destfile = dest_file)
main <- read_csv("./main.csv")
url_cat <- "https://raw.githubusercontent.com/beatrizeg/Wish-Units-Solds/main/unique-categories.sorted-by-count.csv"
dest_file_cat <- "./cat.csv"
download.file(url_cat, destfile = dest_file_cat)
cat <- read_csv("./cat.csv")
main <- as.data.frame(main)
cat <- as.data.frame(cat)
```
## 1.2. Inspecting the dataset <a name="inspect"></a>
Now we proceed to inspect the main dataset downloaded directly from Kaggle. Using the `dim` and `summary` functions we see:
```{r inspect}
dim(main)
summary(main)
```
The dataset consists of 1573 rows and 43 columns. We see as well that we have character and numeric values and that we also have some NAs in the dataset.
The first thing to take into account is that having only ~1.5k data points is not a lot. This will probably keep the accuracy of the predictions from being very high; we will be able to check on this later on.
To check which columns give us NAs we use:
```{r nas}
nas <- apply(main, 2, function(x) any(is.na(x)))
knitr::kable(nas[which(nas)], caption = "Check columns with NAs")
```
So we proceed to replace the NAs in the *rating_<X>_count* columns with 0, and also those in *has_urgency_banner* with 0, as 0 represents *FALSE*.
```{r nas treat}
main <- main %>% mutate(rating_five_count=ifelse(is.na(rating_five_count),0,rating_five_count),
rating_four_count=ifelse(is.na(rating_four_count),0,rating_four_count),
rating_three_count=ifelse(is.na(rating_three_count),0,rating_three_count),
rating_two_count=ifelse(is.na(rating_two_count),0,rating_two_count),
rating_one_count=ifelse(is.na(rating_one_count),0,rating_one_count),
has_urgency_banner=ifelse(is.na(has_urgency_banner),0,has_urgency_banner))
```
We see that we still get NAs in the columns *product_color*, *product_variation_size_id*, *urgency_text*, *origin_country*, *merchant_name*, *merchant_info_subtitle* and *merchant_profile_picture*. We will deal with these later: most of these features will not be included in the model, except for *product_color*, *product_variation_size_id* and *merchant_profile_picture*, which will be tidied up later on.
Lastly, it is important to note that the *product_id* is the unique differentiator of each product. We will see later the reason for a few of them being duplicated and solve the issue.
## 1.3. Libraries <a name="libraries"></a>
All of the following libraries will be loaded:
```{r libraries, echo=TRUE, eval=FALSE}
library(tidyverse)
library(stringr)
library(purrr)
library(caret)
library(ggplot2)
library(corrplot)
library(forcats)
library(rattle)
library(xgboost)
library(klaR)
library(h2o)
library(knitr)
library(tictoc)
```
# 2. METHOD AND ANALYSIS <a name="analysis"></a>
## 2.1. Exploration of the dataset <a name="exploration"></a>
Now we proceed to inspect the different features/variables in the dataset, taking into account that the value we want to be able to predict is *units_sold*.
### 2.1.1. Checking features variability and adjusting <a name="variability"></a>
#### *product_color*
First of all we are going to study the different categories under the predictor of *product_color*. We can see all the different colors we get by showing the table and histogram below.
```{r product_color}
knitr::kable(table(main$product_color) %>% sort(decreasing = TRUE), caption = "Product color values")
```
We see that most of the categories apply to only 2 or fewer *product_id*s, so we will group this feature into main color categories in order to provide the algorithms with more valuable data.
```{r color class, fig.width=7}
main <- main %>% mutate(product_color=
as.factor(case_when(
str_detect(product_color, "&") ~ "two colors",
str_detect(product_color, "blue") ~ "blue",
str_detect(product_color, "navy") ~ "blue",
str_detect(product_color, "green") ~ "green",
str_detect(product_color, "red") ~ "red",
str_detect(product_color, "gray") ~ "grey",
str_detect(product_color, "grey") ~ "grey",
str_detect(product_color, "coffee") ~ "brown",
str_detect(product_color, "brown") ~ "brown",
str_detect(product_color, "pink") ~ "pink",
str_detect(product_color, "rose") ~ "pink",
str_detect(product_color, "black") ~ "black",
str_detect(product_color, "white") ~ "white",
str_detect(product_color, "purple") ~ "purple",
str_detect(product_color, "orange") ~ "orange",
str_detect(product_color, "multicolor") ~ "multicolor",
str_detect(product_color, "yellow") ~ "yellow",
TRUE ~ "other")))
main %>% ggplot(aes(product_color))+geom_bar()
```
Now we get just a few main categories, which allows us to treat this feature as a categorical value.
#### *product_variation_size_id*
We are going to do the same exercise with *product_variation_size_id* as we did with *product_color*. The table below shows its high variability and how little information it provides as given in the dataset.
```{r product_size}
knitr::kable(table(main$product_variation_size_id) %>% sort(decreasing = TRUE),
caption = "Product size values")
```
We again reassign the values to the main categories and treat the variable as categorical, which provides much more information to the algorithms.
```{r size reassign, echo=FALSE}
main <- main %>% mutate(product_variation_size_id=
as.factor(case_when(product_variation_size_id=="XXXS" ~ "XXXS",
product_variation_size_id=="XXS" ~ "XXS",
product_variation_size_id=="XS" |
product_variation_size_id=="XS." |
product_variation_size_id=="SIZE XS" |
product_variation_size_id=="Size-XS" ~ "XS",
product_variation_size_id=="S" |
product_variation_size_id=="S." |
product_variation_size_id=="s" |
product_variation_size_id=="Size S" |
product_variation_size_id=="Size-S" |
product_variation_size_id=="size S" |
product_variation_size_id=="Size S." |
product_variation_size_id=="S Pink" |
product_variation_size_id=="Suit-S"~ "S",
product_variation_size_id=="M" |
product_variation_size_id=="M."~ "M",
product_variation_size_id=="L" |
product_variation_size_id=="SizeL" ~ "L",
product_variation_size_id=="XL" ~ "XL",
product_variation_size_id=="XXL" |
product_variation_size_id=="2XL" ~ "XXL",
product_variation_size_id=="XXXL" ~ "XXXL",
product_variation_size_id=="4XL" ~ "4XL",
TRUE ~ "other")))
```
```{r size graph}
main %>% ggplot(aes(product_variation_size_id))+geom_bar()
```
#### *origin_country*
We again see high variability for *origin_country*, so we reassign the minority values and convert the variable to a factor.
```{r ocountry, echo=FALSE, eval=TRUE}
table(main$origin_country) %>% sort(decreasing = TRUE)
main <- main %>% mutate(
origin_country=as.factor(case_when(
origin_country == "CN" | origin_country == "US" ~ origin_country,
TRUE ~ "other"
)))
main %>% ggplot(aes(origin_country))+geom_bar()
```
#### *currency_buyer*
We check that there is only one currency in the dataset and that no unification of units is needed.
```{r currency}
n_distinct(main$currency_buyer)
```
#### *units_sold*
Now we study the characteristics of the variable we want to predict.
```{r sold}
knitr::kable(table(main$units_sold) %>% sort(decreasing = TRUE), caption = "Units sold values and frequency")
```
We see that there are only 15 different values, and in fact six of them are below 10, so we can group those into a single category, leaving 9 different categories, and treat the project as a classification problem. This is what we will do.
```{r sold graphs, echo=FALSE}
main <- main %>% mutate(units_sold = ifelse(units_sold<10, 10, units_sold))
main %>% ggplot(aes(factor(units_sold)))+geom_bar() + xlab("units_sold")
```
#### *product_id*
As we commented in the introduction, we can easily check that there are fewer unique *product_id* values than rows in the dataset. In fact, there are 1341 different *product_id*s and 1573 rows.
```{r duplicates}
n_distinct(main$product_id)
```
We examine a duplicated *product_id* (e.g. "5577faf03cef83230c39d0c3") to see the differences between its rows:
```{r exam dup}
knitr::kable(main %>% group_by(product_id) %>% summarize(n=n()) %>% arrange(desc(n))
%>% head(10), caption = "Examining duplicates")
```
We can see that almost every column takes the same value, except for *has_urgency_banner*. Thus we can deduce that this feature was changed for the *product_id* during the month and a new row was created. As the impact is minimal and we do not know which of the rows was active for most of the month, we simply delete the duplicated rows:
```{r remove dup}
main <- distinct(main, product_id, .keep_all = TRUE)
```
### 2.1.2. Assigning classes to features and calculating % stars rating instead of total count <a name="classes"></a>
Now that we have studied the variability of the features that had class "character" and reassigned them class "factor" so the algorithms work better, a few more predictors that take few distinct values are converted to factor as well. We also saw that some features take only the values 0 or 1, so we change those to logical class.
For the *rating_<X>_count* columns, instead of keeping the total count we keep the proportion, dividing each star count by the total number of ratings.
```{r classes}
main <- main %>% mutate(currency_buyer=as.factor(currency_buyer),
badges_count=as.factor(badges_count),
uses_ad_boosts=as.logical(uses_ad_boosts),
badge_local_product=as.logical(badge_local_product),
badge_product_quality=as.logical(badge_product_quality),
badge_fast_shipping=as.logical(badge_fast_shipping),
shipping_option_price=as.factor(shipping_option_price),
shipping_is_express=as.logical(shipping_is_express),
has_urgency_banner=as.logical(has_urgency_banner),
merchant_has_profile_picture=as.logical(merchant_has_profile_picture),
inventory_total=as.factor(inventory_total))
main <- main %>% mutate(rating_five_count=rating_five_count/rating_count,
rating_four_count=rating_four_count/rating_count,
rating_three_count=rating_three_count/rating_count,
rating_two_count=rating_two_count/rating_count,
rating_one_count=rating_one_count/rating_count)
main <- main %>% mutate(rating_five_count=ifelse(is.na(rating_five_count),0,rating_five_count),
rating_four_count=ifelse(is.na(rating_four_count),0,rating_four_count),
rating_three_count=ifelse(is.na(rating_three_count),0,rating_three_count),
rating_two_count=ifelse(is.na(rating_two_count),0,rating_two_count),
rating_one_count=ifelse(is.na(rating_one_count),0,rating_one_count))
```
### 2.1.3. Introducing tags model <a name="tags"></a>
In the main dataset, there is a column named *tags* that includes all tags given to each *product_id*. The cat dataset has two columns: *keyword* holds each tag and *count* gives the number of times it appears in the main dataset. We can assume that the keywords that appear the most are more relevant than those that appear less often. Thus we create a new column in the cat dataset, named *cat_n*, with a number from 1 to 4 that gives more weight to the most popular tags, as in the code below. The goal is to have a column that provides a relevance score for the tags used on each *product_id*.
```{r cat_n}
cat <- cat %>% mutate(cat_n =
case_when(count>=1000 ~ 4,
count<1000 & count>=500 ~ 3,
count<500 & count>=200 ~ 2,
count < 200 ~ 1,
TRUE ~ 0))
```
Once we have this, we want to transfer the information to the main dataset. The *tags* column contains all tags separated by commas, so we first split the tags into separate columns, getting 41 columns. Then we replace each tag with the *cat_n* value assigned in the cat dataset. Once the values are of *numeric* class, we sum all tag values for each row (i.e. each *product_id*) and bind this new column, named *n_tags*, to the main dataset.
```{r n_tags}
main_tags <- str_split(main$tags, ",", simplify = TRUE)
for (i in 1:41){
main_tags[,i] <- with(cat, cat_n[match(main_tags[,i], keyword)])
} #next step change to numeric values
main_tags <- as.data.frame(main_tags)
main_tags[] <- lapply(main_tags, function(x) as.numeric(as.character(x)))
main_tags <- main_tags %>% mutate(n_tags = rowSums(main_tags, na.rm=TRUE)) %>% dplyr::select(n_tags)
main_m <- bind_cols(main, main_tags)
```
Lastly, we discard the 41 tag columns that were created and keep only the columns that can provide relevant information *(price, retail_price, units_sold, uses_ad_boosts, rating, rating_count, rating_five_count, rating_four_count, rating_three_count, rating_two_count, rating_one_count, badges_count, badge_local_product, badge_product_quality, badge_fast_shipping, product_color, product_variation_size_id, product_variation_inventory, shipping_option_price, shipping_is_express, countries_shipped_to, inventory_total, has_urgency_banner, origin_country, merchant_rating_count, merchant_rating, merchant_has_profile_picture, product_id, n_tags)*, also dropping columns such as *title*, *merchant_name*, *product_url*, etc.
```{r select, echo=FALSE}
main_m <- main_m %>% dplyr::select(price, retail_price, units_sold, uses_ad_boosts, rating, rating_count, rating_five_count, rating_four_count, rating_three_count, rating_two_count, rating_one_count, badges_count, badge_local_product, badge_product_quality, badge_fast_shipping,
product_color, product_variation_size_id, product_variation_inventory,
shipping_option_price, shipping_is_express, countries_shipped_to, inventory_total,
has_urgency_banner, origin_country, merchant_rating_count, merchant_rating,
merchant_has_profile_picture, product_id, n_tags)
```
### 2.1.4. Predictors that do not vary across sample <a name="novar"></a>
We now do a quick check for predictors that keep close to the same value across the whole sample, to evaluate whether we should drop them. We use:
```{r no_var}
no_var <- nearZeroVar(main_m, saveMetrics = TRUE)
knitr::kable(no_var[no_var[,"zeroVar"] + no_var[,"nzv"] > 0, ],
caption = "Predictors with near zero variability")
```
As we can see, there are no predictors with zero variance, but there are 5 predictors with near-zero variance. Of those five, only *inventory_total* takes a single value for 74.5% of the *product_id*s; the rest show more variation. As we are not sure how a low value in this column might impact units sold, we decide to keep all of them.
### 2.1.5. Adding *perc_price* column <a name="perc"></a>
From the dataset we see that we have a *price* column and a *retail_price* column. *Price* is the price at which the product is sold on the platform, and *retail_price* is the price that similar products have on this or other platforms; we can think of it as a benchmark price. To capture any possible discount effect, we add a new column that measures whether the price is higher or lower than this benchmark.
```{r perc_price}
main_m <- main_m %>% mutate(perc_price=(price-retail_price)/retail_price)
```
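For example, a product priced at 8 EUR with a retail price of 10 EUR gets perc_price = (8 - 10)/10 = -0.2, i.e. it is sold 20% below the benchmark.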
## 2.2. Studying correlation between variables <a name="cor"></a>
To study correlation between variables we are going to use two different approaches. First, we build the correlation matrix for the numeric variables.
```{r cor, fig.height=6}
main_m.cor <- main_m %>% mutate(units_sold=as.numeric(units_sold)) %>%
dplyr::select_if(is.numeric) %>%
cor(.)
corrplot(main_m.cor, type="lower", tl.cex = 0.5)
```
We can see that units sold are highly correlated with the total number of ratings, and therefore with the count of ratings for each star, but there is no strong relation between more units sold and a higher rating or a higher % of five-star ratings. There is also a correlation with the merchant's number of ratings and its rating, and a slight positive correlation with the *product_variation_inventory* variable.
To study association between non-numerical variables, we perform a Chi-squared test of independence and look at the p-values. Two variables are independent if the probability distribution of one is not affected by the value of the other.
In this case, the null hypothesis is that each variable is independent of units sold. We will test this hypothesis at a significance level of 0.05.
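As a reminder of the standard definition (not specific to this dataset), the test statistic compares the observed contingency-table counts $O_{ij}$ with the counts $E_{ij}$ expected under independence:
$$\chi^2 = \sum_{i,j} \frac{(O_{ij}-E_{ij})^2}{E_{ij}}$$
Large values of the statistic (small p-values) indicate that the two variables are unlikely to be independent.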
```{r chisq}
main_m.chisq <- main_m %>%
dplyr::select_if(function(col) is.character(col) |
is.factor(col) | is.logical(col) |
all(col == .$units_sold)) %>% dplyr::select(-product_id)
columns <- 1:ncol(main_m.chisq)
vars <- names(main_m.chisq)[columns]
out <- apply( combn(columns,2),2,function(x){
chisq.test(table(main_m.chisq[,x[1]],main_m.chisq[,x[2]]),correct=F)$p.value
})
out <- cbind(as.data.frame(t(combn(vars,2))),out)
out_dep <- out %>% filter(V1=="units_sold") %>% filter(out<0.05) %>%arrange(out)
knitr::kable(out_dep, caption = "Dependent categorical variables from units_sold")
```
We get p-values below 0.05 for 6 variables, so in these cases we can reject the null hypothesis and assume they are not independent of units_sold.
The remaining 7 categorical predictors can be assumed independent and not to affect units sold. We will be able to check on this when we study variable importance in the machine learning algorithms.
```{r independent}
out_ind <- out %>% filter(V1=="units_sold") %>% filter(out>=0.05) %>% arrange(out)
knitr::kable(out_ind, caption = "Independent categorical variables from units sold")
```
## 2.3. Checking predictors effect - graphs <a name="graphs"></a>
In this section we will show different graphs with the relationship between units sold and other predictors.
**product_color**
```{r color graph, echo=FALSE}
main_m %>%
ggplot(aes(fct_infreq(product_color), units_sold)) + geom_bar(stat = "identity") +
ggtitle("Product Color") + xlab("product_color")
```
Black and white are the colors most sold on the platform.
**product_size_id**
```{r size_id graph, echo=FALSE}
main_m %>%
ggplot(aes(fct_infreq(product_variation_size_id), units_sold)) + geom_bar(stat = "identity") +
ggtitle("Product Size") + xlab("product_size_id")
```
S is the most common size among the units sold.
**price**
```{r price graph, echo=FALSE}
main_m %>%
ggplot(aes(price, units_sold)) + geom_smooth() +
ggtitle("Price") + xlab("price")
```
Prices between 5 and 10 EUR account for most units sold, and high prices are rare, which is why the smooth has such a wide confidence band at those prices.
**uses_ad_boosts**
```{r ad_boosts graph, echo=FALSE}
main_m %>%
ggplot(aes(uses_ad_boosts, as.numeric(units_sold))) + geom_bar(stat="identity") +
ggtitle("Uses Ad boosts") + ylab("units_sold")
```
We can see that most units sold correspond to products that do not use ad boosts.
**5 star %**
```{r 5star graph, echo=FALSE}
main_m %>%
ggplot(aes(rating_five_count, units_sold)) + geom_smooth() +
ggtitle("Percentage of 5 stars") + xlab("perc 5*")
```
Here, the plot shows that most units are sold by products with around 50% five-star ratings.
**1 star %**
```{r 1star graph, echo=FALSE}
main_m %>%
ggplot(aes(rating_one_count, units_sold)) + geom_smooth() +
ggtitle("Percentage of 1 star") + xlab("perc 1*")
```
In contrast with the 5-star graph, we can see that most units are sold by products with a low percentage of one-star ratings.
Lastly, as we commented before, we change the class of *units_sold* to factor, so that it is categorical.
```{r factor}
levels <- c("10", "50", "100", "1000", "5000", "10000", "20000", "50000", "1e+05")
main_p <- main_m %>% mutate(units_sold = factor(units_sold, levels=levels))
```
## 2.4. Creation of train and test set <a name="sets"></a>
We split our data into a train set on which we run the algorithms, and a test set on which we evaluate the results to see how well each algorithm did. A test set usually holds 10%-20% of the data; here the test set gets 15% and the train set 85%. As we do not have much data, we do not want the test set to end up with very few rows, nor the train set to be too small to produce good models.
Our train set consists of 1138 rows and test set of 203 rows.
```{r train test}
set.seed(1, sample.kind = "Rounding")
test_index <- createDataPartition(main_m$units_sold, times=1, p=0.15, list=FALSE)
train_set <- main_p[-test_index,] %>% dplyr::select(-product_id)
test_set <- main_p[test_index,] %>% dplyr::select(-product_id)
```
## 2.5. Method <a name="methods"></a>
The method is to train different machine learning algorithms included in the `caret` package. We train the models using only the train_set, with repeated or simple cross-validation used to optimize the hyperparameters of each model. Once a model is optimized on the train_set alone, its results are tested on the test_set.
The methods used are those appropriate for a multinomial classification problem such as this one; methods like *Logistic Regression* or *Linear Models* are therefore discarded.
Finally, the `h2o` package will be used at the end to check which model performs best and to compare results.
# 3. RESULTS <a name="results"></a>
## 3.1. GAM Loess <a name="gam"></a>
GAM Loess is a Generalized Additive Model: it keeps the spirit of linear models but incorporates non-linear forms. The model uses smooth functions of the predictors, fitted with the loess function. Loess acts as a local regression: the fit at a point x is weighted toward the data closest to x. How much data around x is considered is controlled by the `span` argument, which represents the fraction of the data taken into account.
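As a rough illustration of what `span` does (a minimal sketch on two numeric columns of our data, not part of the modelling pipeline), a larger span makes each local fit use a wider neighbourhood and therefore produces a smoother curve:
```{r loess span sketch, eval=FALSE}
# Illustrative only: univariate loess fits with a narrow and a wide span.
fit_wiggly <- loess(rating ~ price, data = train_set, span = 0.2)
fit_smooth <- loess(rating ~ price, data = train_set, span = 0.9)
```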
To optimize the `span` argument, we will use repeated cross validation.
Repeated cross-validation takes two different values, `number` and `repeats`. `number` is the number of folds we randomly divide our data into; in this case we create 3 folds, testing the results on one while training the model on the other 2. `repeats` means the whole process is repeated 4 times and the average is reported.
We only divide into 3 different sets due to the small amount of data we have.
```{r train gam}
tic("GAM Loess")
set.seed(1, sample.kind = "Rounding")
control <- trainControl(method = "repeatedcv", number = 3, repeats = 4,
savePredictions = "all")
grid_loess <- expand.grid(span=seq(0.2,0.9,0.2), degree=1)
train_loess <- caret::train(units_sold ~ ., data=train_set, method="gamLoess",
trControl=control, tuneGrid=grid_loess)
gam_toc <- toc()
ggplot(train_loess, highlight = TRUE)
```
So the best `span` in this case is 0.8, and we only get an accuracy of 0.052. If we simply guessed, the probability of being correct would be 1/9 ≈ 0.11, so we would do better by guessing! In fact, if we use the predict function, we see that the model just predicts the same category every time.
```{r test gam}
y_loess <- predict(train_loess, test_set, type="raw")
acc_loess <- confusionMatrix(y_loess, test_set$units_sold)$overall[['Accuracy']]
acc_results <- tibble(method = "Gam Loess",
Accuracy_Train = max(train_loess$results$Accuracy),
Accuracy_Test = acc_loess,
Time = gam_toc$toc - gam_toc$tic)
knitr::kable(acc_results, caption = "Accuracy Results")
```
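As a quick sanity check (illustrative, not part of the original pipeline), tabulating the predicted classes confirms that the model collapses onto a single category:
```{r loess predictions check, eval=FALSE}
# y_loess comes from the chunk above; essentially all predictions fall in one class,
# which explains the near-zero accuracy.
table(y_loess)
```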
This model clearly does not work.
## 3.2. K nearest neighbors <a name="knn"></a>
Now we will use the k-nearest neighbors method. It relies on the assumption that observations with similar features produce the same output, where "similar features" means closeness in distance.
In this case we need to optimize the *k* argument, which indicates the number of neighbors included in every prediction. We again use repeated cross-validation to find the optimal *k*.
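As a minimal illustration of the distance idea (hypothetical numbers, not taken from the dataset), the distance between two products' predictor vectors is what decides which neighbors get to vote:
```{r knn distance sketch, eval=FALSE}
# Illustrative only: Euclidean distance between two hypothetical products
# described by two numeric predictors.
a <- c(price = 8, rating = 4.2)
b <- c(price = 7, rating = 3.9)
sqrt(sum((a - b)^2))
```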
```{r train knn}
tic("KNN")
set.seed(2007, sample.kind = "Rounding")
control <- trainControl(method = "repeatedcv", number=3, repeats=4)
train_knn <- train(units_sold ~ ., data=train_set, method="knn",
tuneGrid = data.frame(k=seq(3, 40, 2)), trControl=control)
knn_toc <- toc()
ggplot(train_knn, highlight = TRUE)
```
In this case we get the highest accuracy with k=5, for which the accuracy on the train_set is 0.503. Now we are doing better than guessing. Let's check our results on the test_set.
```{r test knn}
y_knn <- predict(train_knn, test_set, type="raw")
acc_knn <- confusionMatrix(y_knn, test_set$units_sold)$overall[['Accuracy']]
acc_results <- bind_rows(acc_results,
data_frame(method="KNN",
Accuracy_Train = max(train_knn$results$Accuracy),
Accuracy_Test = acc_knn,
Time = knn_toc$toc - knn_toc$tic))
knitr::kable(acc_results, caption = "Accuracy Results")
```
It gets even better: we now correctly predict more than half of the test set.
## 3.3. Neural networks <a name="nnet"></a>
In this section we introduce a machine learning algorithm that we did not study during the course, but that is very powerful and easy to understand. It is inspired by the way neurons work in the brain.
neural network: ![](neural.png)
Using the `caret` package, we can now optimize two different parameters: *size* and *decay*. *size* is the number of units in the hidden layer and *decay* is a regularization parameter to avoid over-fitting, varying between 0 (the default) and 1.
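For reference, these are the same two hyperparameters that `nnet::nnet` (the engine behind caret's "nnet" method) takes directly; the call below is only a hedged sketch with arbitrary example values, not the tuning we actually run:
```{r nnet direct sketch, eval=FALSE}
library(nnet)
# size = number of hidden units, decay = weight-decay regularization penalty.
# Example values only; caret tunes them for us in the chunks below.
fit_nnet <- nnet(units_sold ~ ., data = train_set, size = 8, decay = 0.3,
                 maxit = 200, MaxNWts = 2000, trace = FALSE)
```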
We will try sizes from 4 to 20 and decays from 0.05 to 0.5, and a few minutes later we get the results.
```{r train nnet1 trainf, echo=TRUE, eval=TRUE, results="hide"}
tic("NN1")
set.seed(2007, sample.kind = "Rounding")
control <- trainControl(method = "repeatedcv", number=3, repeats=4)
grid_nnet1 <- expand.grid(size=seq(4,20,4), decay=seq(0.05, 0.5, 0.02))
train_nnet1 <- train(units_sold ~ ., data=train_set, method="nnet", trControl=control,
tuneGrid=grid_nnet1)
nn1_toc <- toc()
```
```{r plot nnet1, echo=TRUE}
ggplot(train_nnet1, highlight = TRUE)
```
We see that the best results are for size=8 and decay=0.33, which provides an accuracy on train_set of 0.577. Looks like we are improving!
Now we will run the code again, focusing on sizes from 6 to 10 and decays from 0.3 to 0.6, to see if we can get a higher accuracy.
```{r train nnet2 1, echo=TRUE, eval=TRUE, results="hide"}
tic("NN2")
set.seed(2007, sample.kind = "Rounding")
control <- trainControl(method = "repeatedcv", number=3, repeats=4)
grid_nnet2 <- expand.grid(size=seq(6,10,2), decay=seq(0.3, 0.6, 0.05))
train_nnet2 <- train(units_sold ~ ., data=train_set, method="nnet", trControl=control,
tuneGrid=grid_nnet2)
nn2_toc <- toc()
```
```{r train nnet2 plot, echo=TRUE, eval=TRUE}
ggplot(train_nnet2, highlight = TRUE)
```
And the best results are for size=6 and decay=0.6, providing an accuracy of 0.588. We will now use this train_nnet2 model to predict the results on the test set.
```{r test nnet2}
y_nnet <- predict(train_nnet2, test_set, type="raw")
acc_nnet <- confusionMatrix(y_nnet, test_set$units_sold)$overall[['Accuracy']]
acc_results <- bind_rows(acc_results,
data_frame(method="Neural Network",
Accuracy_Train = max(train_nnet2$results$Accuracy),
Accuracy_Test = acc_nnet,
Time = nn2_toc$toc - nn2_toc$tic))
knitr::kable(acc_results, caption = "Accuracy Results")
```
And we see an improvement as well! We are now above 0.59 accuracy on the test set. It is important to note how slow this method is, mainly due to the repeated cross-validation being performed.
## 3.4. Classification Trees <a name="rpart"></a>
In this section we will use the classification trees model. This is a very intuitive model that is widely used for categorical problems.
The idea is to create a decision tree with cut points that, applied hierarchically, lead to the final output.
The `caret` package allows us to optimize the *cp* parameter via tuneGrid. *cp* is the complexity parameter of the tree (in `rpart` it defaults to 0.01); it helps decide the depth of the tree by stopping further splits when the improvement does not meet this threshold.
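As a hedged sketch of what *cp* means in the underlying `rpart` call (illustrative values, separate from the caret tuning below), a larger cp demands a bigger improvement per split and therefore yields a shallower tree:
```{r rpart cp sketch, eval=FALSE}
library(rpart)
# Illustrative only: the same data, two cp values.
tree_shallow <- rpart(units_sold ~ ., data = train_set,
                      control = rpart.control(cp = 0.05))
tree_deep    <- rpart(units_sold ~ ., data = train_set,
                      control = rpart.control(cp = 0.001))
```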
First, we will train the model with default values, using simple cross-validation to limit computation time. We also need to rename the levels of *units_sold* so that the function does not throw an error message (factor levels must be valid R variable names when classProbs = TRUE).
```{r train default rpart}
levels(train_set$units_sold) <- c("X10", "X50", "X100", "X1000", "X5000", "X10000",
"X20000", "X50000", "X05")
levels(test_set$units_sold) <- c("X10", "X50", "X100", "X1000", "X5000", "X10000",
"X20000", "X50000", "X05")
tic()
set.seed(2007, sample.kind = "Rounding")
control <- trainControl(method = "cv", number=4, classProbs = TRUE)
train_rpart0 <- train(units_sold ~ ., data=train_set, method="rpart", trControl=control)
rp0_toc <- toc()
ggplot(train_rpart0, highlight = TRUE)
```
We see that the optimal value for cp in this case is for cp=0.07196, which provides an accuracy of 0.612.
To visualize the tree, we use fancyRpartPlot from the `rattle` library. We can see that it is actually quite simple: only one feature, *rating_count*, is used to make the predictions! In fact, it does not even predict all the possible levels. We also plot the variable importance.
```{r train rpart0 tree}
fancyRpartPlot(train_rpart0$finalModel, sub = NULL)
rpart0_imp <- varImp(train_rpart0)
plot(rpart0_imp, top = 10, main="Var Imp default Classification Tree")
```
To be able to compare results, we also use this model to predict on the test set and record its accuracy.
```{r test rpart0 tree, echo=FALSE}
y_rpart0 <- predict(train_rpart0, test_set, type="raw")
acc_rpart0 <- confusionMatrix(y_rpart0, test_set$units_sold)$overall[['Accuracy']]
acc_results <- bind_rows(acc_results,
data_frame(method="Classification Trees not optimised",
Accuracy_Train = max(train_rpart0$results$Accuracy),
Accuracy_Test = acc_rpart0,
Time = rp0_toc$toc-rp0_toc$tic))
knitr::kable(acc_results, caption = "Accuracy Results")
```
Now we use tuneGrid to try cp values ranging from 0 to 0.07, as this seems to be the region with the highest accuracy. *minsplit* will be held constant at 15.
```{r train rpart1 tree}
set.seed(2007, sample.kind = "Rounding")
control1 <- trainControl(method = "cv", number=4, classProbs = TRUE)
train_rpart1 <- train(units_sold ~ ., data=train_set, method="rpart",
tuneGrid = data.frame(cp = seq(0, 0.07, len = 25)),
control=rpart::rpart.control(minsplit=15), trControl=control1)
ggplot(train_rpart1, highlight = TRUE)
```
We now get a higher accuracy of 0.682 with cp=0.0058, partly due to changing the *minsplit* parameter. Now that we have optimized cp, we will try different minsplit values to further optimize the model.
```{r train minsplit tree}
cp <- train_rpart1$bestTune$cp
minsplit <- seq(5, 40, len=15)
acc <- sapply(minsplit, function(ms){
set.seed(2007, sample.kind = "Rounding")
control1 <- trainControl(method = "cv", number=4, classProbs = TRUE)
train(units_sold ~ ., method = "rpart", data = train_set, tuneGrid = data.frame(cp=cp),
control=rpart::rpart.control(minsplit=ms), trControl=control1)$results$Accuracy })
qplot(minsplit, acc)
minsplit[which.max(acc)]
max(acc)
minsplit <- minsplit[which.max(acc)]
```
Great: for minsplit=20 we get an accuracy of 0.686. Let's now train a new model with the optimized cp and minsplit values and plot the new tree.
We now see that the tree has more decision points and that more predictors are taken into consideration, such as *price*, *merchant_rating_count* and *rating_four_count*.
```{r train opt tree}
tic("rpart")
set.seed(2007, sample.kind = "Rounding")
control1 <- trainControl(method = "cv", number=4, classProbs = TRUE)
train_rpart2 <- train(units_sold ~ ., data=train_set, method="rpart",
tuneGrid = data.frame(cp = cp),
control=rpart::rpart.control(minsplit=minsplit),
trControl=control1)
rp2_toc <- toc()
fancyRpartPlot(train_rpart2$finalModel, sub = NULL)
rpart2_imp <- varImp(train_rpart2)
plot(rpart2_imp, top = 10, main="Var Imp optimized Classif Tree")
```
The accuracy we get with this model in the train set is 0.686, similar to the one in the previous tree model, so this is the one we will use for predictions.
```{r test opt tree}
y_rpart2 <- predict(train_rpart2, test_set, type="raw")
acc_rpart2 <- confusionMatrix(y_rpart2, test_set$units_sold)$overall[['Accuracy']]
acc_results <- bind_rows(acc_results,
data_frame(method="Classification Trees Optimized",
Accuracy_Train = max(train_rpart2$results$Accuracy),
Accuracy_Test = acc_rpart2,
Time = rp2_toc$toc-rp2_toc$tic))
knitr::kable(acc_results, caption = "Accuracy Results")
```
Great! We are almost at 75% correct predictions.
## 3.5. Random Forest <a name="rf"></a>
Random Forest is a widely used algorithm, both for classification problems (as here) and for continuous problems, where it is called regression forest. Random Forest builds on the decision-tree technique from the section above, but uses a large number of different decision trees. Each tree in the forest produces an output, and the class predicted most often across the trees is the one the algorithm chooses.
This model usually outperforms a single decision tree, because aggregating many trees averages out the errors that individual trees may make.
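As a tiny illustration of the majority vote (hypothetical per-tree predictions, not real model output):
```{r rf majority vote sketch, eval=FALSE}
# Five hypothetical trees vote; the forest returns the most frequent class.
tree_votes <- c("X100", "X1000", "X100", "X100", "X5000")
names(which.max(table(tree_votes)))  # "X100"
```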
![](Random_forest_diagram_complete.png)
*By Venkata Jagannath - https://community.tibco.com/wiki/random-forest-template-tibco-spotfirer-wiki-page, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=68995764*
The `caret` package allows us to optimize the *mtry* parameter via tuneGrid. *mtry* is the number of variables available for splitting at each tree node. A common rule for choosing *mtry* in classification problems is $mtry = \sqrt{\text{number of predictors}}$.
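Applying that rule of thumb to our train_set (28 predictors once *units_sold* is excluded) suggests an mtry of about 5; as we will see, caret's tuning picks a much larger value because of how the factor variables are expanded:
```{r mtry rule of thumb, eval=FALSE}
# Rule-of-thumb mtry: square root of the number of predictor columns.
floor(sqrt(ncol(train_set) - 1))  # 28 predictors -> 5
```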
We start by running a model with the default values from the `caret` package. Since Random Forest already uses many independent trees internally, simple cross-validation is enough in this case.
```{r train rf0}
tic("default RF")
set.seed(1234, sample.kind = "Rounding")
control_rf <- trainControl(method = "cv", number=3, savePredictions = FALSE,
verboseIter = FALSE)
train_rf0 <- train(units_sold ~ ., data=train_set, method="rf", trControl=control_rf)
rf0_toc <- toc()
ggplot(train_rf0, highlight = TRUE)
```
We see that the best accuracy, 0.702, is obtained for mtry=34. Note that this best mtry is larger than the number of predictors in our dataset. This is because the `caret::train` function converts the data into a model matrix and treats each level of the categorical variables as an independent predictor.
```{r test rf0}
y_rf0 <- predict(train_rf0, test_set, type="raw")
acc_rf0 <- confusionMatrix(y_rf0, test_set$units_sold)$overall[['Accuracy']]
acc_results <- bind_rows(acc_results,
data_frame(method="Random Forest not optimized",
Accuracy_Train = max(train_rf0$results$Accuracy),
Accuracy_Test = acc_rf0,
Time = rf0_toc$toc-rf0_toc$tic))
knitr::kable(acc_results, caption = "Accuracy Results")
```
And we get an accuracy on the test set of 0.76!
Now we will try to optimize the model. As we have 28 predictors in our dataset, we will try *mtry* values ranging from 10 to 40.
```{r train rf1}
tic("mtry optimized RF")
set.seed(1234, sample.kind = "Rounding")
control_rf <- trainControl(method = "cv", number=3, savePredictions = FALSE,
verboseIter = FALSE)
grid_rf <- expand.grid(mtry=seq(10,40,1))
train_rf1 <- train(units_sold ~ ., data=train_set, method="rf", tuneGrid=grid_rf,
trControl=control_rf)
rf1_toc <- toc()
ggplot(train_rf1, highlight = TRUE)
mtry <- train_rf1$bestTune$mtry
```
Now we get a higher accuracy of 0.706 for mtry=27. Having optimized the *mtry* parameter, we will try different nodesize values and see if we can get any improvement.
```{r train rf2 nodesize}
grid_mtry <- expand.grid(mtry=mtry)
nodesize <- seq(1, 25, 1)
acc <- sapply(nodesize, function(ns){
set.seed(1234, sample.kind = "Rounding")
control_rf <- trainControl(method = "cv", number=3, savePredictions = FALSE,
verboseIter = FALSE)
train(units_sold ~ ., method = "rf", data = train_set, tuneGrid = grid_mtry,
trControl=control_rf,
nodesize = ns)$results$Accuracy })
qplot(nodesize, acc)
nodesize <- nodesize[which.max(acc)]
max(acc)
```
Great! The optimal nodesize is 18, which gives an accuracy of 0.707, only marginally above the previous 0.706. Keeping our optimal nodesize=18 and mtry=27, we now train the model.
```{r train rf2}
tic("Optimized RF")
set.seed(1234, sample.kind = "Rounding")
control_rf <- trainControl(method = "cv", number=3, savePredictions = FALSE,
verboseIter = FALSE)
train_rf2 <- train(units_sold ~ ., method = "rf", data = train_set,
tuneGrid = grid_mtry, nodesize = nodesize, trControl=control_rf)
rf2_toc <- toc()
```
As noted, the accuracy is slightly higher than for model *train_rf1*, so we use *train_rf2* to test results on the test set. First, we plot the variable importance of the model.
```{r test rf2}
rf2_imp <- varImp(train_rf2)
plot(rf2_imp, top = 10, main="Var Imp optimized Random Forest")
y_rf <- predict(train_rf2, test_set, type="raw")
acc_rf <- confusionMatrix(y_rf, test_set$units_sold)$overall[['Accuracy']]
acc_results <- bind_rows(acc_results,
data_frame(method="Random Forest optimized",
Accuracy_Train = max(train_rf2$results$Accuracy),
Accuracy_Test = acc_rf,
Time = rf2_toc$toc-rf2_toc$tic))
knitr::kable(acc_results, caption = "Accuracy Results")
```
It looks like we did not improve on the default method: although on the train_set it seemed we did, applying the model to the test_set gives a lower accuracy than with mtry=34. In this case, the best model is therefore the default one.
## 3.6. XGBoost <a name="xgbm"></a>
XGBoost is another model we did not study during the course, but it has become very popular due to its execution speed and model performance, and it tends to dominate on structured classification datasets. It uses the gradient-boosted decision tree algorithm, an ensemble technique in which new models are added to correct the errors made by the existing ones, and models keep being added until no further improvement is achieved.
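In the standard gradient-boosting formulation (a general description, not XGBoost-specific notation), each round adds a new tree $h_m$ fitted to the current errors, scaled by the learning rate $\eta$ (the *eta* parameter tuned below):
$$F_m(x) = F_{m-1}(x) + \eta \, h_m(x)$$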
We start by optimizing the tuneGrid hyperparameters *eta* and *max_depth*. For the first model we keep *min_child_weight*=5 (the XGBoost analogue of *nodesize*), a standard value for multinomial classification problems.
We vary *eta*, which controls the learning rate, from 0.005 to 0.3, and *max_depth*, the maximum depth of the trees, from 4 to 12. Making *max_depth* too large increases the chance of over-fitting the model.
```{r train1 xgbm}
grid_xgbm1 <- expand.grid(min_child_weight=c(5), eta=seq(0.005, 0.3, 0.05),
nrounds=c(500), max_depth=seq(4,12,2), gamma=0,
colsample_bytree=c(0.8), subsample=1)
set.seed(62, sample.kind = "Rounding")
control_xgbm <- trainControl(method = "cv", number=3, savePredictions = FALSE,
verboseIter = FALSE)
train_xgbm1 <- train(units_sold ~ ., method="xgbTree", data=train_set,
trControl=control_xgbm, tuneGrid=grid_xgbm1, verbose=TRUE)
ggplot(train_xgbm1, highlight = TRUE)
eta <- train_xgbm1$bestTune$eta
max_depth <- train_xgbm1$bestTune$max_depth
```
Once we have our optimized values of *eta*=0.005 and *max_depth*=12, giving an accuracy of 0.699, we will fix those values and try to optimize the *nrounds* hyperparameter. *nrounds* controls the maximum number of iterations.
```{r train2 xgbm}
grid_xgbm2 <- expand.grid(min_child_weight=c(5), eta=c(eta),
nrounds=c(100,200,500), max_depth=c(max_depth),
gamma=0, colsample_bytree=c(0.8), subsample=1)
set.seed(62, sample.kind = "Rounding")
control_xgbm <- trainControl(method = "cv", number=3, savePredictions = FALSE,
verboseIter = FALSE)
train_xgbm2 <- train(units_sold ~ ., method="xgbTree", data=train_set,
trControl=control_xgbm, tuneGrid=grid_xgbm2, verbose=TRUE)
ggplot(train_xgbm2, highlight = TRUE)
nrounds <- train_xgbm2$bestTune$nrounds
```
Keep in mind that the higher the *nrounds* value, the longer the model takes to run. Now that we have also set the *nrounds* parameter to 100, which provides an accuracy of 0.699, we try to improve the model by looking at different *min_child_weight* values, the equivalent of *nodesize* in Random Forest.
```{r train3 xgbm}
grid_xgbm3 <- expand.grid(min_child_weight=c(1,3,5,7,9,11), eta=c(eta), nrounds=c(nrounds),
max_depth=c(max_depth), gamma=0,
colsample_bytree=c(0.8), subsample=1)
set.seed(62, sample.kind = "Rounding")
control_xgbm <- trainControl(method = "cv", number=3, savePredictions = FALSE,
verboseIter = FALSE)
train_xgbm3 <- train(units_sold ~ ., method="xgbTree", data=train_set,
trControl=control_xgbm, tuneGrid=grid_xgbm3, verbose=TRUE)
ggplot(train_xgbm3, highlight = TRUE)
nodesize <- train_xgbm3$bestTune$min_child_weight
```
Great, we now have another parameter optimized: with *min_child_weight*=1 we get an accuracy of 0.699. Next, we will see whether a different *gamma* can improve our accuracy. *gamma* is a regularization parameter that sets the minimum loss reduction required to make a further split, helping prevent over-fitting. It defaults to 0.
```{r train4 xgbm}
grid_xgbm4 <- expand.grid(min_child_weight=c(nodesize), eta=c(eta), nrounds=c(nrounds),
max_depth=c(max_depth), gamma=seq(0,7,1),
colsample_bytree=c(0.8), subsample=1)
set.seed(62, sample.kind = "Rounding")
control_xgbm <- trainControl(method = "cv", number=3, savePredictions = FALSE,
verboseIter = FALSE)
train_xgbm4 <- train(units_sold ~ ., method="xgbTree", data=train_set,
trControl=control_xgbm, tuneGrid=grid_xgbm4, verbose=TRUE)
ggplot(train_xgbm4, highlight = TRUE)
gamma <- train_xgbm4$bestTune$gamma
```
We can see that we get the highest accuracy, 0.704, for gamma=1. Having optimized all these parameters, we now optimize the *colsample_bytree* parameter, the fraction of features used to build each tree, similar to *mtry* in Random Forest.
```{r train5 xgbm}
grid_xgbm5 <- expand.grid(min_child_weight=c(nodesize), eta=c(eta),
nrounds=c(nrounds), max_depth=c(max_depth), gamma=gamma,
colsample_bytree=seq(0.4, 0.9, 0.1), subsample=1)
set.seed(62, sample.kind = "Rounding")
control_xgbm <- trainControl(method = "cv", number=3, savePredictions = FALSE, verboseIter = FALSE)
train_xgbm5 <- train(units_sold ~ ., method="xgbTree", data=train_set,
trControl=control_xgbm, tuneGrid=grid_xgbm5, verbose=TRUE)
ggplot(train_xgbm5, highlight = TRUE)
colsample_bytree <- train_xgbm5$bestTune$colsample_bytree
```
We see that we get the optimal *colsample_bytree* at 0.9 with an accuracy of 0.707.
Once all these parameters are optimized, we run the final model. Using the varImp function, we can also plot the importance of the different predictors in the model.
```{r train_op xgbm}
tic("Optimized XGBoost")
grid_xgbm_op <- expand.grid(min_child_weight=c(nodesize), eta=c(eta), nrounds=c(nrounds),
max_depth=c(max_depth), gamma=gamma,
colsample_bytree=c(colsample_bytree), subsample=1)
set.seed(62, sample.kind = "Rounding")
control_xgbm <- trainControl(method = "cv", number=3, savePredictions = FALSE,
verboseIter = FALSE)
train_xgbm_op <- train(units_sold ~ ., method="xgbTree", data=train_set,
tuneGrid=grid_xgbm_op, trControl=control_xgbm, verbose=TRUE)
xgbm_toc <- toc()
xgbm_imp <- varImp(train_xgbm_op)
plot(xgbm_imp, top = 10, main="Var Imp optimized XGBoost")
```
We use our optimized model to test the results and compare with the previous models.
```{r test xgbm}
y_xgbm <- predict(train_xgbm_op, test_set, type="raw")
acc_xgbm <- confusionMatrix(y_xgbm, test_set$units_sold)$overall[['Accuracy']]
acc_results <- bind_rows(acc_results,
data_frame(method="XGBoost",
Accuracy_Train = max(train_xgbm_op$results$Accuracy),
Accuracy_Test = acc_xgbm,
Time=xgbm_toc$toc-xgbm_toc$tic))
knitr::kable(acc_results, caption = "Accuracy Results")
```
With our final XGBoost model we were not able to outperform the default Random Forest model on the test set, and the results were very similar to those of the optimized Random Forest. The running time for XGBoost was a little lower than for the default Random Forest. In fact, XGBoost obtains the same accuracy as the optimized Classification Tree model, but takes almost 10x as long!
## 3.7. H2O AutoML <a name="automl"></a>
For our last model we use the `h2o` library and its automatic machine learning (AutoML) algorithm. It runs different models (Random Forest, XGBoost, Neural Networks, etc.) and, notably, also builds Stacked Ensembles, which combine more than one ML algorithm to improve performance.
```{r h2o train, results="hide"}
library(h2o)
h2o.init()
data_h2o <- as.h2o(train_set)
test_h2o <- as.h2o(test_set)
tic("h2oautoml")
automl_all <- h2o.automl(y=3, training_frame=data_h2o, max_runtime_secs=500,
validation_frame = test_h2o,
seed=1, keep_cross_validation_predictions=TRUE)
h2o_toc <- toc()
```
```{r h2o lb}
automl_all_lb <- head(automl_all@leaderboard)
knitr::kable(automl_all_lb, caption = "Best h2o AutoML performer model")
```
The table above shows the performance of the different models trained by h2o. The best performer is a Gradient Boosting Machine model, which we neither studied in the course nor used in the previous sections.
If we inspect the best-performing model, we can see the parameters it used, which were optimized using cross-validation.
```{r leader}
automl_all@leader
```
On the cross-validation of the training data, it achieves an accuracy of 0.697. We can also study the variable importance of the model and check whether it matches the variable importance of the XGBoost and Random Forest models trained before.
```{r varimp h2o, fig.height=5}
plot <- h2o.varimp_plot(automl_all@leader, num_of_features = 10)
```
It is curious to see the importance of the *product_color* predictor in this model, when it did not even appear for XGBoost and Random Forest. Still, by far the most important feature in all these models is *rating_count*, and the remaining top variables are largely the same, although their order changes.
Now, running the model on our test_set, we get an accuracy of (203-57)/203 = 0.7192.
```{r test, echo=3:5}
h2o_gbm <- h2o.performance(model = automl_all@leader, newdata = test_h2o)
pred <- h2o.predict(automl_all@leader, test_h2o)
h2o.confusionMatrix(automl_all@leader, newdata = test_h2o)
acc_results <- bind_rows(acc_results,
data_frame(method="h2oAutoML", Accuracy_Train = 0.694,
Accuracy_Test = 0.7192,
Time=h2o_toc$toc-h2o_toc$tic))
knitr::kable(acc_results, caption = "Accuracy Results")
```
In fact, it only improves on the non-optimized Classification Tree model. Personally, I think it is always nice to check that the AutoML algorithm does not beat the work done by the data scientist and that other methods reach a higher accuracy, so it is great that we were able to run a few models that improve on its results.