Wednesday 21 October 2020
toolchain walkthrough := an opinionated (Parker 2017) documentation of a scientific workflow (Gray 2019)
The tidyverse (Wickham 2017) is an opinionated collection of R packages designed for data science. All packages share an underlying design philosophy, grammar, and data structures.
# toolchain walkthrough
library(multinma) # network meta-analysis
# from data documentation
?parkinsons
mean off-time reduction in patients given dopamine agonists as adjunct therapy in Parkinson’s disease from 7 trials comparing four active drugs and placebo
parkinsons_dat %>%
  # display top values
  head(3)
##    studyn  trtn     y    se   n  diff se_diff
## 1 study_1 trt_1 -1.22 0.504  54    NA   0.504
## 2 study_1 trt_3 -1.53 0.439  95 -0.31   0.668
## 3 study_2 trt_1 -0.70 0.282 172    NA   0.282
\[ \boldsymbol y \sim \text{normal}(\boldsymbol \theta, \boldsymbol \sigma^2)\\ \boldsymbol \theta \sim \text{normal}(\mu, \tau^2) \]
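Before fitting anything with multinma, the two-level structure above can be simulated in a few lines of base R. This is only a sketch for intuition; the values of μ, τ, σ, and the study count are made up, not estimated from the data:

```r
# Minimal base-R sketch of the two-level sampling model above.
# All parameter values are illustrative assumptions, not estimates.
set.seed(2020)
k     <- 7              # number of studies, as in parkinsons
mu    <- -0.5           # assumed population mean effect
tau   <- 0.3            # assumed between-study standard deviation
sigma <- rep(0.4, k)    # assumed within-study standard deviations

theta <- rnorm(k, mean = mu, sd = tau)       # theta ~ normal(mu, tau^2)
y     <- rnorm(k, mean = theta, sd = sigma)  # y ~ normal(theta, sigma^2)
```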
\[ \left. \begin{array}{c r c l} \text{prior} & \boldsymbol d & \sim & \text{normal}(\boldsymbol d_0, \boldsymbol \Sigma_d)\\ \text{likelihood} & \boldsymbol y | \boldsymbol d & \sim & \text{normal}(\boldsymbol \delta, \boldsymbol V)\\ \text{fixed effects model} & \boldsymbol \delta &=& \boldsymbol{Xd} \end{array} \right\} \]
parkinsons_net <- set_agd_contrast(
  parkinsons,
  study = studyn,
  trt = trtn,
  y = diff,
  se = se_diff,
  sample_size = n
)
# prior
summary(normal(scale = 100))
## A Normal prior distribution: location = 0, scale = 100.
## 50% of the prior density lies between -67.45 and 67.45.
## 95% of the prior density lies between -196 and 196.
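The quantiles in that prior summary can be checked directly with base R's qnorm(); nothing package-specific is involved:

```r
# central 50% of a normal(0, 100) prior: approximately -67.45 to 67.45
qnorm(c(0.25, 0.75), mean = 0, sd = 100)

# central 95%: approximately -196 to 196
qnorm(c(0.025, 0.975), mean = 0, sd = 100)
```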
An example of an area I would like to learn more about: the selection of priors.
multinma::nma
# fit model to network object
parkinsons_nma <- nma(
  # network object
  parkinsons_net,
  # fixed effects, for brevity; varied in sensitivity analysis
  trt_effects = "fixed",
  # set prior on treatment contrast with placebo
  prior_trt = normal(scale = 100))
## Note: Setting "4" as the network reference treatment.
## SAMPLING FOR MODEL 'normal' NOW (CHAIN 1).
## Chain 1: Gradient evaluation took 3.1e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.31 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
## Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
## Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
## Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
## Chain 1: Elapsed Time: 0.049515 seconds (Warm-up)
## Chain 1:               0.045204 seconds (Sampling)
## Chain 1:               0.094719 seconds (Total)
##
## (Chains 2–4 produce analogous output; total elapsed times 0.086, 0.082, and 0.085 seconds.)
plot(parkinsons_net, weight_edges = FALSE)
plot(parkinsons_nma)
plot_prior_posterior(parkinsons_nma)
posterior_rank_probs(parkinsons_nma) %>% plot() +
  posterior_rank_probs(parkinsons_nma, cumulative = TRUE) %>% plot()
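To build intuition for what posterior_rank_probs() reports, here is a hand-rolled sketch on simulated draws. The draws are made up, and the exact internals of multinma may differ; the idea is to rank the treatment effects within each posterior draw and then average the rank indicators:

```r
set.seed(1)
# fake posterior draws for 4 treatment effects (illustrative only)
draws <- matrix(rnorm(4000), ncol = 4,
                dimnames = list(NULL, paste0("d[", 1:4, "]")))

# rank treatments within each draw (row), lower effect = better rank here
ranks <- t(apply(draws, 1, rank))

# rank_probs[t, r] = posterior probability that treatment t has rank r
rank_probs <- sapply(1:4, function(r) colMeans(ranks == r))
colnames(rank_probs) <- paste0("rank", 1:4)
rank_probs
```

Each row of rank_probs sums to 1: every treatment must occupy exactly one rank in each draw.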
# excluding study
study_to_filter
## [1] "study_7"
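A sketch of the leave-one-out step on a toy stand-in for the contrast data (the column and study labels are assumed from the slides above; in the real workflow the filtered data would be passed back through set_agd_contrast()):

```r
# toy stand-in for the contrast data, labels assumed from the slides
dat <- data.frame(
  studyn = c("study_1", "study_1", "study_7"),
  trtn   = c("trt_1", "trt_3", "trt_4")
)

# drop the excluded study before rebuilding the network
study_to_filter <- "study_7"
dat_loo <- subset(dat, studyn != study_to_filter)
dat_loo
```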
# get posterior mean estimates
post_means <-
  summary(parkinsons_nma, pars = c("d")) %>%
  as.data.frame() %>%
  pull("mean")

# print values
post_means
##        d[1]        d[2]        d[3]        d[5]
##  0.50845842 -1.30562885  0.04087262 -0.30738230
##      [,1] [,2] [,3] [,4]
## [1,]   -1    0    1    0
## [2,]   -1    1    0    0
## [3,]    1    0    0    0
## [4,]    0    0    1    0
## [5,]    0    0    0    1
Sigma_d <- diag(rep(100^2, length(post_means)))
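With the posterior means and design matrix shown above, the fixed effects identity δ = Xd can be reproduced in base R. The numbers are copied from the output on this slide; the object names here are illustrative:

```r
# posterior mean contrasts d[1], d[2], d[3], d[5], copied from above
d <- c(0.50845842, -1.30562885, 0.04087262, -0.30738230)

# design matrix mapping basic parameters to observed contrasts
X <- rbind(c(-1, 0, 1, 0),
           c(-1, 1, 0, 0),
           c( 1, 0, 0, 0),
           c( 0, 0, 1, 0),
           c( 0, 0, 0, 1))

# study-specific relative effects under the fixed effects model
delta <- X %*% d
round(delta, 3)
```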
Gray, Charles T. 2019. “Code::Proof: Prepare for Most Weather Conditions.” Edited by Hien Nguyen. Statistics and Data Science, Communications in Computer and Information Science, 22–41. https://doi.org/10.1007/978-981-15-1960-4_2.
Parker, Hilary. 2017. “Opinionated Analysis Development.” Preprint. https://doi.org/10.7287/peerj.preprints.3210v1.
Wickham, Hadley. 2017. Tidyverse: Easily Install and Load the ’Tidyverse’. https://CRAN.R-project.org/package=tidyverse.