My current academic career is actually my second career. Before studying psychology I studied medical imaging techniques and worked for a couple of years in a hospital, mainly in the emergency room, where I was responsible for taking X-rays. Maybe it is because of this background that I really enjoy helping other researchers with their data analyses. I also very much enjoy writing (and publishing) tutorial papers in which I challenge researchers to use state-of-the-art techniques instead of their usual ones. Below I briefly describe the tutorials I have published over the years. If you need any additional information (such as syntax files), just drop me a message.
A very easy paper to start reading about Bayesian statistics is my paper with Sarah Depaoli: Bayesian analyses: Where to start and what to report. A next step could be the gentle introduction with applications to research in Child Development, written together with, among other co-authors, David Kaplan. In that paper we also re-analyzed four datasets, using the posterior results from each dataset as prior input for the next. This way, we updated our confidence in the model and obtained more certainty in the end. Since applying Bayesian estimation to real-life data is not so easy, Sarah Depaoli and I published a 10-point checklist in Psychological Methods, which can be really useful when applying Bayesian methods yourself, when supervising someone who applies Bayesian estimation, or when reviewing a Bayesian paper. One of the steps is conducting a sensitivity analysis, which is explained in more detail in a paper where we showed that Bayesian analysis is really helpful with small datasets, and in a paper applying Bayesian latent growth mixture modeling to data on PTSD (under review).
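The updating idea can be sketched with a conjugate Beta-Binomial toy example (the datasets, prior, and numbers below are made up purely for illustration; the actual papers use far richer models):

```python
import math

def update_beta(a, b, successes, failures):
    """Conjugate Beta-Binomial update: Beta(a, b) prior -> Beta(a + s, b + f) posterior."""
    return a + successes, b + failures

# Four hypothetical (successes, failures) datasets, analyzed in sequence.
datasets = [(12, 8), (15, 5), (18, 2), (14, 6)]

a, b = 1.0, 1.0                     # flat Beta(1, 1) prior for the first dataset
for s, f in datasets:
    a, b = update_beta(a, b, s, f)  # each posterior becomes the next prior

mean = a / (a + b)
sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
print(f"final posterior: Beta({a:.0f}, {b:.0f}), mean {mean:.3f}, sd {sd:.3f}")
```

With every dataset the posterior standard deviation shrinks, which is the "more certainty in the end" referred to above.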
Testing for measurement invariance (MI) of latent variables is an essential step when comparing multiple groups (e.g., countries) or following individuals over time. Together with Joop Hox and Peter Lugtig, I published a simple-to-use checklist. An alternative to establishing strict or strong MI is approximate measurement invariance, which is introduced in a paper together with Bengt Muthén (and others). Applications of this method and other state-of-the-art MI techniques are presented in our special issue on MI.
Missing data handling
Nested data (analyzed with multilevel modeling) sometimes involves non-normally distributed variables, and, as Joop Hox and I argue, robust methods then need to be used. Moreover, the sample size at higher levels is often limited, and alternative methods (i.e., Bayesian estimation) need to be used to obtain reliable results.
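A toy simulation illustrates why few clusters are problematic: the classical method-of-moments (ANOVA) estimator of the between-cluster variance frequently turns out negative when only a handful of clusters is available (all numbers below are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def between_variance_anova(groups):
    """Method-of-moments (ANOVA) estimator of the between-cluster variance.
    Can go negative when the number of clusters is small."""
    n = len(groups[0])                              # balanced design for simplicity
    means = np.array([g.mean() for g in groups])
    msb = n * means.var(ddof=1)                     # between-cluster mean square
    msw = np.mean([g.var(ddof=1) for g in groups])  # within-cluster mean square
    return (msb - msw) / n

n_sims, k, n = 2000, 5, 10       # only 5 clusters of 10 units each
tau2, sigma2 = 0.1, 1.0          # true between- and within-cluster variances
negative = 0
for _ in range(n_sims):
    cluster_effects = rng.normal(0.0, np.sqrt(tau2), k)
    groups = [rng.normal(u, np.sqrt(sigma2), n) for u in cluster_effects]
    if between_variance_anova(groups) < 0:
        negative += 1

print(f"{negative / n_sims:.0%} of replications gave a negative variance estimate")
```

A sizeable share of replications lands outside the admissible parameter space; a Bayesian estimate with a prior that respects the variance being non-negative does not have this problem, which is one reason the paper recommends it for small higher-level samples.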
A full description of this topic can be found on my dedicated page or on the general informative hypothesis website, but here is a bullet-wise overview of all the tutorials I have published (together with many others) on this topic:
- Why should one move away from testing null hypotheses and evaluate informative hypotheses instead? (The first paper provides the background for the video presented at the APA conference in 2013.)
- Why should one still use Bayesian model selection even when actually interested in the null hypothesis itself? (Including the quote: “God would love a Bayes Factor of 3.01 nearly as much as a BF of 2.99”.)
- Tutorial for Bayesian evaluation of informative hypotheses in structural equation modeling (includes the full version of the black bear story), and how to compute the complexity term for more complicated models.
- Bootstrapping can also be used for testing informative hypotheses, which is shown to yield a considerable gain in power in ANOVA/regression and in SEM models.
- How to use the software BIEMS and LCM, and a brief introduction to what is going on under the hood of BIEMS.
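The complexity term mentioned above is the proportion of the (encompassing) prior that agrees with the constraints, and the fit is the corresponding proportion of the posterior; the Bayes factor of the informative hypothesis against the unconstrained model is fit divided by complexity. A minimal Monte Carlo sketch for the hypothesis mu1 < mu2 < mu3, using made-up posterior summaries rather than any real software output:

```python
import numpy as np

rng = np.random.default_rng(2024)

# Hypothetical, approximately normal posteriors for three group means.
post_means = np.array([0.0, 0.3, 0.6])
post_sds = np.array([0.15, 0.15, 0.15])

# Fit: posterior proportion of draws satisfying mu1 < mu2 < mu3.
draws = rng.normal(post_means, post_sds, size=(200_000, 3))
fit = np.mean((draws[:, 0] < draws[:, 1]) & (draws[:, 1] < draws[:, 2]))

# Complexity: under an exchangeable prior all 3! = 6 orderings of the means
# are equally likely, so the prior proportion agreeing with the constraint is 1/6.
complexity = 1 / 6

bf = fit / complexity  # Bayes factor: informative vs. encompassing model
print(f"fit = {fit:.3f}, complexity = {complexity:.3f}, BF = {bf:.2f}")
```

For simple order constraints the complexity follows by symmetry, as here; for more complicated models it has to be computed, which is exactly what the paper on the complexity term addresses.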
Pictures obtained from (in order of appearance):
3. van de Schoot, R., Schmidt, P., De Beuckelaer, A., eds. (2015). Measurement Invariance. Lausanne: Frontiers Media. doi: 10.3389/978-2-88919-650-0