Would you expect more precise searches in disciplines like Information Systems, Management, or the Social Sciences?
Open challenge:
The screen is typically completed in two parts:
The screen is often based on explicit inclusion and exclusion criteria
Screening tasks are often split among the review team to complete the process more quickly, and to ensure reliable decisions
Process:
The PRISMA flow chart (updated version by Tricco et al. 2018)
An online version is available here
The reading activities can be organized strategically at two levels:
The overall corpus level: In which order should papers be read or skimmed?
The individual paper level: How should the different parts of a paper be read?
Assume you have 300 papers to cover: how would you organize the reading activities?
Key differences with regard to data extraction and analysis:
Grounded theory is an inductive method commonly used in literature reviews (Wolfswinkel et al. 2013)
In the data analysis phase, the three coding techniques are central:
The coding process and results are often illustrated in the Gioia data structure
Scope: Digital platforms for knowledge-intensive services, such as Upwork, Fiverr, or TopCoder
Sample: 50 papers, mostly published in the Information Systems discipline
Data: Text fragments and figures have been pre-selected: access the worksheet
Objective: Analyze extant research and inductively develop a process model
Vote counting is one technique to aggregate the evidence from prior empirical studies
Key variables are extracted and compiled in a list of master codes
Effects between independent and dependent variables are coded:
Effects are aggregated and presented as follows:
Strength of vote counting:
Shortcoming of vote counting:
Meta-analysis techniques address these shortcomings.
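The tallying behind vote counting can be sketched in a few lines. This is a minimal illustration; the coding scheme ('+' positive, '-' negative, '0' non-significant) and the example data are invented, not taken from Lacity et al. (2011):

```python
# Hedged sketch of vote counting: each study reports the direction of the
# effect between an independent and a dependent variable; we tally the votes.
from collections import Counter

def vote_count(effects):
    """Tally coded effects: '+' positive, '-' negative, '0' non-significant."""
    votes = Counter(effects)
    n = len(effects)
    return {direction: f"{count}/{n}" for direction, count in votes.items()}

# Illustrative example: ten studies coding the effect of one independent
# variable (e.g., 'trust') on one dependent variable (e.g., 'success')
coded_effects = ["+", "+", "+", "0", "+", "-", "+", "0", "+", "+"]
print(vote_count(coded_effects))  # {'+': '7/10', '0': '2/10', '-': '1/10'}
```

Note that the tally records only the direction and significance of each effect, not its magnitude or precision — exactly the shortcoming that meta-analysis techniques address.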
Research objective: "to assess the effects of Fitbit-based interventions, compared with nonwearable control groups, on healthy lifestyle outcomes." (Ringeval et al. 2020)
Data extraction (example):
Create a quick draft for the data extraction and analysis section.
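As an outlook on the analysis step, the core of a meta-analysis — fixed-effect inverse-variance pooling of study-level effect sizes — can be sketched as follows. The effect sizes (mean differences in daily steps) and variances below are invented for illustration and are not taken from Ringeval et al. (2020):

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling: each study is weighted by the
    inverse of its variance, so more precise studies count more."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))          # standard error of the pooled effect
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% CI

# Illustrative data from three hypothetical studies
effects = [950.0, 1200.0, 800.0]       # mean differences in daily steps
variances = [40000.0, 90000.0, 62500.0]
est, ci = pooled_effect(effects, variances)
```

Unlike vote counting, the pooled estimate preserves effect magnitude and weights studies by precision.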
Generic steps
Okoli, C. (2015). A guide to conducting a standalone systematic literature review. Communications of the Association for Information Systems, 37. doi:10.17705/1CAIS.03743
Boell, S. K., & Cecez-Kecmanovic, D. (2014). A hermeneutic approach for conducting literature reviews and literature searches. Communications of the Association for Information Systems, 34, 12. doi:10.17705/1CAIS.03412
Templier, M., & Paré, G. (2018). Transparency in literature reviews: an assessment of reporting practices across review types and genres in top IS journals. European Journal of Information Systems, 27(5), 503-550. doi:10.1080/0960085X.2017.1398880
Problem formulation
Alvesson, M., & Sandberg, J. (2011). Generating research questions through problematization. Academy of Management Review, 36(2), 247-271. doi:10.5465/amr.2009.0188
Search
Gusenbauer, M., & Haddaway, N. R. (2021). What every researcher should know about searching–clarified concepts, search advice, and an agenda to improve finding in academia. Research Synthesis Methods, 12(2), 136-147. doi:10.1002/jrsm.1457
Hiebl, M. R. (2023). Sample selection in systematic literature reviews of management research. Organizational Research Methods, 26(2), 229-261. doi:10.1177/109442812098685
Knackstedt, R., & Winkelmann, A. (2006). Online-Literaturdatenbanken im Bereich der Wirtschaftsinformatik: Bereitstellung wissenschaftlicher Literatur und Analyse von Interaktionen der Wissensteilung. Wirtschaftsinformatik, 1(48), 47-59. doi:10.1007/s11576-006-0006-1
Wagner, G., Prester, J., & Paré, G. (2021). Exploring the boundaries and processes of digital platforms for knowledge work: A review of information systems research. The Journal of Strategic Information Systems, 30(4), 101694. doi:10.1016/j.jsis.2021.101694
Screen
Tricco, A. C., Lillie, E., Zarin, W., O'Brien, K. K., Colquhoun, H., Levac, D., ... & Straus, S. E. (2018). PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Annals of Internal Medicine, 169(7), 467-473. doi:10.7326/M18-0850
Data analysis
Wolfswinkel, J. F., Furtmueller, E., & Wilderom, C. P. (2013). Using grounded theory as a method for rigorously reviewing literature. European Journal of Information Systems, 22(1), 45-55. doi:10.1057/ejis.2011.51
Higgins, J., Savović, J., Page, M. J., Elbers, R. G., & Sterne, J. A. (2019). Chapter 8: Assessing risk of bias in a randomized trial. In Cochrane Handbook for Systematic Reviews of Interventions. London: Cochrane. link
Lacity, M. C., Solomon, S., Yan, A., & Willcocks, L. P. (2011). Business process outsourcing studies: a critical review and research directions. Journal of Information Technology, 26, 221-258. doi:10.1057/jit.2011.25
Ringeval, M., Wagner, G., Denford, J., Paré, G., & Kitsiou, S. (2020). Fitbit-based interventions for healthy lifestyle outcomes: systematic review and meta-analysis. Journal of Medical Internet Research, 22(10), e23954. doi:10.2196/23954
Discuss how the models fit together / what the underlying differences are:
- Reading activities: the "data extraction" step in hermeneutic vs. systematic traditions
- Inductive/emergent vs. deductive forms of data analysis
- Okoli: screen before search? (search: "reporting the search")
TODO : discuss the differences between review types
Remember: coherence
- Methodological coherence: objectives/type/methods -> covered in the first session
-> Check the Ringeval search. Surprising: the search strategy was stated without any trial-and-error/iterations.
- Explain the linked-list format
- Explain the concept-synonym-group approach
anecdote: sex vs. gender
- $TP$: True positives = *retrieved* by the search and *relevant*
- $FP$: False positives = *retrieved* by the search but *irrelevant*
- $FN$: False negatives = *not retrieved* by the search and *relevant* ❓
- $TN$: True negatives = *not retrieved* by the search but *irrelevant* ❓
It is instructive to know these metrics.
The key objective is to identify all relevant papers, but also to do that efficiently.
A certain level of noise (low precision) must be accepted.
SYNERGY datasets: https://github.com/asreview/synergy-dataset
- included = relevant
- all records (retrieved) = true positives and false positives
- on average: 4% relevant, or 2% when removing outliers
-> Check absolute numbers: covering 1,000 papers to identify 20-40 relevant ones (the search may be too narrow when the inclusion percentage is higher)
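Given the confusion-matrix definitions above, precision and recall follow directly. The numbers below are illustrative, chosen to match the SYNERGY-style scenario of screening roughly 1,000 records; the five "missed" relevant records are a hypothetical assumption:

```python
def search_metrics(tp, fp, fn):
    """Precision: share of retrieved records that are relevant.
    Recall: share of relevant records that were retrieved."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Illustrative numbers: 1,000 retrieved records, 30 of them relevant,
# and (hypothetically) 5 relevant records missed by the search
precision, recall = search_metrics(tp=30, fp=970, fn=5)
print(f"precision={precision:.1%}, recall={recall:.1%}")
# precision=3.0%, recall=85.7%
```

The asymmetry is typical of systematic searches: recall is prioritized, so precision in the low single digits is accepted as the cost of completeness.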
"We have waited too often that database provides improve search capabilities"! https://unsplash.com/de/fotos/toddlers-standing-in-front-of-beige-concrete-stair-bJhT_8nbUA0
-> illustrate "Percentage agreement, Agreement by chance" with an example on the blackboard https://en.wikipedia.org/wiki/Cohen%27s_kappa
cite Bandara !? on profiling?
TBD: Illustrate differences between co-citation analysis and bibliographic coupling? Highlight the Web of Science data export format
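The difference between the two measures can be illustrated with a toy citation network (the papers and reference lists below are invented): bibliographic coupling links two papers through the references they share, while co-citation links them through later papers that cite both.

```python
# Toy citation data: paper -> set of references it cites (illustrative only)
refs = {
    "A": {"X", "Y", "Z"},
    "B": {"Y", "Z", "W"},
    "C": {"A", "B"},
    "D": {"A", "B", "X"},
}

def bibliographic_coupling(p, q):
    """Coupling strength: number of references p and q have in common."""
    return len(refs[p] & refs[q])

def co_citation(p, q):
    """Co-citation strength: number of later papers citing both p and q."""
    return sum(1 for cited in refs.values() if p in cited and q in cited)

coupling_AB = bibliographic_coupling("A", "B")  # A and B share refs Y and Z -> 2
cocit_AB = co_citation("A", "B")                # both C and D cite A and B -> 2
```

Note the temporal difference: coupling is fixed at publication time (reference lists do not change), whereas co-citation counts grow as new citing papers appear.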
Example: Lacity et al. (2011)
TODO: summarize vote counting and give an outlook on meta-analyses
- Data extraction (reliability, ...)
- Risk-of-bias (quality) assessment
- Vote counting example (meta-analysis)
- Hierarchy of evidence (medicine)
Topic 4A
Topic 4B
Grounded theory: Topic 11