Evaluation criteria

Bachelor’s and Master’s theses are evaluated according to the same criteria. Once the formal requirements are met, the main criteria related to the process, the contribution, and the discussion are used to grade a thesis.

A changelog is available for this document.

Formal requirements

  • A Bachelor’s thesis should cover 30-50 pages (roughly 8,000-16,000 words), while a Master’s thesis should cover 40-70 pages (roughly 12,000-28,000 words). Design-oriented, analytical, and quantitative theses are typically shorter than qualitative and literature-based ones. For a Master’s thesis, a presentation is required, and a more substantial contribution is expected in line with the higher number of ECTS credits earned.
  • Written in proper academic English, with a clear structure and line of argument
  • Follows citation practices, using APA format, and refers primarily to peer-reviewed academic papers
  • Includes a title page, table of contents, abstract, common sections (e.g., IMRAD), a reference section, and appendices if applicable (a list of figures or tables is not required)
  • Makes appropriate use of tables and figures, citing each table or figure in the text
  • Provides clear definitions for key terms

Main criteria

The process is evaluated according to the following criteria:

  • Systematicity in managing time, resources, and challenges. Start early to have enough time for each task, make sure that resources such as labs, equipment, participants, partners, or thesis advisors are available at the right time, and reduce potential risks by anticipating them and identifying potential alternatives.
  • Proactiveness in developing the topic (research question, theory, method, etc.), and incorporating feedback. We encourage students to challenge our feedback and suggest better alternatives.
  • Proficiency in leveraging prior research, i.e., identifying relevant papers, developing an appropriate understanding of them, assessing their contributions critically, and organizing them persuasively to clarify how the thesis builds on, and goes beyond, prior work

The contribution is evaluated along several dimensions (following Leidner 2020):

  • Explicitness and significance of research objective or question: Explain how your work builds on prior research, identify related contributions, and clarify the research gap you address. Readers should understand why your research promises interesting insights, and who would benefit from it. The editorial by Lange and Pfarrer (2017) provides useful suggestions for accomplishing this.
  • Uniqueness in framing prior research: In summarizing prior research, it is essential to organize existing contributions from the perspective of your work. Prior research, in itself, does not follow a pre-defined order. It is your responsibility to make sense of the literature and to adopt an organizing frame that helps readers understand how prior research relates to your contribution.
  • Systematicity of method: Evaluation criteria and reporting guidelines depend on the method applied. A selection is provided below. In addition to reporting the methodological steps, data and code should be made available with the thesis according to standards of reproducible research. Corresponding requirements should be discussed with the thesis advisor.
  • Originality of theory: Applies if your contribution involves a theoretical contribution, e.g., in the form of propositions, constructs, or models. Clarify how it differs from related theory, and explain why it offers a plausible and more compelling perspective or explanation.
  • Interpretation and novelty of findings: The findings should be interpreted appropriately, in a neutral tone, neither overstating nor understating their significance. Clearly distinguish findings that confirm prior research or common expectations from those that are novel, interesting, or counterintuitive, and communicate the latter effectively.

Recommended criteria and reporting guidelines for selected research methods are listed below (the EQUATOR Network provides a comprehensive overview). Additional guidelines can be provided upon request.

  • Literature review: Paré et al. 2015; Paré et al. 2023; Page et al. 2021; Templier and Paré 2018; PRISMA
  • Theory development: Gregor 2006
  • Design science: Hevner et al. 2004; Prat et al. 2015; Peffers et al. 2007; JOSS review criteria
  • Machine learning: Stevens et al. 2020; Heil et al. 2021; Walsh et al. 2021; Kapoor et al. 2023
  • Experiments: Frank et al. 2024
  • Qualitative surveys and interviews: O’Brien et al. 2014; Tong et al. 2007

The contribution should be discussed with reference to:

  • The (methodological) limitations
  • The implications for future research (research gaps)
  • The implications for practice (Ågerfalk and Karlsson, 2020).

Presentation

A presentation is required for Master’s theses. It takes 15-20 minutes and is followed by 10 minutes of discussion. It is evaluated based on the following criteria:

  • The introductory section should convey why the topic is interesting, and it should be easy to follow by a general audience
  • The main section should present the objective, method, main findings, and contribution, allowing experts to critically assess each part
  • The concluding section should give an outlook and briefly outline the implications
  • Presentation style and slides should be appropriate, i.e., using academic terminology, displaying a clear structure, connecting with the audience, taking approx. 2-3 minutes per slide, using short bullet-point summaries instead of longer paragraphs, including illustrations rather than animations
  • Questions should be handled constructively, demonstrating in-depth knowledge of the thesis, and familiarity with the broader topic area

In line with the applicable regulations, the presentation receives a weight of 33% for students of Information Systems. For students from other departments and degree programs (such as IBWL), the presentation may not be graded.

References

Ågerfalk, P. J., & Karlsson, F. (2020). Artefactual and empirical contributions in information systems research. European Journal of Information Systems, 29(2), 109-113.

Frank, M. C., Braginsky, M., Cachia, J., Coles, N. A., Hardwicke, T. E., Hawkins, R. D., Mathur, M. B., & Williams, R. (2024). Experimentology: An open science approach to experimental psychology methods. MIT Press.

Gregor, S. (2006). The nature of theory in information systems. MIS Quarterly, 30(3), 611-642.

Heil, B. J., Hoffman, M. M., Markowetz, F., Lee, S. I., Greene, C. S., & Hicks, S. C. (2021). Reproducibility standards for machine learning in the life sciences. Nature Methods, 18(10), 1132-1135.

Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75-105.

JOSS review criteria and checklist.

Kapoor, S., Cantrell, E., Peng, K., Pham, T. H., Bail, C. A., Gundersen, O. E., ... & Narayanan, A. (2023). REFORMS: Reporting standards for machine learning based science. arXiv preprint arXiv:2308.07832.

Lange, D., & Pfarrer, M. D. (2017). Editors’ comments: Sense and structure—The core building blocks of an AMR article. Academy of Management Review, 42(3), 407-416.

Leidner, D. E. (2020). What's in a contribution? Journal of the Association for Information Systems, 21(1), 2.

O’Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine, 89(9), 1245-1251.

Okoli, C. (2015). A guide to conducting a standalone systematic literature review. Communications of the Association for Information Systems, 37.

Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., ... & Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. International Journal of Surgery, 88, 105906.

Paré, G., Trudel, M. C., Jaana, M., & Kitsiou, S. (2015). Synthesizing information systems knowledge: A typology of literature reviews. Information & Management, 52(2), 183-199.

Paré, G., Wagner, G., & Prester, J. (2023). How to develop and frame impactful review articles: Key recommendations. Journal of Decision Systems, 1-17.

Peffers, K., Tuunanen, T., Rothenberger, M. A., & Chatterjee, S. (2007). A design science research methodology for information systems research. Journal of Management Information Systems, 24(3), 45-77.

Prat, N., Comyn-Wattiau, I., & Akoka, J. (2015). A taxonomy of evaluation methods for information systems artifacts. Journal of Management Information Systems, 32(3), 229-267.

Stevens, L. M., Mortazavi, B. J., Deo, R. C., Curtis, L., & Kao, D. P. (2020). Recommendations for reporting machine learning analyses in clinical research. Circulation: Cardiovascular Quality and Outcomes, 13(10), e006556.

Templier, M., & Paré, G. (2018). Transparency in literature reviews: An assessment of reporting practices across review types and genres in top IS journals. European Journal of Information Systems, 27(5), 503-550.

Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care, 19(6), 349-357.

Walsh, I., Fishman, D., Garcia-Gasulla, D., Titma, T., Pollastri, G., Harrow, J., ... & Tosatto, S. C. (2021). DOME: Recommendations for supervised machine learning validation in biology. Nature Methods, 18(10), 1122-1127.