Basic Considerations for Critically Appraising Research

I want to discuss how to approach evaluating research. You do not need to be an expert to ask the right questions, or to understand the general procedures experts follow when publishing quantitative or qualitative research. You do not need to be a statistician to judge whether the results of a study answer the intended question. As with many arguments, researchers follow a general schematic. The schematic differs from an argument scheme in that it has been deemed ideal, and its variations have been ranked according to their strength. The research process represents the ideal form of inference for drawing causal conclusions from limited observations in a data set. As with all forms of argument, critical questions can be applied to probe various features of the inferential process and, ultimately, to evaluate the conclusion. Your questions should be substantial. Sometimes they can be simple yes/no questions, other times more complex and open-ended, or they can be directed at assessing the implications of accepting the conclusion. They can also be irrelevant, or loaded in favor of your presumptions. The questioning process should be guided by a general level of intellectual humility and fairness: you do not want to use loaded language unnecessarily or straw man the researchers' position. See The Thinker's Guides to Asking Questions for a fuller treatment. That is the general point. More specifically, I want to help us transcend the overly simplistic assertion we all come across when someone disagrees with research: "They are biased and are funded by industry X." It is time to move beyond attributing bias without specific evidence for doing so. We can begin by looking at one of the fundamental frameworks practitioners use when implementing evidence-based research: the PICO process.

I should begin by clarifying that I am not saying bias and malicious intent cannot happen within a research agenda. If your mind suddenly jumped to that, congratulations, you did not understand one of the main points of the first paragraph. We will cleanse your mind of these leaps of logic in time. As mentioned in another blog post, there are real threats: publication bias, reporting bias, misconduct such as sham peer review, the "Why most published research is false" problem, industry funding, and general skepticism about scholarly peer review. However, these are typically the first things people assume about any research before assessing the validity and process of the research itself. It is usually assumed that, if a conclusion is not to your liking, it must be due to one of these issues; no further inquiry needed, and no need to justify your position. Let us assume, for simplicity, that none of these problems exist with the research article you hold in your hand. Do you know how to evaluate it?

Let's begin by unpacking what is meant by this "PICO process". PICO is an acronym, and it can be used across all evidence-based disciplines. Contrary to what some may think, systematic frameworks are needed to synthesize, understand, interpret, and give meaning to evidence.

  • P refers to your patient or population. This can generalize to "what your unit of study is" or "what the features of your sample are". What population do you want to generalize about?
  • I stands for Intervention. We have talked about this before; it is a crucial determinant for understanding cause and effect. What are you giving your patient? What is the specific unemployed worker training program you wish to implement? You must intervene on a system in order to understand how the effects propagate throughout. What happens if we choose another intervention? Do the results look the same or different? Was the intervention applied to everyone in the study? Who was left out? Did certain participants receive more or less than others? Did anyone drop out before the intervention was complete?
  • C stands for comparison, which is closely related to the notion of intervention. An intervention is normally compared to the status quo. What is the effect of some drug as opposed to no drug? You can also compare the effects of two drugs. What are the effects of a drug at dose level A compared to dose level B? This generalizes to the policy domain as well. Questions of unemployment do not need to be politicized; they need to be rigorous and evidence based.
  • O stands for outcomes. What are you even trying to do? Do you wish to stress test an existing system to find its operational bounds? Do you want to alleviate the effects of some third factor C? Normally, in the medical context, you are seeking to cure some latent disease. You can measure the outcome by testing whether the disease is present, or by tracking whether symptoms are alleviated. Just remember that, ultimately, it is better to identify a root cause.

There are other variations such as PICOTTS. The first T asks what kind of clinical question we are even asking. Is it a diagnosis? Therapy? Prevention? This obviously generalizes to other evidence domains: diagnosis normally connotes a medical context, but "diagnosing" a problem is another way to say "identifying and classifying" whatever the problem is. The second T stands for time. This can be important if the duration of data collection is a key factor. Are you estimating long-run effects of an intervention? Did your patients die before the result horizon was observed? Can you follow up and remeasure whatever the experimental units were? Lastly, S refers to the study type, which points to the hierarchy of evidence.
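To make the acronym concrete, here is a minimal sketch (in Python, with entirely hypothetical field values) of how a PICO(TTS)-style question can be written down as a structured object before you ever open a study. Nothing about the class or the example values comes from a specific source; it is just one way to force yourself to fill in every slot.

```python
from dataclasses import dataclass

@dataclass
class PicoQuestion:
    """Illustrative container for a PICO(TTS)-style research question.
    Field names follow the acronym; all example values below are hypothetical."""
    population: str          # P: who or what is being studied
    intervention: str        # I: the treatment, program, or exposure applied
    comparison: str          # C: the alternative the intervention is judged against
    outcome: str             # O: what is measured to decide whether it worked
    question_type: str = ""  # T: diagnosis, therapy, prevention, prognosis, ...
    time_horizon: str = ""   # T: follow-up period over which the outcome is observed
    study_type: str = ""     # S: where the design sits in the hierarchy of evidence

# A hypothetical worked example for a policy question:
q = PicoQuestion(
    population="Unemployed workers aged 25-54 in one metro area",
    intervention="Six-month vocational retraining program",
    comparison="Standard job-search assistance only",
    outcome="Re-employment within 12 months of enrollment",
    question_type="therapy/effectiveness",
    time_horizon="12-month follow-up",
    study_type="randomized controlled trial",
)
print(q)
```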

Why should you care about anything I have just written? Well, PICO gives you a framework for assessing the quality of research. It allows you to frame your critical questions effectively. How far does a particular design deviate from this standard? Was there a legitimate reason to deviate? Are there alternative frameworks to use for assessment? Do we have good reason to believe this is a comprehensive starting point? Feel the questions seeping out of you. PICO is a generalizable scheme, and a scheme implies structure, which allows you to expand your questioning beyond the simple "they are biased or stupid" charge.

My favorite thing to do is ask background and foreground questions. The first thing you can do is ask whether the researchers asked any of these themselves. If they did, were the questions relevant and comprehensive? This can save you a lot of time before you begin formulating your own. You should also ask yourself whether you have sufficient background knowledge before approaching something; consult a database, not Twitter. Foreground questions are more specific: What is your identification strategy? Are there any unintended consequences of implementing the intervention that may harm the participants? Is the outcome being measured a reliable indicator of the underlying concept? Is there construct validity? How is the data being collected?

A related procedure for evidence evaluation is to ask whether the research was reported according to CONSORT standards. Some of the questions asked above can be answered if researchers follow the checklist and flow diagram proposed by this body of experts in clinical trial methodology when publishing their findings. If the reporting of research is not standardized in some way, it becomes harder for the critical thinker to assess conflicting pieces of research. CONSORT can be used as a framework for assessing research, but be aware that it may fall short somewhere, somehow. Perhaps you can be the inquisitive scholar who finds problems in these standards rather than simply calling them "corrupt". Think about the first node in the flow diagram. Who is eligible to be in the study? Do the conclusions generalize beyond this population? What were the criteria for inclusion? How many people declined to participate or dropped out of the study? These questions give you a sense of the generalizability of the study; they probe the details of internal validity and external validity. If you have an incredibly small sample of homogeneous individuals, how well can the results generalize to people with different attributes? Does the composition of the sample imply any selection bias? Was the intervention applied in a way that could introduce bias? Was there a statistical bias in how the treatment was allocated? What data was left out of the analysis, and for what reason?
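To make the flow-diagram idea concrete, here is a small sketch with invented counts (not from any real trial) showing the kind of bookkeeping CONSORT-style reporting encourages, and why differential dropout between arms is worth flagging.

```python
# Hypothetical CONSORT-style flow numbers for a two-arm trial; the counts are
# invented purely to show the bookkeeping the flow diagram demands.
flow = {
    "assessed_for_eligibility": 1200,
    "excluded": 400,  # did not meet criteria, declined, other reasons
    "randomized": 800,
    "arms": {
        "treatment": {"allocated": 400, "received": 390, "lost_to_follow_up": 35, "analyzed": 355},
        "control":   {"allocated": 400, "received": 400, "lost_to_follow_up": 10, "analyzed": 390},
    },
}

print(f"Enrollment rate: {flow['randomized'] / flow['assessed_for_eligibility']:.0%}")
for name, arm in flow["arms"].items():
    attrition = 1 - arm["analyzed"] / arm["allocated"]
    print(f"{name}: attrition {attrition:.0%}, "
          f"did not receive allocated intervention: {arm['allocated'] - arm['received']}")
# Differential attrition (35 vs 10 lost to follow-up) is exactly the kind of
# asymmetry the critical questions above are probing for.
```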

Reporting Bias as a Generalized Concept

Recently I was watching a video titled "The Challenges of Evidence Based Medicine" and a follow-up video. If you are not familiar with the concept, check out the BMJ Evidence-Based Medicine website. In short, the practice seeks to standardize, and make rigorous, the use of evidence as a foundation for medical research. It includes guidelines about levels of evidence (a hierarchy grading the reliability of different evidence types), as well as evaluation methods, statistical foundations, and meta-analysis methodology. NCBI also has a nice article describing EBM. Evidence-based medicine is subsumed by the broader notion of "evidence-based practice", which includes other topics such as Evidence Based Policy. Susan Haack is a philosopher who writes about this and about evidence more broadly: What is evidence? What makes evidence relevant? Is there a generalized conception of evidence that is trans-disciplinary (a question tackled more directly by people such as David Schum)? If you are interested in what evidence really is, consult these resources. These are the topics I am broadly interested in with respect to evidence.

I bring all of this up to provide a bit of context. This is a critical thinking blog, so some questions might be: What was your motivation for writing something about reporting bias? How does this fit into the larger discussion of bias in research? Where can I find references to your sources? This is the whole point of critical thinking: to be a critical thinker is to pose non-trivial questions that give you deeper insight into whatever it is you are engaged with. Critical thinking is a directed process aimed at achieving multiple objectives such as analysis, synthesis, critique, depth, breadth, scope, and more. Reporting bias makes little sense unless it is embedded in a context broader than this blog post. If you understand what it means at a deeper level, you can capitalize on that knowledge and get a better grasp of the scope and applicability of the concept. That is the core idea: generalizing the mental model will help you in your journey.

Reporting bias technically refers to the way the results of a quantitative study of some intervention can be misrepresented in a systematic way. Research tends to be reported in a highly selective manner. This is related to confirmation bias and might just be a subspecies or instantiation of it. What I mean is that confirmation bias is about how we tend to favor evidence that supports our presumptions, predispositions, or beliefs we are emotionally tied to. The job of the critical thinker is not to dismiss all research on the grounds that it is subject to confirmation bias. Ironically, calling results on a topic you disagree with "confirmation bias" could itself be an example of confirmation bias. You should have a reason to suspect something is subject to confirmation bias rather than claiming it is biased purely because the results dismiss your desired conclusion. Most scientific research is confirmatory anyway; the fact that I want to prove that X is the case is not ipso facto evidence of bias. If an investigator is determining who murdered a victim, we would not claim that confirmatory DNA evidence is subject to confirmation bias. The job of a critical thinker is to move beyond this sort of trivial examination of research.

When a researcher is engaged in research, it is possible that they were driven by a motivation to prove whatever their presumptions were a priori. This does not mean that all research is driven by irrational impulses rather than objective inquiry; it just means that after research is conducted, we need to critically evaluate the results to rule out the possibility that they are inflated by some agenda. Confirmation can be contrasted with disconfirmation and also with null results. The heart of reporting bias is this: Did the researchers leave out an analysis that disconfirmed their initial hypothesis? Was there a substantial number of analyses that yielded null results, with not enough evidence in favor of any hypothesis? We need to think not only about confirmatory evidence, but also about whether there was a significant amount of disconfirmatory evidence favoring another hypothesis. Evidence analysis needs to be holistic. If there is a high-quality research paper that disconfirms your presupposition, sorry, you need to reevaluate. This is the problem with empirical research; it is messy and imprecise, and that bothers people. Reporting bias can also be less about the quality of a study. There are plenty of measures for quality, such as internal validity, external validity, construct validity, and criterion validity, to name a few. You can simply assume the internals of the research meet some criterion and still assess whether the report of the entire inquiry reflects what actually happened. Reporting bias is multidimensional in this sense.

The idea of disconfirmation is very important. We can think of it in terms of the Raven Paradox, although this is actually slightly different. Disconfirming evidence favors alternative hypotheses; it diminishes the acceptance of the hypothesis under discussion. Why is this relevant? A bit of intellectual humility will help. Suppose you are on trial, accused of something like robbing a bank. A person vaguely resembling you in the security footage adds credence to the hypothesis that it was you, but a receipt for a purchase made at the time of the crime in another location disconfirms the original hypothesis. Generalize this to all domains of your life. Rebirth of Reason provides a nice article outlining the concept of disconfirming evidence, and FS Blog has a nice article as well. In the context of statistical research, reporting bias might occur if we dismiss a reproducible randomized experiment, or the results of one of our own trials, that runs contrary to our hypothesis. Should scientific research be like prosecution and defense in a criminal trial? I would argue no.

To get a better definition of what reporting bias is, you can look at some official sources: NCBI's Causes of Reporting Bias, the Wikipedia entry on reporting bias, the BMJ's "Outcome reporting bias in trials: a methodological approach for assessment and adjustment in systematic reviews", and a paper outlining "Tools for assessing risk of reporting biases in studies and syntheses of studies: a systematic review", among others. I will give a brief outline that alludes to everything above. If you follow the links you will notice that reporting bias refers to a family of biases. Broadly, it refers to selective presentation, suppression of critical contrary evidence, citing specific sources and not others, favoring certain outcomes when presenting results (you can repeat a trial many times and by pure chance find something significant), and omitting "outliers" or other subsets of the data. Earlier I mentioned that reporting bias tends to concern the post-study aspects of a trial, but it can also happen pre-study. You can imagine alternative study designs that were deliberately disregarded because they would favor an alternative outcome. Think of it as another way to work backwards, in an ad hoc way, to fit data to existing assumptions. It is similar to constructing a narrative to prove a worldview. Someone can have an outcome in mind, select data and an experimental design to confirm the outcome, and then suppress contrary results or ignore literature that disconfirms it. Or think of an example in which the effect size of your study is comparatively small, but you keep repeating the study until you get a large effect size and only report the favorable instance; it is probably going to get more recognition than the small effect sizes. Or how about potential second-order effects, negative externalities, or impacts that are not discussed at all?
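A toy simulation can make the "repeat until significant, report only the winner" mechanism vivid. The sketch below assumes a small true effect and a crude significance filter; the numbers are arbitrary, and the point is only that the reported subset overstates the effect even when every individual run is honest.

```python
import random
import statistics

# Minimal sketch of selective reporting: a true but small effect is studied
# many times, and only "statistically significant" runs get written up.
# The numbers and the crude z-test are illustrative, not a model of any real trial.
random.seed(0)
TRUE_EFFECT, SD, N, RUNS = 0.1, 1.0, 30, 500

all_estimates, reported = [], []
for _ in range(RUNS):
    treat = [random.gauss(TRUE_EFFECT, SD) for _ in range(N)]
    ctrl = [random.gauss(0.0, SD) for _ in range(N)]
    diff = statistics.mean(treat) - statistics.mean(ctrl)
    se = (statistics.variance(treat) / N + statistics.variance(ctrl) / N) ** 0.5
    all_estimates.append(diff)
    if diff / se > 1.96:  # only "positive and significant" runs get reported
        reported.append(diff)

print(f"true effect:                  {TRUE_EFFECT}")
print(f"mean effect over all runs:    {statistics.mean(all_estimates):.3f}")
print(f"mean effect in reported runs: {statistics.mean(reported):.3f} "
      f"({len(reported)} of {RUNS} runs)")
# The reported subset systematically overstates the effect, even with honest data.
```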

All of this talk assumes a certain level of quality associated with the evidence. Implicit in this article remain the questions: What counts as evidence? Why does one piece count as stronger than another? What is the probative force of various pieces of evidence? Does a certain degree of accumulation have to occur before we accept a body of evidence as pointing in favor of a specific proposition or hypothesis? The hierarchy of evidence can help, but note that it is an ideal and is difficult to achieve in practice. All of these questions are related to the work done by legal scholars such as John Henry Wigmore and his successors like William Twining, and by people interested in Bayesianism and belief revision. My editorial note: there are more ways to be wrong or led astray than to be right.

One of the most illuminating parts of the video was the discussion of the Janus phenomenon. This came up in the context of nutrition studies, but you can generalize the concept to anything you like. In essence, the meta-analysis identified an instance in which there were many studies showing negative effects of some diet and many studies showing positive effects; the resulting p-value distribution was bimodal. This is an indication of selective reporting. Janus was a Roman god with two faces. The Janus effect refers to how you can select whichever part of the bimodal distribution fits your beliefs about the effect of the intervention in a study. Take any hot topic of the moment and subject it to a level of rigor that accounts for these biases: Is a vegan diet more effective than a carnivore diet? Maybe there is something about our need to fit into little groups so we don't feel ostracized.
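As a rough illustration (not a reconstruction of the actual meta-analysis), the following sketch assumes a near-zero true effect and publication only of results that are "clearly" positive or "clearly" negative. The surviving results form two clusters with a hole in the middle, and either cluster can be cited depending on the conclusion you prefer.

```python
import random
from collections import Counter

# Toy illustration of a Janus-like pattern: the true effect of a diet is
# essentially zero, but studies only reach publication when their z-statistic
# is comfortably far from zero, in whichever direction the authors favored.
random.seed(1)
published = []
for _ in range(2000):
    z = random.gauss(0, 1)      # study result under a near-null true effect
    if abs(z) > 1.96:           # selective publication of "clear" findings
        published.append(z)

# Crude text histogram of the published z-statistics
bins = Counter(round(z) for z in published)
for b in sorted(bins):
    print(f"z ~ {b:+d}: {'#' * bins[b]}")
# Two clusters, one negative and one positive, with a hole around zero:
# either face of Janus can be cited.
```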

You can think of reporting bias in an even broader sense. A report is simply a transmission of information to a target audience. Can you think of ways people in your personal life may embellish or misrepresent whatever it is they are trying to persuade you of? Can you think of information they may be leaving out inadvertently? Can you think of ways they may have selectively procured the information needed to persuade you? Reporting bias becomes even more magnified when certain results are cherry-picked by media outlets and presented as conclusive. A critical thinker should consult primary sources as much as possible; in this context that means consulting the scientific databases, the raw data, and the research design. If you are a manager and your subordinate presents an analysis, you wouldn't blindly accept it, unless of course there is a financial incentive to do so. Think even more broadly: Can reporting bias manifest in cases of witness testimony? How about your favorite podcast? Are the speakers qualifying their statements? This pervades our lives and neighbors concepts related to self-interested or deceptive reporting. Saying that "the facts are on your side" or that "you have evidence" is simply insufficient if ancillary critical questions have not been answered. I will list a few below:

  1. What is it the researchers are attempting to show?
  2. How are they showing it? What are the design implications?
  3. Are definitions explicit and operational? Do you agree with how the target is measured?
  4. How was the data procured and sampled? Was the full dataset analyzed?
  5. Are there alternative ways of analyzing it that lead to different results?
  6. Is the current way of analyzing it robust?
  7. Was the literature selectively represented?
  8. Was the study replicable? Is the data available to a third party researcher who can perform the study?
  9. Are the results overstated? Was it an observational study but causal effects were reported?
  10. Are your results resistant to refutation?
  11. Is your evidence anecdotal? Have you made sure you are not falling prey to Regression to the mean? (See the sketch after this list.) What were the controls and covariates?
  12. Is there another study, not directly related to the one under discussion, that contradicts the results, renders them null, or casts doubt on them?
  13. Are there any assumptions underlying the design and sample selection? If so how do they impact the results?
  14. Are there any interaction effects (for example, between the intervention and subgroups of participants)?
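For question 11, a minimal regression-to-the-mean sketch may help. The setup is hypothetical: patients are "enrolled" because a noisy first measurement was extreme, and a second measurement looks better with no treatment at all. The thresholds and noise levels are invented.

```python
import random
import statistics

# Minimal regression-to-the-mean sketch (see question 11 above).
# Patients are enrolled because a first, noisy measurement was extreme;
# a second measurement looks "improved" even though nothing was done.
random.seed(2)
true_levels = [random.gauss(100, 10) for _ in range(10_000)]   # stable true values
first = [t + random.gauss(0, 10) for t in true_levels]         # noisy measurement 1
second = [t + random.gauss(0, 10) for t in true_levels]        # noisy measurement 2

enrolled = [i for i, x in enumerate(first) if x > 120]         # selected for being extreme
print(f"mean at enrollment: {statistics.mean(first[i] for i in enrolled):.1f}")
print(f"mean at follow-up:  {statistics.mean(second[i] for i in enrolled):.1f}")
# The apparent "improvement" is pure selection plus noise, which is why
# uncontrolled before/after anecdotes are so unconvincing.
```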

The point is to think critically: do not accept things blindly, but also do not fall prey to uncritical tendencies such as "they are biased and self-interested but we are not", "they are subject to confirmation bias", "they are fallacious", and so on. It is important to realize that all of these also apply to your own stance, and calling someone out on bias or fallacy without evidence is simply a rhetorical tactic used to shut down opposition rather than converge on a solution.

A related topic is the Replication Crisis. In short, this refers to the fact that many papers, especially in the social sciences, cannot be reproduced; they are one-off analyses that simply do not generalize. One side effect is that news agencies will gladly report on any of these studies if it means higher ratings. As a consequence, people either start to believe weird things because they think "it is backed by science", or they come to distrust the whole scientific enterprise. Both are ridiculous; do not be one of these people. Avoid extremes, apply critical thinking, remain humble and inquisitive. There is also deeper context, and there are counterarguments to the claim that it is a crisis at all.

Experimental Design: Ecological Study

Previously I wrote about reporting bias and alluded to the idea that critical thinkers should be aware of how studies are conducted if they are to assess the severity of the problem in a particular instance. I want to start by describing the ecological study design and the corresponding ecological fallacy. Before that, we should ask what the purpose of an experimental "design" is. What are researchers trying to achieve when they conduct a study?

The answer might seem self-evident. Research questions come up when we want to describe a phenomenon or understand associations between variables. The most useful information we can gain is about causation. Does smoking cause cancer? Does administering penicillin reduce the severity of an infection? Does a worker retraining program lead to growth in employment? Will practicing a certain way result in better performance? These questions come from different domains: health, economics, and sports. The common theme among them is causality: how can we intervene in a particular way to create an intended outcome? Notice how we use different words to describe the same notion: reduce, lead to, result in, cause, and so on. Sometimes it is not obvious that what we wish to understand is something about causation. With causal knowledge, we can know what to manipulate in order to achieve an intended goal, or understand how components of a system interact. This is one of the fundamental forms of inquiry in scientific research. The notion of a research design allows us to extract cause-and-effect relationships from empirical data.

If you do not intervene on a process, there will still be an outcome. What if you are interested in knowing what the outcome would have been under alternative scenarios? This is the core idea of counterfactual thinking, and it is central to the notion of an experiment. For example, can I expect a greater risk of heart disease under one set of conditions versus another? Suppose I eat vegetables and do not contract heart disease. Did the vegetables cause the absence of heart disease? What if I had eaten sugar instead? The point of the counterfactual is that at some point in time you "intervened" in your diet with one of the treatment regimes and observed the outcome. However, you cannot observe the alternative scenario in which you chose the other treatment regime; the counterfactual state of the world is not observed. You can speculate about it based on common knowledge, but that just begs the question of how that knowledge came to be in the first place. You could eat vegetables first and sugar second and measure the outcome, but how do you know there are no interaction effects when sequencing the treatments? How do you know whether sequencing the treatments leads to the optimal health outcome? We need a rigorous way of inferring what the counterfactual state would have been under alternative conditions. This is what the experiment solves for. You cannot directly observe the possible world, but you can infer it probabilistically given data generated from an experiment. Instead of speculating, we make our questions explicit, rigorous, and testable. Less Wrong has a nice article making the notion accessible.
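One way to see what the experiment "solves for" is the potential-outcomes framing: every unit has an outcome under treatment and an outcome under control, but only one of the two is ever observed. The sketch below uses made-up numbers, with random assignment standing in for the experiment, and shows the observed difference in means recovering an average effect that no single unit can reveal on its own.

```python
import random
import statistics

# Minimal potential-outcomes sketch: every unit has two potential outcomes,
# Y(treated) and Y(untreated), but only one is ever observed. Randomizing who
# gets treated lets the observed difference in means estimate the average
# effect we could never see unit-by-unit. All numbers here are made up.
random.seed(3)
units = []
for _ in range(5000):
    y_untreated = random.gauss(50, 10)
    y_treated = y_untreated + 5        # true individual effect of +5
    units.append((y_treated, y_untreated))

treated_obs, control_obs = [], []
for y1, y0 in units:
    if random.random() < 0.5:          # coin-flip assignment: the "intervention"
        treated_obs.append(y1)         # Y(untreated) for this unit is never observed
    else:
        control_obs.append(y0)         # Y(treated) for this unit is never observed

print("true average effect:       5.0")
print(f"estimated from experiment: {statistics.mean(treated_obs) - statistics.mean(control_obs):.2f}")
```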

At a very basic level, the experiment generalizes some of our basic cognitive faculties. Counterfactual thinking seems fundamental to the human condition; we do it all the time. What if the quarterback did X as opposed to Y? The idea is that an action leads to a specific instantiation of history. The alternative history is not realized, but we can imagine what it would have been. In the story in the article above, two different conditions lead to two different processes that arrive at the same conclusion. You can imagine countless other possible outcomes, and paths to those outcomes, that are more or less plausible. The experiment uses this notion formally, along with data generated in a controlled setting, to infer what the counterfactual would have looked like.

Most of the time, we uncritically operate at the association level. We hear countless reports in the news about some interesting statistical association someone identified in observational data. Association, covariance, and correlation are the starting point when we want to understand a causal relation, but beware of spurious correlations: you may get out of bed every morning as the sun rises, but does the sun cause you to get out of bed? Intervention is when we begin to make changes in our environments to achieve our goals. Children do this, but many adults seem to forget about their experimental and skeptical nature.
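Here is a minimal confounding sketch, with invented variables, of how an "exposure" and an "outcome" can correlate strongly when a third factor drives both, and how the association shrinks once that third factor is (crudely) held roughly constant.

```python
import random
import statistics

def corr(xs, ys):
    """Pearson correlation computed from scratch (population formulas)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

# Toy confounding sketch: a third factor drives both "exposure" and "outcome",
# so they correlate strongly even though neither causes the other.
random.seed(4)
confounder = [random.gauss(0, 1) for _ in range(5000)]
exposure = [2 * c + random.gauss(0, 1) for c in confounder]   # caused by the confounder
outcome = [2 * c + random.gauss(0, 1) for c in confounder]    # also caused by the confounder

print(f"raw correlation(exposure, outcome): {corr(exposure, outcome):.2f}")

# Crudely "control" for the confounder by looking only at units where it is
# roughly constant; the association largely disappears.
idx = [i for i, c in enumerate(confounder) if abs(c) < 0.1]
print(f"within a narrow confounder band:    "
      f"{corr([exposure[i] for i in idx], [outcome[i] for i in idx]):.2f}")
```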

I want to begin with "Ecological Study Designs" because they are simply correlational studies at a population level. They cannot tell us anything about cause and effect, nor anything about individuals. These are observational studies, meaning the data was not generated in a controlled setting, so there are potentially many confounding variables skewing any causal conclusion; we have no scientific control over who was exposed to the intervention. Many of these studies are conducted at very high geographic levels, like a state or a country. All of them are associational, so they fall prey to the many ways in which inferring causality can be undermined. Does Y cause X, or does X cause Y? If we swap the two axes we can paint an entirely different story. Does playing video games lead to violent behavior, or do violent people tend to play more video games? Is there a third, confounding variable that causes both X and Y? See the Bradford Hill criteria for more context. From Douglas Walton, here are some critical questions you can ask whenever someone argues from cause to effect:

Critical Questions for Argument from Cause to Effect

  • CQ1: How strong is the causal generalization (if it is true at all)?
  • CQ2: Is the evidence cited (if there is any) strong enough to warrant the generalization as stated?
  • CQ3: Are there other factors that would or will interfere with or counteract the production of the effect in this case?

If someone is arguing from effect to cause (working backwards, inferring what the cause was after the fact):

  • CQ1: How satisfactory is E as an explanation of F, apart from the alternative explanations available so far in the dialogue?
  • CQ2: How much better an explanation is E than the alternative explanations available so far in the dialogue?
  • CQ3: How far has the dialogue progressed? If the dialogue is an inquiry how thorough has the investigation of the case been?
  • CQ4: Would it be better to continue the dialogue rather than drawing a conclusion at this point?

Critical Questions for Argument from Correlation to Cause

  • CQ1: Is there a positive correlation between A and B?
  • CQ2: Are there a significant number of instances of the positive correlation between A and B?
  • CQ3: Is there good evidence that the causal relationship goes from A to B and not just from B to A?
  • CQ4: Can it be ruled out that the correlation between A and B is accounted for by some third factor (a common cause) that causes both A and B?
  • CQ5: If there are any intervening variables, can it be shown that the causal relationship between A and B is indirect (mediated through other causes)?
  • CQ6: If the correlation fails to hold outside a certain range of cases, then can the limits of the range be clearly indicated?
  • CQ7: Can it be shown that the increase or change in B is not solely due to the way B is defined, or to the way entities are classified as belonging to the class of Bs, or to changing standards, over time, in the way Bs are defined or classified?

All of this comes from Chapter 5 of Argumentation Schemes. This book and author are incredible. He fundamentally impacted my critical faculties. 

By "population level" we mean we are calculating statistics over a group of people. One such metric might be "average income in California". The data is collective. This is typically found in epidemiology, economics, sociology etc. What if you want to compare two cities based on their public debt? Or their incidence of coronavirus? Each row in your dataset corresponds to a population level aggregate. Most analyses are done with cross sectional data; data taken at a single instance of time. There are however, longitudinal studies which measure the outcome over time (note that one of the main problems with these in an ecological setting is the attrition bias and the potential for a rapidly changing composition of your population). You can describe various attributes of the population by calculating statistics such as ratios and odds. You can compare them to other aggregates but remember: you are not doing a causal analysis. These types of studies will can be found in news headlines with the click-bait sounding titles like "scientists find link between diet and marital happiness"; which are very enticing to click on and share with your friends on Facebook to prove them wrong about something. You can gain some insight but remember these are not "Apples to Apples" comparisons; there is likely systematic differences between the populations being compared and of course there is no data on potential confounders. You probably see selective reporting on aggregate statistics all the time on the news. "California has a homeless problem and its a blue state, therefore it's liberal policy causing it because in Montana there is no issue". Simply put, California has a different migration history, total population, ethnic composition, geographic factors, and economic base than most other states in the country. Making comparisons on a simple aggregate is literally the antithesis of critical thinking; and yet it plagues all of our new outlets. It's as if we crave to be stupid; but not the reader of this article. You will transcend. BMJ has an article describing the basics. This is directly related to the reporting bias issue I discussed in a previous article. Many health articles are ecological studies, and many of them have conflicting results. Before you click that link and send it to your friend on Facebook, ask yourself if you are perpetuating this phenomenon that is unfortunately plaguing our society. Remember the Janus effect, remember to be humble, and remember to remain inquisitive.

The Ecological Fallacy is simple: if there is a relationship between exposure and outcome in an ecological study, the relationship at the population level does not necessarily apply at the individual level. This is essentially a statement of the Fallacy of Division (the flip side being the Fallacy of Composition). Simply put: what is true of the whole is not necessarily true of the parts. If your car is rated excellent, does it surprise you to find that the interior might be a bit sub-par? If a football team is good, is every player on the team good? Apply this thinking to group-level statistics. These are Fallacies of Illicit Transference, a fancy term for the observation that people tend to transfer properties attributed to the whole to the parts, and vice versa. The fallacy assumes that individual members of a group all have the average characteristics of the group as a whole, when in fact an association observed between variables at the group level does not necessarily hold for any given individual selected from the group.
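A small simulation, with invented regions and numbers, shows how the group-level picture can even point in the opposite direction from the individual-level one.

```python
import random
import statistics

def corr(xs, ys):
    """Pearson correlation computed from scratch (population formulas)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

# Invented data: within every region, x and y are negatively related at the
# individual level, but "richer" regions have higher averages of both, so the
# region-level averages line up positively.
random.seed(5)
within_corrs, region_means = [], []
for region_shift in [0, 5, 10, 15]:                     # four hypothetical regions
    xs = [region_shift + random.gauss(0, 1) for _ in range(500)]
    ys = [region_shift - 0.8 * (x - region_shift) + random.gauss(0, 1) for x in xs]
    within_corrs.append(corr(xs, ys))
    region_means.append((statistics.mean(xs), statistics.mean(ys)))

print("within-region correlations: ", [round(c, 2) for c in within_corrs])
print("correlation of region means:", round(corr(*zip(*region_means)), 2))
# Concluding "x is good for individuals because regions with more x do better"
# would be the ecological fallacy.
```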

With ecological studies, we have to be very careful drawing conclusions about individuals in the population and making comparisons between populations. Remember the big talk on causality: we are simply at the association stage. We are nowhere near intervention and counterfactual analysis in a controlled experimental setting.

There is a closely related phenomenon called Simpson's Paradox. It also refers to issues that arise when measurements at the population level do not apply to the disaggregated data; the Wikipedia article notes that the paradox is a type of ecological fallacy. Simpson's paradox is the fact that, when comparing two populations divided into groups, the average of some variable in the first population can be higher in every group and yet lower in the population as a whole. This can happen when the group proportions of the two populations are not comparable. The message remains: be very careful when pooling data into aggregates. See the Gender Bias case study.
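Here is a worked numeric version, using stylized counts patterned on the well-known kidney-stone example: treatment A has the higher success rate within each severity group yet the lower rate overall, because A was disproportionately given to the harder cases.

```python
# Stylized counts illustrating Simpson's paradox: treatment A wins within each
# severity group but loses overall, because A was given mostly to severe cases.
data = {
    "mild":   {"A": (81, 87),   "B": (234, 270)},   # (successes, patients)
    "severe": {"A": (192, 263), "B": (55, 80)},
}

totals = {"A": [0, 0], "B": [0, 0]}
for group, arms in data.items():
    for arm, (s, n) in arms.items():
        totals[arm][0] += s
        totals[arm][1] += n
        print(f"{group:>6} {arm}: {s}/{n} = {s / n:.0%}")
for arm, (s, n) in totals.items():
    print(f"overall {arm}: {s}/{n} = {s / n:.0%}")
# A is better in both groups (93% vs 87%, 73% vs 69%) yet worse overall (78% vs 83%).
```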

Summary: when thinking about reporting bias and critically evaluating research, remember how the structure of the design can inhibit or enable specific generalizations about the population being studied. If it is an ecological study, there can be many flaws, and even if there are no flaws we are limited in our ability to infer causation. If someone is not qualifying their claims, or is overstating the results, that could be a red flag.

Sources

  1. CASP Checklists
  2. Critical appraisal for medical and health sciences: 3. Checklists
  3. Appendix 6: Critical Appraisal Skills Programme criteria
  4. Critical Appraisal: Critical appraisal checklists
  5. Knowledge syntheses: Systematic & Scoping Reviews, and other review types
  6. Public Health Dissertation Resources
  7. Evidence Based Practice in Health
  8. Optimising the value of the critical appraisal skills programme (CASP) tool for quality appraisal in qualitative evidence synthesis
  9. EBM Questions
  10. CONSORT - CONsolidated Standards Of Reporting Trials.
  11. Glossary of Research Terminology
