Break it down, to build it up: an interview with John Ioannidis

I met Prof. Ioannidis in his office, a cozy room with a beautiful view of the trees on the medical school quad. “[The trees] are better in spring and summer,” he said, nodding to the comfort of his room, and thanked me for joining him for an hour on this rainy afternoon.

As a Professor of Medicine, Health Research and Policy, and Statistics at Stanford University, John Ioannidis has profoundly changed the field of meta-analysis. His paper “Why Most Published Research Findings Are False” (1), which has garnered more than 2.7 million readers, offers a mathematical model for the positive predictive value (PPV), the probability that a research finding claimed to be statistically significant is actually true.
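
For readers curious about the math: in its simplest form, before the paper adds corrections for bias and for multiple teams testing the same question, the model expresses PPV in terms of the pre-study odds R that a tested relationship is true, the type I error rate α, and the type II error rate β:

$$\mathrm{PPV} = \frac{(1-\beta)\,R}{R - \beta R + \alpha}$$

Plugging in illustrative values (my own choice here, not the paper's), say α = 0.05, β = 0.20 (80% power), and pre-study odds R = 0.1, gives PPV ≈ 0.08 / 0.13 ≈ 0.62, so even under these fairly favorable assumptions, roughly four in ten “significant” findings would be false.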

At the end of the paper, Prof. Ioannidis concludes that “most research findings are false for most research designs and for most fields,” a profound conclusion for an area where “truth” is valued above all else. Beyond this startling claim, the paper explores its specific subtopics in a logical sequence rather than through the usual sections of a scientific paper, such as a “methods” or “discussion” section. As Prof. Ioannidis told me, his career as a novelist enables him to explore innovative writing styles, some of which he applies to his scientific writing.

Professor Ioannidis is teaching a first-year IntroSem, “Scientific Method and Bias,” where I had the chance to meet this inspiring figure. In his class we read Kuhn on paradigm shifts, discussed how well p-values evaluate research, and debated the peer-review process.

The whole experience was fascinating yet deeply disturbing. We saw the power and authority of science shatter under our scrutiny, raising the possibility of an invasion of “alternative truth.” However, a more thorough understanding of Ioannidis’ ideas shows how this questioning of science’s credibility will ultimately yield more “truth” than there was: it is a breaking-down-to-build-up process. We break the traditional research methods into pieces, and after scrutinizing and comparing them, we keep only the highest-quality pieces to constitute a new science, a modern approach to truth, which, with meticulous methods, replication, registration, data transparency, and peer review, will be more convincing and rigorous than ever.

Here is the interview with Professor Ioannidis: the questions are mine and the rest of the words are his.

I know you are also a writer, so I want to ask: is there anything between literature and science that connects them, so that each inspires the other?

One common denominator is that they both stem from humans: they are both imperfect, and they both have opportunities for greatness and opportunities for errors. Another commonality is that they both, eventually, try to defend humanity, because knowledge from science can help protect the dignity of man by improving his/her life, knowledge, and status, and I think that art and literature can do the same, protecting the dignity of man on perhaps a different frontier, so I do see some commonalities [in these aspects]. Science also has writing components in it. Even though there is a very strong quantitative aspect, there is also a need to express [quantitative findings] in language, to defend that science writing against peer reviewers and critics, and to disseminate these hypotheses and viewpoints. So training and involvement in literature clearly spill over into how you do that in science.

At the same time, literature has inspired some other ways of writing or expressing yourself that would probably be a bit more difficult or unusual to adopt in science. In science, for much of our writing, we do follow some strict rules; most papers have a very clear setup, like introduction, methods, results, discussion. Art can also have rules or forms, but it may give you more liberty to experiment as well, again experimenting with (literary) quality in mind. Some of that experimentation may then spill over into your scientific writing if done well. Some people have questioned or wondered about some of my popular papers being a bit atypical in their structure; “Why Most Published Research Findings Are False” is probably not the most typical paper in terms of the way it is structured. It has an introduction [like traditional papers], but it also has questions and modeling and corollaries, so it’s a different stylistic form, and I believe I would probably not have done that unless I had some experience and training in literature and forms of writing that allowed me to try types of form beyond the most traditional ones. So there is some communication [between science and literature].

You said that science and literature both defend humanity, but I think taking your class made me more and more dubious of the power of science -- the more I think, the more I realize how ignorant we are, as human beings and even as trained scientists.

This is why science is so important. I think that is where the really great contribution lies: we're building a wealth of information that went through some questioning, some critical scrutiny, and hopefully some of it will eventually prove more credible than other parts. So there's an iterative questioning in the process. I think that literature also has some sort of iterative questioning. There's a lot of searching in literature too, or at least some authors, including myself, like to search instead of being content with just what we know and limiting the scope of what's being presented to just what we know. I think it is fascinating that we don't know where the frontier is.


Then how does the replication crisis impact our research institutions? Does it have any impact on us -- on Stanford?

I think there are no institutional boundaries in science. Science is a communal effort, a global effort. It’s about twenty million people publishing scientific papers one way or another. And, you know, some environments, some communities, and some universities may have better standards. They may have higher levels of calibration in some of their research practices, so on average maybe they're producing higher-quality, more impactful, more useful work; but it's not a black-or-white situation, and it's not that Stanford, being the best university in the world, is therefore immune from any concerns. Actually, I would say that Stanford, being the best university in the world, is more cognizant, or should be more cognizant, of these issues and these biases and the potential of getting things wrong. [We need] to strengthen our methods and processes of how we do science, how we make sure that our methods are bulletproof, and how we sort out that something is correct and weed out errors. So I think what makes a good scientist, good research, and a good research environment is being sensitized to these issues and being methodologically very strong, and not trying to make a big splash with some story in the media but really thinking deeply about science and the challenges that it faces and trying to overcome them rather than hiding them under the carpet.

How will the reproducibility crisis impact our future research? How will it change future scientists, or people like us who want to do research in the future?

Until recently we thought, well, maybe you should wait [to start teaching the scientific method] until someone is 30 years old, which is probably not the case. You need to reveal elements of the scientific method to kids as early as possible. What we are trying to tell them are the most important things they need to base their future building of information and knowledge on. It's far more important to teach people how to judge, how to appraise, how to evaluate information, and, if they want to generate information themselves, what they need to be well trained in, what they need to be cognizant [of], what they need to be sensitized about in building experiments, studies, and designs, in collecting information, analyzing data, interpreting them, and disseminating them, so as to have as little error and as little bias as possible, and to be as clean as possible.

So it creates a different idea of what the priority is in what we want to teach people. And obviously, once you get to later stages, people training at the graduate level [or] postgraduate level, or, you know, scientists: scientists are trained for their whole life. There is no end to that. If you just say, “I learned whatever I had to learn,” within a couple of weeks you're kind of outpaced by a field that may be moving fast forward. So how do we instill that mentality of continuously updated education for scientists, focused on improving their methodological strength, so that they can catch up, keep abreast, and be the leaders in content expertise?

Other types of impacts go beyond people in training and education in science: there's the issue of how we make a better case for science to the public community. How do we convince people that science is a great thing, because it is a great thing? How do we fight anti-science claims? How do we make science more likely to succeed in solving our problems? How do we implement policy decisions? Hopefully we should have science feeding into policy. I mean, as much as we know, the best knowledge that we have should be transmuted into policy and into how we run our world.

At this point, I’m very curious: what made you think, “I want to be a scientist in the future”?

I think that I had multiple sources of stimulation, [like] a kind of science track and [the fact that] both my parents were physician-scientists. They were not just taking care of patients but also trying to do research, and I remember them writing protocols and running analyses and spending an immense number of hours doing that. When I was a kid I was considered a child prodigy: I could run very fancy calculations when I was 4 years old, and I wrote my first book when I was eight years old. I loved both the writing component and the quantitative component. So very early, as soon as I can remember myself, I recall saying, well, I want to become either a scientist or an astronaut or Zorro, and [science is] what I think kind of carried through. And there were a number of exposures; I was exposed to very smart people very early on, and some of them were working on bench research, others on clinical research, and others on methods and, you know, meta-analysis.

And for a while I was very unfocused, [so I just kept on exposing myself to all the things I wanted to do]. But gradually I felt that what attracted me the most was having as much of a big picture as possible. In science, there are two ways you could go. One is to zoom in, to very minute detail, and over-specialize in one particular area; the other is to zoom out and try to see bigger patterns, with maybe less specialization. I think zooming in is indispensable, because looking at detail is what makes a good scientist to a large extent; but at the same time, I felt it was also very interesting to try to zoom out and not just look at a single study but look at multiple studies on the same question, and then multiple studies in a given field, and then multiple studies in a given discipline, and try to get at some patterns that appear across different types of questions.

So I like those interdisciplinary perspectives, [which] is why I'm currently involved in five departments at Stanford University. And METRICS is a trans-disciplinary center that has affiliates from all schools of the university, because the questions that we try to address may be relevant to any and all fields and, you know, science at large. How do you optimize messages? How do you promote transparency? How do you replicate results? How do you improve statistics and their applications? [These questions] could be applicable to economics just as much as to medicine [and] biology [and] sociology and political science and engineering and computer science. There's a common denominator, although the exact issues may differ across these different disciplines.


Do you have some advice for undergraduate students who want to do research?

I would argue that at an early stage, as an undergraduate, I would try different things. I would try different opportunities to see what I enjoyed, what drives my curiosity or my attention, what makes me feel that I can really spend time [on it] until after midnight without even noticing that I've been working all afternoon and all night [because] I'm so occupied with [the research]. Trial and error is perfectly fine; make sure that you enjoy [the experience]. Make sure also that you have good mentors, you know, people who want to spend time with you and share their experience, their knowledge, [and] their errors. I think those are probably very big parts of the equation, and papers will come, but it's not the publications that should be the goal.

I decided that I didn't want to publish papers early in my scientific training. I had lots of people approaching me, and upon interviewing with them, they would say, “Well, you will do these little things and then you publish a paper,” and I didn't really like that. I really wanted to publish something that I felt very confident about, that I had full control of, that I knew everything that needed to be done, that I had done my very best on, and that would be a real contribution to science, a meaningful contribution. It's just very easy to write papers, so I had to say “no” to lots of possibilities very early in my career, until I felt confident that I had something to say, something to contribute, and that I was fairly well trained to be able to contribute to the dialogue in a meaningful way.

Thank you for sharing that experience with me. Would you also be comfortable sharing some mistakes and failures from your career?

I mention errors in almost all my talks! For example, my very first effort to write a paper was an 84-page manuscript that would probably have taken 20 pages to publish. I thought that I had come up with some very complex pathophysiology model that would link together diabetes and HIV infection and other viral infections, so I wrote that monster of a hypothesis and pathophysiology and modeling, and I gave it to my mentor, who was the physician-in-chief at Harvard at that time. He should have thrown it away, but he gave comments, and eventually I [turned the original manuscript into a one-page letter to the editor].

With some errors, you cannot easily tell. On several occasions I found myself doing some very good studies, but when we tried to replicate them, sometimes it didn't work, and you just had to go back to the drawing board to see: why was that? What went wrong? Was it the original study or the subsequent one?

I remember one occasion when we had a fantastic team of about 20 top scientists: we had top methodologists on the team, and the crème de la crème of both Parkinson's disease [research] and genetics, and we published a paper saying that a certain gene is associated with Parkinson's disease. Within two years, another team of equally established and wonderful scientists, with an even larger sample size than our study, said that the gene is not a Parkinson's disease susceptibility gene. Then we did another study, and it showed that yes, there is some susceptibility effect; then there was an even larger study saying no, there is not; then there were some other studies showing maybe there is; and the latest update is that it's not clear.

So sometimes you do your very best and you still end up in a pretty uncertain situation, where you don't know whether what you found is correct or wrong, or whether it could be erroneous. I think that, personally, I'm happier to detect an error than not to detect it, because when you can detect it, it means you have found a way to move forward, to try something that would obviate, overcome, or bypass that possibility in your next study, your next experiment, your next analysis. If you think that everything is fine, then yes, maybe everything is fine, or maybe there are errors going undetected.

Thank you very much. What will be your next step? Can you tell us about your future research?

I’m very unfocused. I'm working on many, many different things at the moment. [Choosing] is a little bit like saying which one of your kids you love the most, and I'm working with lots of very talented people who are wonderful for me to learn from: [from] students all the way to doctoral students, postdocs, [and] junior scientists. And as I mentioned earlier, many of these themes of research practices and applications of the research method are applicable or potentially applicable to lots of very different disciplines, so I try to remain open to learning about things that I don't know yet and incorporating them into what I'm already doing. Sometimes you realize that this can give you solutions to problems that were intractable in the past.

Do you have some advice on scientific writing for the general public? Like for Probe magazine?

That's not easy. We assume that our colleagues know what we are talking about, but sometimes they don't. When you try to talk to someone with no scientific training at all, it may sound like noise. There are ways we can try to simplify things, but this can make things worse, because we remove some of the key parts of scientific thinking: the scrutiny, the healthy-skeptical approach, and the quantitative approach. And I think this is something that gets removed in the conversion of science [into] popular science, or science for the general public. So many stories in the media about science have no numbers at all. Good science is all about numbers -- well, there's qualitative research, but 95% of it has a lot of numbers. And most people say, well, people don't want to hear about numbers because they're [too technical]. I don't believe that, because if you go to the sports page, [there's] a fancy statistic about every single game of hockey or football or soccer or basketball. You have statistics about the positioning of players on the court and the spatial coefficient or whatever you can desire in terms of statistics, and then you go to the science page and there's not a single number there. So how do people [understand] all that quantitative information when it comes to sports, and then you say, well, for science I need to remove the numbers? I feel that we should not try to undress science when it is conveyed to the public, [but] try to convey it with its real strengths: quantitative [rigor], healthy skepticism, and careful scrutiny.

Footnotes:

(1) Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLoS Med 2(8): e124. https://doi.org/10.1371/journal.pmed.0020124