R&D TECHNOLOGIES: HYPE OR GENUINE POTENTIAL?
By NextLevel Life Sciences - August 28, 2018
Where Have Evidence-Based Practices for Digital Technologies Gone?

Drug developers have long been seeking new ways to improve R&D with digital technologies, though the search is far more visible now than in the past. As recent success stories have hit the headlines, the world has turned its attention to what these technologies can do.
Subsequently, however, a new wave of technological hype and disillusionment has overshadowed some of the successes. People are now wary of an explosion of unproven tools that promise to help R&D.
As a consequence, there is a growing call to reinstate evidence-based practices in the development of these technologies.

So, let’s talk about science for a minute, not about “hyped-up” products

These technological innovations span numerous fields, including artificial intelligence (machine learning and deep learning), blockchain, big data, complex bioinformatics, synthetic biology, computational chemistry, and digital health initiatives. By far the most hyped of these technologies is artificial intelligence.

AI is therefore a good example of how advancements in these technologies are struggling to meet the evidence-based standards of rigorous science. In this article, we will consider five crucial areas where AI needs to face the accusation of hype head-on:


Accusations:

  1. Technology as a black box.
  2. Quality and bias in the data.
  3. Integration of technology into the actual clinical setting.
  4. The replacement of current job roles in specialist medical fields.
  5. Existing standards of science.

Accusation 1: Even the creators of AI don’t fully know what’s going on inside the black box

Why does an AI program predict one drug compound’s success and not another? Often, the researchers who use algorithms to discover new molecular structures cannot readily see the reasons behind a prediction. Maybe, in a sense, that’s a good thing. “If AI technologies only predict things researchers could immediately understand, then the programs are not digging deep enough,” as Guido Lanza, CEO of Numerate, told Scrip in an interview.

Understanding the black box too easily would mean restricting the machine to a shallow exploration of biological and chemical properties and interactions. Giving the algorithm free rein is essential for AI to do its work. In turn, this means that the only real validation of the algorithm’s output will be reproducible experiments.

Reproducible experiments, however, are not always easy to conduct:

  • Sometimes an algorithm proposes compounds that are devilishly hard to synthesize in the lab. A team of chemists may need to work for many months to produce the proposed molecule, and even then the molecule may turn out to be no good. That’s a lot of effort with nothing to show for it.

Accusation 2: Poor quality and biased data fueling algorithms

Advanced analytics is only as good as the data fed into it. When the data is incomplete and the model encounters new structures dissimilar to anything in its training set, good results are hard to produce.
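
To make this concrete, here is a minimal sketch of an “applicability domain” check that flags query structures too dissimilar from anything in the training set. It assumes the RDKit library is available; the SMILES strings and the 0.3 similarity threshold are purely illustrative choices, not standards:

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fingerprint(smiles):
    """Morgan (ECFP4-style) bit fingerprint for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

def in_domain(query_smiles, training_smiles, threshold=0.3):
    """Flag a query as out-of-domain when its nearest training
    neighbour (by Tanimoto similarity) falls below the threshold.
    The 0.3 cut-off is an illustrative assumption."""
    query_fp = fingerprint(query_smiles)
    nearest = max(DataStructs.TanimotoSimilarity(query_fp, fingerprint(s))
                  for s in training_smiles)
    return nearest >= threshold

# Hypothetical check: aspirin against a tiny, made-up training set
training = ["CCO", "CC(=O)O", "c1ccccc1O"]
print(in_domain("CC(=O)Oc1ccccc1C(=O)O", training))
```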

The problem is further exacerbated when the data is biased. Selection bias, for example, is a huge problem wherever a model is trained on a dataset drawn from a population unlike the target population. It is therefore important to focus efforts on improving the datasets and the algorithms together.
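
One simple way to surface such bias before training is to compare a key covariate, say patient age, between the training cohort and the intended target population. Below is a minimal sketch using SciPy’s two-sample Kolmogorov–Smirnov test on synthetic, purely illustrative data:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)
# Illustrative synthetic covariate (e.g. patient age): the model is
# trained on one cohort but meant to serve a rather older population.
training_cohort = rng.normal(loc=45, scale=8, size=500)
target_population = rng.normal(loc=62, scale=12, size=500)

statistic, p_value = ks_2samp(training_cohort, target_population)
if p_value < 0.01:
    print(f"Covariate distributions differ (KS = {statistic:.2f}): "
          "possible selection bias; rebalance before training.")
```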

This can be done, for example, by building a synergy in which computational chemists and data scientists work side by side with bench chemists and clinicians. Only by working together can they iteratively build higher-quality datasets and tune algorithms that propose better compounds. “The idea is that after a few attempts or a few hundred attempts, suddenly, the AI becomes a reliable support to the people making medical care decisions,” says Michael Dahlweid, Chief Technology & Innovation Officer of Inselgruppe. [Source]
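
As a rough illustration of that iteration, the toy loop below sketches a design-make-test cycle: the model proposes candidates, the bench returns measurements, and the results grow the dataset for the next round. All names here (run_assays, ToySurrogate, design_make_test) are hypothetical placeholders, not any vendor’s actual API:

```python
import random

def run_assays(candidates):
    """Stand-in for the wet lab: returns (compound, measured activity)
    pairs. In reality this is the slow, expensive step the loop exists
    to spend wisely."""
    return [(c, random.random()) for c in candidates]

class ToySurrogate:
    """Placeholder for the predictive model; a real one would be
    retrained on the growing dataset after every round."""
    def propose_candidates(self, n):
        return [f"compound_{random.randrange(1000)}" for _ in range(n)]

def design_make_test(model, dataset, rounds=5, batch_size=10):
    for _ in range(rounds):
        candidates = model.propose_candidates(batch_size)  # in silico design
        dataset.extend(run_assays(candidates))             # bench validation
        # a real loop would retrain `model` on `dataset` here
    return dataset

labelled = design_make_test(ToySurrogate(), dataset=[])
print(len(labelled))  # 50 labelled points after 5 rounds of 10
```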

Preparing the data input for a computational program is thus a key strategy to minimize the “black box feel” of a technology.

Accusation 3: Data scientists are too far removed from clinical practice

Some data and computational modelers have shown a surprising lack of understanding of everyday clinical workflows and practicalities. When a technology omits crucial steps in that workflow, the whole tool fails in its purpose. Drawing on medical textbooks and publications might help bridge the disconnect, but ultimately, this method does not always capture the complex activities in a specialist’s daily workflow and practice.

In other words, much of a technology’s usefulness comes from integrating into the current standards, workflows, and software that clinicians already consider best practice and evidence-based. AI needs to get on board.

Accusation 4: The so-called “demise” of medical specialist domains

If a technology such as AI can match the performance of specialists, why do we need these domain experts anymore? Such a question oversimplifies the skills and tasks that specialists perform.

It is a matter of balance. AI can match or outperform specialists on a few of the processes they are responsible for, but other tasks require general human intelligence and judgment that machines cannot provide. “This will create a new type of decision-making process that hasn’t existed before,” explains Christopher Larkin, Vice President of Data and Analytics at GE Digital. “Routine decisions are being processed by software, so experts and executives can focus on areas that really require their analytics expertise.” [Source]

The problem is not that we have too many specialists who need replacing, but that specialists are scarce and patients increasingly demand more of their time. Technology’s role should be supportive and supplemental to the specialist’s, enabling more efficient processes and sharper focus.

Accusation 5: Touting technology without reporting standards

In certain cases, analytics tools, even sophisticated AI, need to follow reporting standards accepted by the industry and, more importantly, by regulators.

For example, studies of diagnostic accuracy, common in radiology, follow a format defined by the STARD (Standards for Reporting Diagnostic Accuracy) statement. Predictive models, likewise, follow a format defined by the TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) statement.

Clearly, if regulators are to accept AI-enabled clinical evidence, they need to be certain that the data aligns with industry R&D standard operating procedures. Evidence-based practices for incorporating advanced technology into R&D still entail these standards.