
Artificial intelligence (AI) already boosts productivity for many people, but its impact could be far greater once it starts generating entirely new drugs. And those drugs will probably be commercially available by the end of the decade, the CEO of an AI drug discovery startup says.
“I would be surprised if we don’t see it over the next five to six years,” Alex Zhavoronkov, chief executive officer of Insilico Medicine, said in an interview with Bloomberg Television.
“I hope we will be the first ones – we have more than 40 programs internally – but you never know.”
Traditionally, developing a new drug takes many years and requires a massive financial investment, often involving significant risk and a high likelihood of failure.
The approval of a single new drug typically costs about $2.8 billion and takes around 12 to 15 years. That, needless to say, is also a major reason drugs are so expensive.
But now, AI models trained on extensive data sets, sophisticated mathematical models, and advanced computational algorithms are being developed in an effort to directly address these inefficiencies.
Although the US Food and Drug Administration has not yet approved an AI-generated drug for human use, several compounds developed through AI, such as treatments for fragile X syndrome and idiopathic pulmonary fibrosis, are currently being investigated in clinical trials.
Meanwhile, according to Bloomberg, Takeda Pharmaceutical is in final-stage clinical testing of a psoriasis drug selected by AI, with data expected this year.
Insilico, Zhavoronkov says, is different from other companies because it incorporates the technology “in every step from target hypothesis to drug optimization to deliver drugs ready for human trials.”
Smooth collaboration between scientists and machines could indeed shorten timelines and reduce costs. Still, experts say a number of ethical concerns could slow progress.
First, data bias – common in AI and machine learning – can lead to inaccurate outcomes and reinforce health disparities rather than reduce them.
Second, who would be responsible if or when an AI-driven medical error occurs? There is also growing concern over data privacy and algorithmic fairness.