Unprecedented 100% of First 14 Patients with Cancer Respond to Dostarlimab (ascopost.com)
76 points by zw123456 on Aug 10, 2022 | 25 comments


For cases like this, exponentially scaled studies should be allowed.

Current practice is to enroll 14 participants (plus some controls) and wait many months for the results to roll in. If the treatment is effective, that delay lets people not on the treatment die unnecessarily.

A better approach would be a rolling one: every time a potential participant is found, they are allowed to join the study if the results from all other participants so far show that the treatment is statistically beneficial, minus a risk budget.

So if a treatment does really well like this, then it will quickly expand to a large number of participants. There is no need to wait for results - as soon as the current cohort of participants survives a few extra days, or as soon as someone in the control group dies, you can enroll a few more participants.

And if a treatment is marginal, then the group size will only expand very slowly, or not at all, as more and more data is collected from existing participants. If the data starts to show a detrimental effect of the same size or larger than the risk budget, then the study can take on no new participants, and if the collected data is not enough to approve the treatment then the result is a failure.
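
A minimal sketch of that rolling rule in Python, under assumptions of my own: a binary survival endpoint, a normal approximation to the risk difference, and a made-up batch size and risk budget (nothing here comes from a real protocol):

    import math

    def allowed_new_enrollments(treated_deaths, treated_n,
                                control_deaths, control_n,
                                risk_budget=0.05, batch=5):
        """Hypothetical rolling rule: admit another batch of participants
        only while the observed benefit, minus a risk budget, still
        favors the treatment."""
        if treated_n == 0 or control_n == 0:
            return batch  # no data yet: seed the study with a small cohort
        p_t = treated_deaths / treated_n
        p_c = control_deaths / control_n
        # Normal-approximation standard error of the risk difference.
        se = math.sqrt(p_t * (1 - p_t) / treated_n +
                       p_c * (1 - p_c) / control_n)
        # Pessimistic (lower) edge of a rough 95% interval on the benefit.
        benefit_lower = (p_c - p_t) - 1.96 * se
        return batch if benefit_lower > -risk_budget else 0

    # 1 death among 14 treated vs 6 among 14 controls -> keep enrolling.
    print(allowed_new_enrollments(1, 14, 6, 14))  # 5

A marginal treatment would hover near the threshold and expand slowly, exactly as described above.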


This sounds good, but there are a lot of practical problems:

1. How do you account for multiple testing? To show that the treatment actually works, you need a statistically significant difference between the control group and the experimental group. If you start ten small studies and expand them based on early results, you are likely to get more false positives (type I error) - see the simulation sketch after this list.

2. In proper randomized trials, the statisticians, imaging analysts, and sometimes even the doctors are blinded to the treatment and outcome until the study has ended. Blinding is especially important for earlier studies, where surrogate endpoints for survival (response to treatment, in this study) are used to get quicker results. These measures can be subjective, and thus prone to bias if the studies are not properly blinded.

3. Earlier stage studies are used to estimate the treatment effect, upon which power calculations are based to determine the size of the study population in Phase III clinical trials. This strategy would bias the treatment effect towards a benefit of the drug (by selecting those studies that show a preferential effect).

4. Despite all these objections, note that the results presented are actually interim results of a study that intends to enroll 30 patients. Usually the speed of enrollment of patients is limited by logistics (Finding the right patients to include and getting informed consent from them) rather than the sequential nature of studies.
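To make point 1 concrete, here is a toy simulation (all thresholds arbitrary): run ten small studies of a useless treatment, expand only the ones that look promising early, and count how often anything comes out "significant":

    import random
    import statistics

    def familywise_false_positive_rate(n_studies=10, trials=2000, seed=0):
        """Toy illustration: run several small null studies (no true
        effect), expand only the ones that look promising early, and
        count how often *any* of them ends up 'significant'."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(trials):
            significant = False
            for _ in range(n_studies):
                early = [rng.gauss(0, 1) for _ in range(10)]
                if statistics.mean(early) < 0.2:   # looks unpromising:
                    continue                       # study is not expanded
                full = early + [rng.gauss(0, 1) for _ in range(40)]
                z = statistics.mean(full) / (statistics.stdev(full) / len(full) ** 0.5)
                if z > 1.645:                      # one-sided 5% test
                    significant = True
            if significant:
                hits += 1
        return hits / trials

    print(familywise_false_positive_rate())  # several times the nominal 5%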

I recommend that you read the 'Statistical analysis' section of the published paper (freely available here: https://www.nejm.org/doi/10.1056/NEJMoa2201445), and think about how you would write that section with the method you proposed.


> Multiple testing

I imagine that the 'risk budget' would be divided up somehow. So a company can decide to use its risk budget on 10 small studies (potentially of slightly different treatments) or one large study. They can mix-n-match and end one study early to reallocate the risk to another study to accelerate it, etc. Governments could allocate risk budgets to bio-tech companies or divide it per disease etc. The smaller initial risk budget of each smaller study decreases the chance of type I error, and the two effects could be designed to cancel.
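
For what it's worth, the textbook way to make "dividing up the risk budget" precise is alpha spending; the simplest version is a Bonferroni split, sketched here with invented weights:

    def split_alpha(total_alpha, weights):
        """Bonferroni-style split: each study gets a share of the overall
        type I error budget, keeping the familywise error <= total_alpha."""
        total = sum(weights)
        return [total_alpha * w / total for w in weights]

    # One 5% budget spread evenly across ten small studies...
    print(split_alpha(0.05, [1] * 10))        # each tested at alpha = 0.005
    # ...or weighted towards one especially promising study.
    print(split_alpha(0.05, [5, 1, 1, 1, 1]))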

> blinding

This approach doesn't prevent blinding. You're right that it makes it more complex. You could imagine a computer system (or independent human) that takes as input all the study results (i.e. observations seen so far per patient), unblinds them, does the predetermined statistical analysis, and outputs the number of new patients that may join the study today. That number could have a small amount of noise added to prevent reasoning like "patient number 27 got a headache, and our patient limit fell, therefore patient 27 must be in the treatment group".
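
A sketch of that gatekeeper, with every name and the noise range being hypothetical:

    import random

    def enrollment_quota(blinded_records, unblind, analyze, rng=None):
        """Hypothetical gatekeeper: the only component that ever sees
        treatment assignments. It unblinds the data internally, runs the
        pre-registered analysis, and publishes a single number: how many
        new patients may enroll today. A little noise keeps that number
        from leaking any individual patient's assignment."""
        rng = rng or random.Random()
        data = unblind(blinded_records)  # assignments visible only in here
        quota = analyze(data)            # the pre-registered decision rule
        return max(0, quota + rng.randint(-2, 2))

    # Toy demo with stand-in components (all hypothetical):
    records = [{"id": i} for i in range(28)]
    print(enrollment_quota(records,
                           unblind=lambda recs: recs,  # stub unblinding
                           analyze=lambda data: 5))    # stub: fixed quota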

> speed of enrollment of patients is limited by logistics

For the diseases that have the most human impact, this isn't the case. 50,000 people per day die of heart disease. I don't think any study will struggle to find a few hundred willing participants for a trial. For a disease with very few patients, the proposed scheme doesn't help, but also doesn't hurt.


> Risk budget

How does a smaller risk budget reduce the chance of a type I error? Also, a lot of care would have to be taken to ensure that all the smaller studies, as well as their outcomes, are pre-registered.

> Unblinding

In practice, things happen that were not foreseen in the predefined analysis plan. Still, that solution may work provided that the statistical issues were solved correctly.

> speed of enrolment

Patient groups are often highly selected, and almost all studies struggle to find sufficient participants. This leads to delays for many studies, and 1 in 10 studies is even stopped prematurely solely because it cannot find enough participants [1].

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4843106/


Regarding blinding: I was part of a clinical trial (phase III, not the drug or cancer this article relates to). It couldn't be blinded - the side effects of the trial drug were too significant compared to the standard of care alone, which was the control. It also would have been obvious during infusion, etc.

There are many different ways to conduct research like this, of course there is no single perfect methodology.


If we know the statistical course of a well-known cancer, why not give the treatment to everyone?

If we know that cancer A is fatal within an average of 6 months for 90% of patients, and our treatment has wildly different results, why wonder?


It's important to have a control group to rule out the placebo effect, in both the patients and the clinicians.

If a patient thinks they are on a treatment, they might actually end up healthier than a patient who has just been told there are no treatment options left for them...

Likewise, if a clinician is really hoping the new drug they are developing is successful, their judgement of exactly how quickly a tumour is growing might not be fully impartial.

However, there is no need for the control group and treatment group to be split 50/50. The control group can also be shared with other studies - that makes risk budget allocation more complex, but not impossible.
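
On the 50/50 point, a quick back-of-the-envelope check with a normal-approximation power formula (effect size and patient numbers invented) suggests a moderately unequal split costs only a little power:

    from math import sqrt
    from statistics import NormalDist

    def power_two_sample(effect, n_treat, n_control, alpha=0.05):
        """Approximate power of a two-sample z-test for a standardized
        effect size, with possibly unequal group sizes."""
        z_crit = NormalDist().inv_cdf(1 - alpha / 2)
        se = sqrt(1 / n_treat + 1 / n_control)
        return 1 - NormalDist().cdf(z_crit - effect / se)

    # The same 120 patients split 1:1 vs 2:1 treatment:control --
    # the unequal split costs only a few points of power.
    print(power_two_sample(0.5, 60, 60))  # ~0.78
    print(power_two_sample(0.5, 80, 40))  # ~0.73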


Clinical trials for cancer treatment rarely use placebos. Instead, the control group is the current standard of care:

https://www.cancer.net/research-and-advocacy/clinical-trials...


> If a patient thinks they are on a treatment, they might actually end up healthier than a patient who has just been told there are no treatment options left for them...

Isn't that the actual aim? To have someone heal?

> Likewise, if a clinician is really hoping the new drug they are developing is successful, their judgement of exactly how quickly a tumour is growing might not be fully impartial

Well, this is something quantifiable, I guess (a measurement). And in that case, changes must be notable in order to call something a success.


Some reasons to include a control group:

- Trials tend to be conducted at the best facilities, and they monitor their patients extremely well. Thus, control groups in a trial tend to have a better outcome than the general population.

- Patients included in trials tend to be better informed and better motivated to seek treatment.

- When trials are run in a specific location, the patient group may be different than the general population (younger/older, less/more exposure to carcinogens, etc.).

Note that all of these biases favor approval of the experimental drug. Combined with the financial interest/pressure from pharmaceutical companies to get a drug approved, leaving out a control group is a recipe for disaster.

The hypothetical case you describe, where a cancer is fatal within an average of 6 months for 90% of patients and is suddenly cured by a miracle treatment, simply does not happen (see https://www.cancerresearchuk.org/health-professional/cancer-... for some statistics), so you will always need a control group.


> The hypothetical case you describe, where a cancer is fatal within an average of 6 months for 90% of patients and is suddenly cured by a miracle treatment, simply does not happen

That was an exaggeration (in line with the title of the article) - what I meant are cases where we historically have good knowledge of the trajectory of an illness, and that history could serve as the control group.


Studies will actually be stopped early if the results are good enough that it would be unethical to continue giving people placebos. It happens, but rarely.
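
For the curious, the usual mechanism is a pre-planned interim analysis with a very strict boundary, e.g. the Haybittle-Peto rule: stop early only if an interim p-value is below 0.001, otherwise run to completion. A sketch:

    def haybittle_peto(interim_p_values, final_p, alpha=0.05):
        """Sketch of the Haybittle-Peto rule: stop at an interim look only
        for overwhelming evidence (p < 0.001); otherwise run the trial to
        completion and test at (nearly) the full alpha level."""
        for look, p in enumerate(interim_p_values, start=1):
            if p < 0.001:
                return f"stopped early at interim look {look} (p={p})"
        return "success" if final_p < alpha else "failure"

    print(haybittle_peto([0.02, 0.0004], final_p=0.03))
    # -> stopped early at interim look 2 (p=0.0004)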


I believe this happened with Gleevec


They already do this. It’s not unusual to add “treatment arms” to a study.

However, that still wouldn’t be enough. This was a single-arm, uncontrolled study: no control arm, and every patient knew they were getting the treatment, as did the doctors.

And since you need to pre-define your study design and endpoints, you’d basically need a more rigorously designed trial to actually get approval.

But all this is intentional. You start with quick, cheap studies to see if it’s worthwhile spending more.


I imagine the test protocols are heavily regulated. Getting what you describe right is within reach of experimental design... but updating regulations in a mathematically correct way is a whole new challenge. And given the current state of globalisation, an interested party would need to convince multiple government bodies, in multiple languages.

Which means we'd need an interested party with deep pockets and good motivation.


The FDA also currently has basically two statuses: "banned" and "covered by insurance".

Adding an intermediate status of "not banned, not covered" would allow things to progress faster. I'm OK with reporting requirements that let this phase act like a large unblinded test, too.


This also opens the door to eliminating stage 1/2/3 trials. Let the study leadership decide how to spread their risk budget, and to overlap them if they choose to.


They already do this, especially for cancer.

Phase 3 trials aren't always needed. Phase I/IIb are common.


"Unprecedented" is a little bit unneeded considering this is a PD-1 immunotherapy used against a mismatch repair deficient cancer, these results are not surprising. See Le et al NEJM 2015.


Referring to https://www.nejm.org/doi/full/10.1056/NEJMoa1500596 ?

Reading a follow-up study:

> Kaplan-Meier estimates of the 5-year OS [overall survival] rate were 31.9% for the pembrolizumab group and 16.3% for the chemotherapy group. Thirty-nine patients received 35 cycles (ie, approximately 2 years) of pembrolizumab, 82.1% of whom were still alive at data cutoff (approximately 5 years).

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8280089/

That's a pretty good improvement.


You are comparing the wrong numbers: 5-year OS vs. objective response. Objective response is 46% in the study you cited (Table 2); 5-year OS is nearly always lower. The patients in the new study were only followed for a median of 6.8 months, and unfortunately, the odds are high that not all of them will survive that long.


Previous discussion when this result came out at ASCO: https://news.ycombinator.com/item?id=31630679


Not sure if this means the drug is more effective than other PD-1/PD-L1 inhibitors, or if pharma companies have simply learned how to better select their clinical trial participants to optimize the outcomes...


Is this medication produced using hybridomas? The idea of using a tumor to produce antibodies that fight other tumors is something that would have been hard to predict.




