Keywords
Cost-effectiveness analysis, cancer screening, incremental-cost effectiveness ratio, prostate screening, PSA
The original version of the manuscript has been updated in light of the reviewer comments. Principally, the changes better acknowledge caveats made by the authors of the original study regarding their results. These relate to the potential relevance of multiple screening intensities and the fact that the data necessary to support an analysis of such policies are currently lacking. More generally, I have added a discussion that acknowledges the tension between pragmatic modelling of a restricted set of policy choices within the constraints of currently available data and the theoretical ideal of a complete and comprehensive comparison of strategies for multiple cohorts that is only possible with extensive data. I have also made changes to the manuscript to make the constructive intention of the critique clear, in order to address comments that the tone of the original submission was unnecessarily negative. I have also given my reply to the reviewer comments to further explain the changes made and to acknowledge the helpful direction offered by the reviewers.
See the author's detailed response to the review by Paul Carty and Conor Teljeur
Callender et al.1 recently published a cost-effectiveness analysis (CEA) of a risk-tailored approach to prostate cancer screening. I believe the study’s results are not interpreted appropriately and cannot be considered a reliable guide to prostate screening policy. This commentary explains the problems with the results, examines whether they can be usefully reinterpreted and, more generally, attempts to elucidate the issues regarding risk-group selection and the interpretation of cost-effectiveness estimates. The purpose of this commentary is, through critical examination of the Callender et al.1 study, to offer guidance to research groups conducting such modelling on how their analyses can best answer policy questions. Secondly, it aims to help readers of such research interpret published estimates.
Callender et al.’s1 analysis estimates the total net costs and quality-adjusted life-years (QALYs) of alternative screening approaches. They examine prostate-specific antigen (PSA) based testing every four years between ages 55–69. They consider this strategy when applied to all men within that age range (described as age-based screening) and alternatively, the same strategy starting only when men meet a range of alternative prostate cancer risk thresholds (described as precision screening). They consider 17 alternative risk thresholds ranging from 2% to 10% 10-year absolute risk (10y-AR) in 0.5% increments. Men can reach these thresholds at different ages. This means the men with the greatest total lifetime risk reach any given threshold at an earlier age and the proportion of men having reached any given threshold increases with age. Therefore, relaxing the risk eligibility threshold simultaneously expands the pool of screened men and lowers the age of screening initiation in those screened.
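The interaction between age, polygenic risk and the eligibility threshold can be illustrated with a short sketch. The age-specific risks and relative risks below are hypothetical placeholders, not the estimates used in Callender et al.’s1 model.

```python
# Hypothetical illustration only: placeholder risks, not the published model.
# Baseline 10-year absolute risk (10y-AR) of prostate cancer by age for a man
# of average polygenic risk.
baseline_10y_ar = {50: 0.02, 55: 0.03, 60: 0.05, 65: 0.08}

def screening_start_age(polygenic_rr, threshold):
    """Return the first age at which a man's 10y-AR reaches the threshold."""
    for age, ar in sorted(baseline_10y_ar.items()):
        if ar * polygenic_rr >= threshold:
            return age
    return None  # never becomes eligible for screening

for rr in (2.0, 1.0, 0.5):
    print(rr, screening_start_age(rr, threshold=0.05), screening_start_age(rr, threshold=0.025))
# Relaxing the threshold from 5% to 2.5% both lowers the start age for the
# higher-risk men (e.g. 50 rather than 55 for rr = 2.0) and adds the lowest
# risk man (rr = 0.5), who never reaches the 5% threshold.
```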
The study reports incremental cost-effectiveness ratios (ICERs) calculated by comparing the total costs and health effects of both age-based screening and the range of precision screening strategies to no screening. The reported ICERs range from £14,862/QALY for the most conservative risk-based strategy, which restricts screening to those with a 10y-AR of at least 10%, to £34,952/QALY for age-based screening. They report that a 10y-AR threshold of 5% yields an ICER of £19,598/QALY and note that this would be a cost-effective policy in a UK context, in which a cost-effectiveness threshold of £20,000/QALY applies. Their results are presented with caveats regarding the structure of the analytical model used and parameter uncertainty. Callender et al.’s1 analysis is a very welcome attempt at examining how prostate screening could be better targeted towards those at greater risk, thereby avoiding unnecessary harms to men at lower risk and enhancing programme cost-effectiveness.
This critique addresses three issues. The first relates to differences between the average and incremental effects of lowering the risk threshold of screening. The second concerns the failure to consider mutually exclusive intervention strategies and the implications this has for finding optimal policies for specific risk-subgroups. The third relates to the relevance of varying the intensity of screening for the estimation of ICERs.
Callender et al.’s1 cumulative assessment of the ratio of total costs to total QALYs as the risk threshold is relaxed means the analysis initially includes those men at highest risk who are likely the most cost-effective to screen and then progressively adds those of lower risk that are probably less cost-effective to screen. This cumulative approach to assessing the ratio of total costs to health effects hides the marginal effect of progressively relaxing the risk threshold to include lower risk screenees. The appropriate approach is to examine the incremental change in costs and health effects as the risk threshold is relaxed. Such an incremental analysis identifies what additional health gain is achieved at what additional cost of relaxing the risk threshold relative to the previous, more restrictive threshold.
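The distinction can be stated explicitly using notation of my own rather than the original paper’s: let $C_i$ and $E_i$ denote the total costs and QALYs of screening at the $i$-th risk threshold, with thresholds ordered from the most restrictive ($i=1$, a 10y-AR of 10%) to the least restrictive, and let the subscript 0 denote no screening. The ratios reported by Callender et al.1 take the cumulative form, whereas the marginal effect of relaxing the threshold is captured by the incremental form:

$$\mathrm{CER}^{\mathrm{cum}}_{i} = \frac{C_i - C_0}{E_i - E_0}, \qquad \mathrm{CER}^{\mathrm{inc}}_{i} = \frac{C_i - C_{i-1}}{E_i - E_{i-1}} \quad (i \geq 2),$$

with the two coinciding only for the most restrictive threshold $i=1$.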
Reinterpreting the results using an incremental approach indicates that relaxing the risk threshold is less cost-effective than appears under Callender et al.’s1 cumulative analysis. The difference between the cumulative and incremental appraisal is shown in Figure 1. It plots the estimated costs and effects of the 17 precision screening strategies from Callender et al.’s1 analysis. The least costly strategy is for the most conservative risk threshold of 10y-AR of 10%, while the most costly is for the least restrictive 2% 10y-AR threshold. The solid grey line shows the ratio of incremental costs and effects between the risk subgroups. The incremental ratios rise as the risk threshold falls and more men are screened. Beyond a certain threshold total effectiveness falls, implying that some screening becomes harmful to health. That is, it appears that reducing the risk threshold beyond a certain point harms at least some men.
The dotted line in Figure 1 corresponds to the cumulative ratio reported by Callender et al.1 as an ICER, in this case for screening all men with a 10y-AR of at least 5%, which Callender et al.1 report to be £19,598/QALY. The ratio of the incremental difference of screening men with a 10y-AR of 5.0–5.5% compared to men with a 10y-AR of 5.5% and above is £78,431/QALY. This ratio is shown in Figure 1 as the single thick black segment of the line joining the strategies. Further detail on the estimates is given in Table 1. It includes the ratios reported by Callender et al.1 as ICERs (“Reported ICERs”) and additional cost-effectiveness ratios (CERs) calculated as the incremental differences in costs and effects as the risk threshold is incrementally relaxed from a 10y-AR of 10% to 2% (“Calculated CERs”). These CERs rise from £14,881/QALY for the highest risk men to £281,553/QALY for men with a 10y-AR of 4.0–4.5%. There is no meaningful CER to report below a 10y-AR of 4% once the incremental change in the QALY estimates becomes negative.
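To make the arithmetic behind Table 1 concrete, the short sketch below computes both the cumulative ratios and the incremental CERs from per-strategy totals. The cost and QALY figures are hypothetical placeholders for illustration only; they are not the estimates published by Callender et al.1.

```python
# Illustrative sketch only: cumulative ratios (each strategy vs no screening)
# versus incremental CERs (each strategy vs the next, more restrictive
# threshold). Costs and QALYs are hypothetical placeholders, not the
# published estimates.

no_screening = {"cost": 0.0, "qaly": 0.0}  # net costs/QALYs relative to baseline

# Strategies ordered from the most restrictive threshold (10% 10y-AR) downwards.
strategies = [
    {"threshold": "10.0%", "cost": 1.00e6, "qaly": 67.0},
    {"threshold": "9.5%",  "cost": 1.20e6, "qaly": 77.0},
    {"threshold": "9.0%",  "cost": 1.45e6, "qaly": 84.0},
    # ... remaining thresholds down to 2.0% ...
]

def ratio(d_cost, d_qaly):
    # No meaningful CER once the incremental QALY change is zero or negative.
    return d_cost / d_qaly if d_qaly > 0 else float("nan")

previous = no_screening
for s in strategies:
    cumulative = ratio(s["cost"] - no_screening["cost"], s["qaly"] - no_screening["qaly"])
    incremental = ratio(s["cost"] - previous["cost"], s["qaly"] - previous["qaly"])
    print(f'{s["threshold"]}: cumulative £{cumulative:,.0f}/QALY, incremental £{incremental:,.0f}/QALY')
    previous = s
```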
The policy significance of the difference between the cumulative and incremental analysis can be seen in the context of the UK cost-effectiveness threshold of £20,000/QALY as referenced by Callender et al.1. Using the originally reported cumulative ratios, the first 11 risk thresholds would be considered cost-effective, as their reported ICERs are within the threshold (shown in italics in Table 1). Using the appropriate incrementally calculated CERs however, only the two most restrictive risk categories fall within the threshold (shown in bold). Accordingly, far fewer men appear cost-effective to screen than originally reported. Furthermore, those screened would start at an older age.
As an aside, it is useful to see how my interpretation of the results presented here is supported by the net monetary benefit (NMB) analysis provided by Callender et al.1 within a supplementary appendix to their study. The variation of NMB with the risk threshold is presented by them in Figure H (A). It shows that NMB is maximised only when the risk threshold is near its most restrictive around 9.5% to 10% 10y-AR. This contradicts Callender et al.’s1 finding that a 10y-AR of 5% would be cost-effective, as NMB should be maximised at the optimally cost-effective strategy. The observed maximisation of NMB around 9.5% to 10% 10y-AR corresponds with the optimally cost-effective policy (within those simulated) when using the incremental interpretation presented here.
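For reference, the NMB criterion invoked here can be written in standard notation (not notation used in the original paper) as

$$\mathrm{NMB}_s = \lambda E_s - C_s,$$

where $E_s$ and $C_s$ are the total QALYs and costs of strategy $s$ and $\lambda$ is the cost-effectiveness threshold (£20,000/QALY in this context); the optimally cost-effective strategy among those compared is the one that maximises $\mathrm{NMB}_s$.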
At this point, I turn to consider whether the reinterpreted CERs reported in Table 1 can be used as a reliable guide to screening policy. The above critique draws on the long and widely recognised distinction between average and incremental cost-effectiveness ratios that is reflected in CEA guidelines2–6. Note, however, that I have not described the CERs in Table 1 as ICERs. This is because there are further complications with the incremental reinterpretation of the CERs that mean they still may not be considered true ICERs and so are not a suitable guide to policy. Moreover, the published results cannot be reinterpreted into ICERs.
The standard interpretation of an ICER is the incremental comparison of costs and effects of mutually exclusive strategies that lie on the efficient frontier of the cost-effectiveness plane7. The way the alternative policy choices are specified within Callender et al.’s1 analysis means they fail to constitute mutually exclusive strategies. As mentioned above, the relaxation of the risk threshold simultaneously adds men of lower lifetime risk of disease to the pool of screened men and reduces the age at first screen for those already within the pool of screened men. Lowering the age of screening initiation in higher risk men is not mutually exclusive of extending screening to lower risk men.
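To illustrate what a frontier-based comparison of mutually exclusive strategies involves, the sketch below applies the standard textbook procedure of discarding dominated and extendedly dominated strategies before computing ICERs along the frontier. It is a generic illustration, not the procedure or code used by Callender et al.1.

```python
def efficient_frontier(strategies):
    """Identify the efficient frontier among mutually exclusive strategies.

    `strategies` is a list of (name, cost, qalys) tuples. Strategies that are
    strictly dominated (more costly, no more effective) or extendedly dominated
    (a higher ICER than a more effective alternative) are discarded, and ICERs
    are calculated between adjacent frontier strategies.
    """
    ordered = sorted(strategies, key=lambda s: (s[1], -s[2]))  # by cost, then QALYs
    frontier = [ordered[0]]
    for s in ordered[1:]:
        if s[2] <= frontier[-1][2]:
            continue  # strictly dominated: costs more but adds no QALYs
        frontier.append(s)
        # Enforce extended dominance: ICERs must increase along the frontier.
        while len(frontier) >= 3:
            icer_last = (frontier[-1][1] - frontier[-2][1]) / (frontier[-1][2] - frontier[-2][2])
            icer_prev = (frontier[-2][1] - frontier[-3][1]) / (frontier[-2][2] - frontier[-3][2])
            if icer_prev > icer_last:
                del frontier[-2]  # middle strategy is extendedly dominated
            else:
                break
    icers = [None] + [(b[1] - a[1]) / (b[2] - a[2]) for a, b in zip(frontier, frontier[1:])]
    return frontier, icers
```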
The non-mutually exclusive strategies within Callender et al.1 mean the incremental CERs reported in Table 1 correspond to a mixture of different policy choices for men of different lifetime risk. For example, reducing the risk threshold will bring forward the age of first screening for men of high lifetime risk, while it may entail a shift from no screening to one or more screens for a man of lower lifetime risk. While both men may have an equal 10y-AR at the initiation of screening, the differences in both their lifetime risk and the number of lifetime screens they receive mean the policy choices will likely be of different cost-effectiveness. There is no way to disaggregate the published results into a form that would permit policy makers to understand how large any differences in cost-effectiveness may be between men of different lifetime risk or identify what the optimal policies would be.
The implication of non-mutually exclusive strategies for policy is that although most of the incremental CERs in Table 1 are above the cost effectiveness threshold, it is not necessarily the case that all the corresponding policies are cost-ineffective. For instance, advancing the age at first screening for a man with a high lifetime risk might be less cost-effective than providing one lifetime screen at age 69 to a lower risk man. Accordingly, it would be premature to base policy on the incrementally interpreted CERs I present in Table 1.
To generate mutually exclusive policies within a single analysis, the authors should have held screening intensity constant for all but one sub-group at a time while varying intensity for the remaining sub-group. This policy generation process would have to be repeated for all sub-groups over all alternative strategies considered, resulting in a very large number of mutually-exclusive strategies. A much simpler alternative would be to model the range of screening strategies in separate analyses for each sub-group according to their lifetime risk.
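The scale of the policy-generation exercise described above can be illustrated with a short sketch. The sub-group labels and screening options below are hypothetical examples, not the strata or strategies modelled by Callender et al.1.

```python
from itertools import product

# Hypothetical sub-groups by lifetime risk and candidate screening intensities.
subgroups = ["low lifetime risk", "intermediate lifetime risk", "high lifetime risk"]
options = ["no screening", "single screen at 60", "4-yearly, ages 60-69", "4-yearly, ages 55-69"]

# A mutually exclusive policy assigns exactly one option to every sub-group,
# so the number of policies grows as len(options) ** len(subgroups).
policies = [dict(zip(subgroups, combo)) for combo in product(options, repeat=len(subgroups))]
print(len(policies))  # 4 ** 3 = 64 mutually exclusive policies to compare
```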
If Callender et al.’s1 analysis were to disaggregate men of different lifetime risks as described, then it would be possible to assess the different simulated screening strategies in each risk subgroup. That would allow the analysis to untangle the effects of intensifying screening in higher risk men from those of extending screening to lower risk men. Such disaggregation is particularly important when we consider that the reduction in the risk threshold is eventually estimated to harm health. It is important to know which men are harmed by what intensity of screening.
The third issue of ratio interpretation considered here relates to the range of alternative comparator strategies required to adequately estimate ICERs. ICERs give the ratio of the incremental difference in costs to effects between one strategy relative to the next most effective relevant comparator strategy7. For example, appropriately estimating the ICER of a given strategy would typically require comparison to a less intense screening strategy with fewer lifetime screens, that is, one with either a longer screening interval or a narrower screening age range3,7.
Previous prostate cancer screening CEAs demonstrate the relevance of incremental comparisons between alternative screening frequencies and varied screening age ranges. Heijnsdijk et al. show how ICERs rise as the number of lifetime screens increases8. While that analysis did not differentiate between risk strata, it does illustrate the relevance of including low intensity strategies as comparators to other strategies with shorter intervals and wider age ranges. They found that in an average risk population in a Dutch context the optimal strategy would be three screens per lifetime at ages 55, 57 and 59. They found no strategy with screening beyond age 59 to be cost-effective, indicating the relevance of considering alternative stopping ages in the case of prostate screening.
This context of previous research and well-established methods guidance tells us that even if Callender et al.’s1 results can be disaggregated into mutually exclusive strategies for separate sub-groups according to lifetime risk, the resulting ratios would still only represent incremental changes to the start age of screening. Ideally, we would like to estimate a range of screening intensities in each sub-group, varying not only the screening start age, but also screening interval and, importantly, the screening stop age. In particular, we would be interested in the potential for low intensity screening to offer at least some prevention to the lowest risk men.
Modelling a wide variety of screening intervals typically requires simulation of the natural history of disease and the imposition of stage-specific estimates of test performance characteristics. Not all models are such "deep" models9. Callender et al.’s1 analysis apparently is not such a model and may be restricted to simulating the quadrennial screening interval used in the trial that informed the model. While analyses limited to the simulation of a single screening interval cannot estimate a complete ICER on the basis of a comprehensive set of comparator strategies, this limitation is traded off against the advantage of less reliance on assumptions and reduced parameter uncertainty. There is no clear answer on the optimal balance in this trade-off when informing policy. Nevertheless, results from analyses without a complete set of comparators can still be useful to policy makers. CERs from analyses with a limited set of strategies that exceed the cost-effectiveness threshold can usefully rule out strategies as cost-ineffective. This is because any strategy with a CER exceeding the threshold within a limited set of comparators can never be cost-effective within a more complete analysis.
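The logic behind this last claim can be made explicit using the NMB notation introduced above; this is a standard argument rather than one given in the original paper. If strategy $A$ is compared with any less effective comparator $B$ ($E_A > E_B$) and

$$\frac{C_A - C_B}{E_A - E_B} > \lambda,$$

then rearranging gives $\lambda E_A - C_A < \lambda E_B - C_B$, i.e. $\mathrm{NMB}_A < \mathrm{NMB}_B$. Adding further comparators to the analysis cannot lower $\mathrm{NMB}_B$, so $A$ can never be the NMB-maximising, and hence cost-effective, choice in a more complete comparison.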
It is important to note that Callender et al.1 did clearly acknowledge the relevance of strategies with alternative age ranges and screening frequencies, but explained that the data to support such a risk-stratified analysis are currently lacking. Accordingly, while the policy choices they simulated may not be optimal relative to our theoretical understanding of ideally tailored strategies, the latter are, as yet, unsupported by data. Moreover, we should point towards the usefulness of Callender et al.’s1 analysis in informing further research. Their study concludes by stating that prospective randomised controlled trials are required to better inform optimal policy. Further analysis of their model could usefully indicate which strategies would be most useful to compare and which parameter estimates are the priority to refine.
The above critique shows that the ratios reported as ICERs by Callender et al.1 should not be used to inform prostate screening policy. Unfortunately there is no way to readily reinterpret the published estimates into policy-relevant guidance. A disaggregation of the model results by the authors could make their analysis more useful to policy makers, but probably only to rule certain strategies out. Further work would be needed to determine if other screening intensities could provide cost-effective screening for lower risk men and what might be the benefit of varying the screening stop age.
Some of the points described in this critique were apparently raised during peer review. The available reviewer comments that accompany the paper show Reviewer 3 noted these issues and explained they could be easily addressed. In reply, the authors give the rationale for their modelling choices, which explains why they did not make the suggested changes. It is unfortunate that the reviewer’s advice was not taken further, as the authors had evidently already done the hard work of constructing, parameterising and implementing their model, and relatively basic changes were not made prior to publication. This highlights the need for both authors and journal editors to ensure that reviewer comments are adequately accounted for. It also serves as a reminder of the fallibility of the peer-review process.
CEA methods for the analysis of screening are well-established and the need for appropriate ICER comparisons between screening strategies of different intensities has been recognised clearly for many years3,5. The issues around risk subgroup analysis and how to handle them in screening CEAs have received less attention in the literature10–12. Given the increasingly nuanced knowledge of risk subgroups provided by research on genetic and other risk factors, it seems likely that risk-stratified analyses such as Callender et al.’s1 will become more common. Accordingly, there may be a need for clearer guidelines for analysts.
This commentary is unavoidably critical of the analysis presented by Callender et al.1. The intention, of course, is not to single out one study for criticism. Rather, it is to offer constructive guidance to such modellers on how their analyses can be best specified and interrogated. The question of appropriate strategy comparison is not trivial, especially when variation in disease risk is considered, and Callender et al.1 is certainly not the only study to face pitfalls in this respect. Without clear examination of the problems and clear direction on how they should be avoided, subsequent studies will be prone to error. The research question addressed by the authors is important and deserves attention from health economic modellers. It is hoped that the points raised here may inform a revision of the model and the generation of new cost-effectiveness estimates. Such an analysis would usefully inform the tailored provision of prostate cancer prevention according to disease risk, potentially improving the health of men across the UK and beyond.
The criticism made here of Callender et al. reflects a broader tension between pragmatic modelling within the constraints of currently available data among a narrow range of policy alternatives as opposed to the theoretical ideal of modelling many alternative strategies, each considered in disaggregated analyses for separate subgroups. The optimal balance between pragmatism and technical exactitude will always be a matter of debate. We can at least inform this debate by being explicit about the modelling choices made and rationale for them.
In conclusion, Callender et al.’s1 interpretation of their cost-effectiveness estimates is at odds with accepted CEA practice for several reasons. While a reappraisal of their results suggests that quadrennial screening is likely to be cost-ineffective for more men than they suggest, it is not advisable to base any policy recommendations on either the originally published results or the illustrative reinterpretation given here. This example is useful in illustrating some of the methodological considerations surrounding the appropriate handling of risk-subgroup specific cost-effectiveness estimates. Such issues of sub-group specific interpretation of evidence are likely to become more prevalent as increasing knowledge of disease processes permits further disaggregation of screen-eligible populations.
All data underlying the results are available as part of the article and no additional source data are required.
Interpreting cost-effectiveness ratios in a cost-effectiveness analysis of risk-tailored prostate screening: A critique of Callender et al. - Reply
Authors:
Tom Callender1, Stephen Morris2, Paul Pharoah2,3, Nora Pashayan1
Affiliations:
1 Department of Applied Health Research, University College London, London, UK.
2 Department of Public Health and Primary Care, University of Cambridge, Cambridge, UK.
3 Department of Oncology, University of Cambridge, Cambridge, UK.
O’Mahony, in his commentary1, makes the criticism that we have incorrectly interpreted the cost-effectiveness estimates in Callender et al.2 He sets out three issues.
In his interpretation of the cost-effectiveness analysis, O’Mahony reports that our paper finds that screening at a 10-year absolute risk (AR) of developing prostate cancer of 5% or more would be cost-effective, in contrast to his efficiency frontier analysis showing 10-year AR of 9.5%-10% being the most cost-effective. A screening strategy being cost-effective compared to no screening at a willingness-to-pay of £20,000 per quality-adjusted life-year is not the same as being the most cost-effective strategy, and we do not suggest this. In our analysis, to compare all strategies studied against each other, we used net monetary benefit, a widely accepted method in cost-effectiveness analysis of multiple alternatives3, and come to the same conclusion as he presents in his reinterpretation, a point which O’Mahony notes.
O’Mahony reports that our analysis is of non-mutually exclusive strategies that “correspond to a mixture of different policy choices for men of different lifetime risk”. This is an incorrect interpretation of our risk-stratified screening strategy. We examined the strategy of eligibility for screening being based on 10-year AR that is dependent on age and polygenic risk and equivalent to the risk threshold for eligibility based on age alone4,5. For example, the US Preventive Services Taskforce consider 55 to be the age at which, for some men, there is a net benefit to screening6. In England, a 55-year old man has a 10-year AR of being diagnosed with prostate cancer of ~2.5%2. Instead of inviting all men from age 55, we invite men to begin screening when their 10-year AR reaches 2.5%, dependent on both their age and polygenic profile. There is a mix-up in O’Mahony’s use of the terms higher and lower risk, 10-year absolute risk, and lifetime risk, with no differentiation between lifetime and remaining lifetime risk by age.
There are different approaches for risk-stratified screening7. We consider one approach, and O’Mahony appears to propose another, using risk independent of age. That O’Mahony proposes a different risk-stratified screening strategy to the one evaluated in our paper does not negate our analysis.
In modelling different risk-stratified screening intensities, we have acknowledged in our paper the importance of varying the inter-screening interval by risk and discussed in detail why we have not done so.
We show how O’Mahony’s critique appears to be based on misunderstanding or misinterpretation of our paper. We think that a more constructive approach would have been for O’Mahony to model the alternative screening strategies he outlines to quantify any difference and provide empirical backing for such screening programmes, furthering the field.
References
1 O’Mahony JF. Interpreting cost-effectiveness ratios in a cost-effectiveness analysis of risk-tailored prostate screening: A critique of Callender et al. HRB Open Res 2020; 3: 23.
2 Callender T, Emberton M, Morris S, et al. Polygenic risk-tailored screening for prostate cancer: A benefit–harm and cost-effectiveness modelling study. PLOS Med 2019; 16: e1002998.
3 Drummond MF, Sculpher MJ, Claxton K, Stoddart GL, Torrance GW. Methods for the Economic Evaluation of Health Care Programmes: Fourth Edition. Oxford, U.K.: Oxford University Press, 2015.
4 Pharoah PDP, Antoniou AC, Easton DF, Ponder BAJ. Polygenes, Risk Prediction, and Targeted Prevention of Breast Cancer. N Engl J Med 2008; 358: 2796–803.
5 Pashayan N, Duffy SW, Chowdhury S, et al. Polygenic susceptibility to prostate and breast cancer: Implications for personalised screening. Br J Cancer 2011; 104: 1656–63.
6 Grossman DC, Curry SJ, Owens DK, et al. Screening for Prostate Cancer. JAMA 2018; 319: 1901.
7 Chowdhury S, Dent T, Pashayan N, et al. Incorporating genomics into breast and prostate cancer screening: Assessing the implications. Genet Med 2013; 15: 423–32.