Abstract
Purpose
Standardized computer-aided tumor response assessment is common in clinical trials, whereas unstructured free-text reporting (UFTR) dominates daily routine. This study therefore aimed to identify and quantify differences between UFTR and computer-aided standardized tumor response evaluation based on RECIST 1.1 criteria (RECIST), with the latter serving as the gold standard, in the clinical workflow.
Methods
One hundred consecutive patients with cancer eligible for RECIST 1.1 evaluation, who each received five follow-up CTs of the trunk, were retrospectively included. All UFTRs were assigned to RECIST response categories [complete response (CR), partial response (PR), stable disease (SD), progressive disease (PD)]. All CTs were re-evaluated using dedicated software (mint lesion™) applying RECIST 1.1. Agreement in tumor response ratings was analyzed using Cohen's kappa.
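For illustration only, a minimal sketch of how RECIST 1.1 target-lesion response categories might be derived from sums of lesion diameters. This is a simplification: real RECIST 1.1 evaluation also covers non-target lesions and new lesions, and the function and variable names here are hypothetical, not taken from the study or the mint lesion™ software.

```python
# Simplified sketch of RECIST 1.1 target-lesion classification.
# Inputs are sums of longest lesion diameters (mm); names are illustrative.
def recist_response(baseline_mm: float, nadir_mm: float, current_mm: float) -> str:
    if current_mm == 0:
        return "CR"  # complete response: all target lesions disappeared
    # PD is judged against the nadir (smallest sum so far), not the last scan:
    # >=20% relative increase AND >=5 mm absolute increase
    # (reappearance after CR, i.e. nadir of 0, also counts as PD).
    if nadir_mm == 0 or (current_mm - nadir_mm >= 5
                         and (current_mm - nadir_mm) / nadir_mm >= 0.20):
        return "PD"
    # PR is judged against baseline: >=30% decrease in the sum of diameters.
    if (baseline_mm - current_mm) / baseline_mm >= 0.30:
        return "PR"
    return "SD"  # stable disease: neither PR nor PD thresholds reached
```

Note that PR is referenced to baseline and PD to the nadir; comparing only to the most recent prior scan, as is typical in free-text reporting, can yield a different category for the same measurements.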
Results
At the first follow-up, 47 of 100 cases were rated differently, with SD underrepresented and PR and PD overrepresented in UFTR. In the four subsequent follow-ups, categorical differences were seen in 38, 44, 37, and 44% of cases. Agreement between UFTR and RECIST was fair to moderate (Cohen's kappa: 0.356, 0.477, 0.390, 0.475, and 0.376; p < 0.001 for all). Differences arose mainly because UFTR rated even small changes in tumor burden as PD or PR, or compared findings to the most recent prior CT scan rather than to the nadir or baseline.
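Agreement statistics of this kind can be computed from paired category ratings; below is a minimal sketch using scikit-learn. The rating vectors shown are invented for illustration and are not the study data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical paired ratings for one follow-up (not the study data):
# each entry is the response category assigned to one patient.
uftr   = ["PD", "PR", "SD", "PD", "SD", "PR", "SD", "PD"]
recist = ["SD", "PR", "SD", "PD", "SD", "SD", "SD", "PD"]

# Cohen's kappa corrects the raw agreement rate for the agreement
# expected by chance given each rater's category frequencies.
kappa = cohen_kappa_score(uftr, recist)
print(f"Cohen's kappa: {kappa:.3f}")
```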
Conclusions
Significant differences in tumor response ratings were detected between UFTR and computer-aided standardized evaluation based on RECIST 1.1. Standardized reporting should therefore be implemented in the daily routine workflow.