
Advanced visualisation for efficiency in radiology

Standardisation of postprocessing in radiology is a task for the next decade

Bernd Tomandl
Tibor Mitrovics
Patrick Egan
Department of Radiology and Neuroradiology,
Christophsbad Hospital Goeppingen, Germany
 
Until the late 1990s the number of images produced by MRI and CT was limited, and with section thicknesses of 5–10mm there was little need for postprocessing. With the imaging techniques introduced at the end of the last millennium, such as submillimetre 3D MRI and, above all, multisection CT, a single examination now often produces data sets of more than 1000 images. This makes postprocessing and 3D imaging not only useful but mandatory.
 
Techniques for 3D visualisation and postprocessing
In the new millennium a number of workstations for high-resolution 3D visualisation and other postprocessing methods, such as perfusion imaging or computer-aided diagnosis (CAD), became available. Typically these were stand-alone solutions: radiologists had to sit down in front of the workstation and work through the various features provided.
 
While simple methods such as multiplanar reconstruction (MPR) are relatively easy to understand and to work with, other methods such as maximum intensity projection (MIP) or volume rendering (VR) are much more difficult to use, and the results vary depending on the software available on a dedicated workstation and the experience of the user.1
 
Even now there are no clearly defined standards, even for frequent examinations such as MR-angiography. The MRI community still often uses MIP for this task. With this technique, only the brightest voxels along the selected viewing direction are used for the image. When a 256×256 matrix is used for MR-angiography (65,536 voxels in one plane), the MIP algorithm retains only the 256 brightest voxels, reducing the information by more than 99%.
 
It is nearly impossible to find small structures such as tiny aneurysms with this method. It has been shown that the method is more valuable when thin slabs of 10–20mm are used. This is easy to understand, because overlapping images that keep one voxel out of ten give more information than a single image that keeps one voxel out of 256 (Figure 1).
 
Fig. 1: Comparison of whole-volume MIP MR-angiography (left) with 10mm thin-section MIP (right). The details within an arteriovenous malformation (AVM) of the left temporal lobe (arrows) are much more clearly visible on the thin-section MIP (right image).
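To make the loss of information concrete, the following minimal Python/NumPy sketch computes a whole-volume MIP and a sliding thin-slab MIP from the same volume. The array sizes are invented, and a 10-section slab is assumed to stand in for a 10mm slab; this is an illustration of the principle, not the algorithm of any particular workstation.

import numpy as np
from typing import Optional

def mip(volume: np.ndarray, axis: int = 0, slab: Optional[int] = None) -> np.ndarray:
    """Maximum intensity projection (MIP) along one axis.

    With slab=None the whole volume is collapsed into a single image, so each
    output pixel keeps only the single brightest voxel along its ray. With a
    slab size, a stack of overlapping thin-slab MIPs is returned instead,
    which discards far less of the original information.
    """
    if slab is None:
        return volume.max(axis=axis)
    slabs = []
    for start in range(volume.shape[axis] - slab + 1):
        index = [slice(None)] * volume.ndim
        index[axis] = slice(start, start + slab)
        slabs.append(volume[tuple(index)].max(axis=axis))
    return np.stack(slabs)

# Illustrative MRA volume: 256 sections with a 256 x 256 matrix (random data).
volume = np.random.rand(256, 256, 256)

whole_volume_mip = mip(volume, axis=0)        # one image: 1 voxel out of 256 per ray
thin_slab_mip = mip(volume, axis=0, slab=10)  # 247 overlapping slabs: 1 voxel out of 10

print(whole_volume_mip.shape)  # (256, 256): 65,536 of ~16.8 million voxels survive
print(thin_slab_mip.shape)     # (247, 256, 256)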
 
The following example will make the clinical relevance of this dilemma even clearer: 
 
In 2002 a study on the detection of small cerebral aneurysms with CT-angiography was published in the American Journal of Neuroradiology.2 The authors found a detection rate of 98% for aneurysms of ≤5mm using a single-row helical CT system.
 
This was even better than DSA. Eleven years later, a paper on the same subject was published in the Journal of Computer Assisted Tomography using a 256-row CT scanner, and there CTA showed a sensitivity of only 93%.3 Since the source images in the second study were probably better, or at least of equal quality, the lower detection rate most likely resulted from the postprocessing methods used rather than from other factors.
 
In another study investigating on-call resident interpretation of CT-angiography for intracranial aneurysms, the detection rate was only 87%, falling to as low as 35% in cases with multiple aneurysms.4 While it is not difficult to see an aneurysm (a bubble at the bifurcation of intracranial arteries, which even a less experienced resident can recognise), the problem is getting this visual information out of the source images.
 
In our own study we were able to show that the detection rate of aneurysms is independent of the experience of the investigator, provided the same standardised 3D visualisation videos are presented.5 These videos were automatically generated on a remote server after the CTA data had been transferred.
 
In summary, the value of any 3D visualisation technique for the detection of pathological structures (aneurysms, tumours, etc.) may depend more on the visualisation technique and the user's familiarity with the selected software than on the information contained in the source images. All these methods are therefore individual, user-dependent procedures rather than standardised ones (Figure 2).
 
Fig. 2: Relationship of standardisation to quality in postprocessing. A highly qualified radiologist may be able to produce results of the highest quality on their own workstation, but with different users the quality will vary. Standardisation of postprocessing leads to a quality level (low or high) that is consistent and can be improved over time.
 
Another sad story is perfusion imaging in stroke patients. This technique has been available for a long time and, in my experience, is probably the most helpful tool for therapeutic decisions in patients with an acute intracranial arterial occlusion.6 Again, different algorithms from different vendors lead to different results, so the threshold values for ‘penumbra’ and ‘dead brain’ vary between these postprocessing tools.7
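The effect of the threshold choice is easy to demonstrate. The sketch below applies two different threshold pairs (a relative CBF cut-off for the core and a Tmax cut-off for hypoperfused tissue, one common thresholding scheme among several) to the same synthetic perfusion maps and arrives at different core and penumbra volumes. The maps, voxel size and threshold values are illustrative assumptions, not the settings of any particular vendor's software.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic perfusion maps for one slice (illustrative only):
# relative CBF as a fraction of the contralateral side, and Tmax in seconds.
rcbf = rng.uniform(0.1, 1.2, size=(128, 128))
tmax = rng.uniform(0.0, 12.0, size=(128, 128))
voxel_volume_ml = 0.002  # assumed voxel volume, purely illustrative

def classify(rcbf, tmax, core_rcbf, penumbra_tmax):
    """Apply one threshold pair and return (core, penumbra) volumes in ml."""
    core = rcbf < core_rcbf              # 'dead brain' (infarct core)
    hypoperfused = tmax > penumbra_tmax  # tissue at risk
    penumbra = hypoperfused & ~core      # at risk but not yet infarcted
    return core.sum() * voxel_volume_ml, penumbra.sum() * voxel_volume_ml

# Two plausible threshold pairs, as different postprocessing tools might use.
for core_rcbf, penumbra_tmax in [(0.30, 6.0), (0.38, 4.0)]:
    core_ml, penumbra_ml = classify(rcbf, tmax, core_rcbf, penumbra_tmax)
    print(f"rCBF<{core_rcbf}, Tmax>{penumbra_tmax}s: "
          f"core {core_ml:.1f} ml, penumbra {penumbra_ml:.1f} ml")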
 
The evaluation of MR perfusion is even more challenging than that of CT perfusion. At present it is safer to rely on the colour maps than on the calculated absolute values.
 
Standards for postprocessing 
It took until the year 2000 to establish standardised rules for as simple a task as deciding whether a tumour has responded to treatment. RECIST (Response Evaluation Criteria In Solid Tumours)8 relies on plain axial images on which the largest diameter of a tumour is measured. This primitive method is so far removed from modern postprocessing that it is easy to understand how difficult it is to introduce standards for any kind of diagnostic imaging.
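How reductive this standard is can be seen from the fact that the whole classification follows from the sum of the longest axial diameters at baseline and follow-up, using the published cut-offs of a ≥30% decrease for partial response and a ≥20% increase for progressive disease. The sketch below shows this in simplified form, ignoring nadir tracking, new lesions and non-target lesions; the measurements are invented example values.

def recist_response(baseline_mm, followup_mm):
    """Simplified RECIST classification from longest axial diameters (mm).

    Uses the baseline sum as reference and ignores nadir tracking, new
    lesions and non-target lesions, so this is only a sketch of the
    published criteria, not a full implementation.
    """
    baseline_sum = sum(baseline_mm)
    followup_sum = sum(followup_mm)
    if followup_sum == 0:
        return "complete response"
    change = (followup_sum - baseline_sum) / baseline_sum
    if change <= -0.30:
        return "partial response"
    if change >= 0.20:
        return "progressive disease"
    return "stable disease"

# Invented example: two target lesions measured on plain axial images.
print(recist_response([34.0, 21.0], [22.0, 14.0]))  # -> partial response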
 
Therefore, in most cases there are still no standards for postprocessing. Every radiology department has its own conventions, and sometimes different radiologists within the same department will produce different results. If we look for pulmonary nodules, we can simply rely on 5mm sections from spiral CT. We can use the 1mm sections interactively in MPR mode in three planes (probably better, but this requires interactive work at the workstation).
 
Or we can add 10mm MIP images in three planes (probably even better, although the detection of very small nodules of <3mm will lead to many non-specific findings). What is the best standard for the evaluation of an examination as frequently performed as thoracic CT? This has not yet been defined.
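In practice, such a standard would be little more than a small, machine-readable protocol definition that every workstation in a department applies in the same way. The sketch below is a purely hypothetical encoding of the three reading options just described; none of the field names or values come from an existing standard.

# Hypothetical reconstruction protocols for thorax CT nodule reading.
# Field names and values are illustrative, not part of any existing standard.
THORAX_CT_PROTOCOLS = {
    "axial_only": {
        "section_thickness_mm": 5.0,
        "planes": ["axial"],
        "projection": "none",   # plain sections
    },
    "thin_section_mpr": {
        "section_thickness_mm": 1.0,
        "planes": ["axial", "coronal", "sagittal"],
        "projection": "mpr",    # interactive multiplanar reconstruction
    },
    "slab_mip": {
        "section_thickness_mm": 1.0,
        "slab_thickness_mm": 10.0,
        "planes": ["axial", "coronal", "sagittal"],
        "projection": "mip",    # thin-slab maximum intensity projection
    },
}

def reconstruction_plan(protocol_name: str) -> dict:
    """Return the agreed reconstruction parameters for a named protocol."""
    return THORAX_CT_PROTOCOLS[protocol_name]

print(reconstruction_plan("slab_mip"))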
 
Even the most experienced neurosurgeon will not be able to find an intracranial aneurysm on CT- or MR-angiography if they are not familiar with the 3D visualisation tools of a workstation. This shows clearly that we need standardised procedures for visualisation and postprocessing, which enable users to see the information contained within the data set without needing to be deeply familiar with the software application.
 
Only then can methods such as CT-angiography be evaluated and subsequently improved. Some companies provide semiautomatic visualisation tools on their workstations for selected examinations such as cardiac imaging or virtual colonoscopy, leading to more or less consistent results.
 
Again, the results depend very much on the quality of the postprocessing, which may vary with the software installed on a given workstation. Some vendors now make it possible to use their postprocessing tools across a computer network, so that they are available to all workstations within the department.
 
These tools can then be integrated into the hospital's PACS. This is certainly a step in the right direction, but currently most of these procedures still need revision by a user who understands the postprocessing algorithms. In 2014, international standards for the postprocessing of diagnostic imaging still do not exist.
 
Remote postprocessing: now and in the future
An intriguing way to move from individual to standardised 3D imaging is remote postprocessing. The data are sent to a remote server, where specialised software analyses them, and the result is sent back to the customer.
 
The result can be delivered as a video file or as a 3D model that the customer can review interactively on their own computer.9 A good example of such a remote system is the ‘risk analysis and resection planning project for liver surgery’ provided by the Fraunhofer Institute MEVIS in Bremen, Germany.10
 
The data from the liver CT examination are sent to the remote server, where segmentation of the tumour and of the intrahepatic arteries and veins is performed in a standardised way. The result is then sent back to the customer, who can use the images to plan the individual patient's liver surgery, including an analysis of the risk of complications and the likelihood of successful tumour resection.
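The client side of such a workflow can be very thin. The sketch below is written against an entirely hypothetical REST endpoint (the URL, routes and JSON fields are assumptions, not the interface of MEVIS or any other real service): it uploads a zipped CT series, polls until the standardised analysis has finished, and downloads the resulting video or model file.

import time
import requests  # third-party HTTP library

# Hypothetical remote postprocessing service; URL and routes are assumptions.
SERVER = "https://postprocessing.example.org/api"

def submit_and_fetch(series_zip, output_path, task="liver_resection_planning"):
    """Upload a zipped DICOM series, wait for the analysis, download the result."""
    with open(series_zip, "rb") as f:
        job = requests.post(f"{SERVER}/jobs", files={"series": f},
                            data={"task": task}).json()

    # Poll until the standardised analysis on the server has finished.
    while True:
        status = requests.get(f"{SERVER}/jobs/{job['id']}").json()
        if status["state"] == "finished":
            break
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "remote analysis failed"))
        time.sleep(30)

    # The result may be a video file or an interactive 3D model.
    result = requests.get(f"{SERVER}/jobs/{job['id']}/result", stream=True)
    with open(output_path, "wb") as out:
        for chunk in result.iter_content(chunk_size=1 << 20):
            out.write(chunk)

submit_and_fetch("liver_ct_series.zip", "resection_plan.mp4")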
 
Using such a standard, together with customer feedback, the system can be improved, leading to progressively better results. With today's fast Internet connections it is already possible to integrate sophisticated postprocessing tools in this way.
 
Conclusion
3D volume rendering and other highly sophisticated postprocessing methods have reached a very high quality in recent years. Yet, owing to the lack of standardisation of these tools, their use and value in routine clinical work are still limited. To create software tools for 3D visualisation and other postprocessing techniques that deliver comparable results, the applications must be as independent of vendor and user as possible. Only then can the real value of examinations that rely on postprocessing for diagnosing disease from volume data sets be evaluated and improved.
 
Remote postprocessing could be one solution to this problem. It is to be hoped that within the next few years standardised (remote) visualisation tools will become available for the most important clinical tasks, such as tumour staging, evaluation of CT- and MR-angiograms, cardiac imaging, volumetry, lesion burden in multiple sclerosis and surgical therapy planning.
 
From the patient's point of view it is not acceptable that the quality of diagnostic imaging depends on postprocessing tools that are not standardised. Playing around with beautiful images without defined standards is fun, but it is not what matters.
 
References
  1. Tomandl BF et al. CT angiography of intracranial aneurysms: A focus on postprocessing. Radiographics 2004;24:637–55.
  2. Villablanca JP et al. Detection and characterization of very small cerebral aneurysms by using 2D and 3D helical CT angiography. AJNR Am J Neuroradiol 2002;23:1187–98.
  3. Ni W et al. Preliminary experience of 256-row multidetector computed tomographic angiography for detecting cerebral aneurysms. J Comput Assist Tomogr 2013;37:233–41.
  4. Hochberg AR et al. Accuracy of on-call resident interpretation of CT angiography for intracranial aneurysm in subarachnoid hemorrhage. AJR Am J Roentgenol 2011;197:1436–41.
  5. Tomandl B et al. CT-angiography (CTA) of intracranial aneurysms: Evaluation of a novel user-independent 3D-visualization on a remote graphic workstation. Radiology 2002;225(Suppl):521.
  6. Sabarudin A, Subramaniam C, Sun Z. Cerebral CT angiography and CT perfusion in acute stroke detection: A systematic review of diagnostic value. Quant Imaging Med Surg 2014;4:282–90.
  7. Abels B et al. Perfusion CT in acute ischemic stroke: A qualitative and quantitative comparison of deconvolution and maximum slope approach. AJNR Am J Neuroradiol 2010;31:1690–8.
  8. Therasse P et al. New guidelines to evaluate the response to treatment in solid tumors. European Organization for Research and Treatment of Cancer, National Cancer Institute of the United States, National Cancer Institute of Canada. J Natl Cancer Inst 2000;92:205–16.
  9. Tomandl BF et al. Local and remote visualization techniques for interactive direct volume rendering in neuroradiology. Radiographics 2001;21:1561–72.
  10. Kleemann M et al. Laparoscopic navigated liver resection: Technical aspects and clinical practice in benign liver tumors. Case Rep Surg 2012;2012:265918.