II. International Congress on Critical Care on the Internet – CIMC 2000

Round Table:

Critical Care Databases

 

The Austrian Experience

 

Philipp G. H. Metnitz


Current address:

 

Departement Réanimation Médicale,

Hôpital St. Louis, Université Lariboisière-St. Louis,

1 Avenue Claude Vellefaux, 75010 Paris, France.

 

Email: philipp.metnitz@univie.ac.at

 


 

Quality management is said to provide the tools for managing today's challenges in modern intensive care medicine. Quality management in this context means the strategic use of instruments such as quality planning, quality regulation, quality improvement, and quality assurance. Internal quality assurance comprises all measures taken within an institution, whereas external quality assurance involves external parties. External comparison of outcomes, i.e., the comparison of outcomes between different institutions, is a prerequisite in that it allows an institution (e.g., an intensive care unit) to evaluate its own performance and to compare it with that of other institutions. An external comparison program (also called "benchmarking") can therefore determine a mean level of performance (e.g., risk-adjusted mortality or other indices) and identify outliers.

 

Making ICU populations comparable requires standardized documentation. Recognizing this, several national documentation standards for intensive care have been developed in recent years [1,2,3]. The Austrian Center for Documentation and Quality Assurance in Intensive Care Medicine (ASDI) has developed several instruments (such as a national documentation standard and a national database for intensive care) for a benchmarking program for intensive care units (ICUs) in Austria [4]. On the one hand, the multicenter analysis of these data should evaluate possible quality indicators for their usefulness; on the other hand, participating ICUs are given the opportunity to compare themselves with other ICUs.

 

The first data collections for benchmarking reports were performed in 35 Austrian ICUs in 1999 and repeated in 61 ICUs in 2000. All data are, of course, collected anonymously. Data are reported in password-protected files. All collected data underwent multi-level proof routines.

 

First, mandatory plausibility checks exist for all parameters in the local data entry system (ICdoc): values entered are checked by the system for type and value range. The defined value ranges consist of a normal range (parameter within the physiologic range), a plausibility range (parameter in the pathologic range, but plausible), and a storage range (extremely deviated parameters outside the plausibility range). Values outside the storage range are not accepted by the database system. In addition, several consistency checks are included: for example, no data may be entered for a date after a patient has already been discharged.
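The three-tier range check can be sketched as follows. This is a hedged illustration only: the function name and the heart-rate limits used in the example are assumptions for demonstration, not ICdoc's actual implementation or ASDI's actual limits.

```python
# Hedged sketch of the three-tier range check described above. The
# numeric heart-rate ranges below are illustrative values only, not
# ASDI's actual limits.

def check_value(value, normal, plausible, storage):
    """Classify a value against nested (low, high) ranges.

    Returns 'normal', 'plausible' (pathologic but plausible), or
    'extreme' (outside the plausibility range but still storable).
    Raises ValueError for values outside the storage range, which
    the database system rejects.
    """
    if not (storage[0] <= value <= storage[1]):
        raise ValueError(f"{value} outside storage range: rejected")
    if normal[0] <= value <= normal[1]:
        return "normal"
    if plausible[0] <= value <= plausible[1]:
        return "plausible"
    return "extreme"

# Illustrative ranges for heart rate (beats/min): normal, plausible, storage.
HR_RANGES = ((60, 100), (30, 200), (0, 300))
```

A value of 80 beats/min would thus be classified as "normal", 150 as "plausible", 250 as "extreme", and 350 would be rejected outright.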

 

Moreover, besides consistency checks, the system also checks for and reports missing data. This is all the more important because not-recorded values are weighted with zero (i.e., as normal) in the SAPS II. It is thus necessary not only to record the calculated scores, but also to store and pool the single values. This permits the identification of missing variables and provides a simple but effective mechanism for controlling the completeness of data acquisition. Our data recording can, however, be regarded as complete: on average, only one value needed for the calculation of the LOD was missing per patient (median, interquartile range 0–2). These missing values can easily be explained. First, several patients do not stay long enough for all values to be analyzed (e.g., patients who die or are discharged for another reason before some analyses can be done). Second, rural hospitals in particular have problems obtaining a variety of laboratory values on the weekend, so tests such as bilirubin or blood urea nitrogen are performed only if the values are suspected to be abnormal (especially if laboratory values were obtained immediately before the ICU stay and were in the normal range). Moreover, hospitals are beginning to minimize costs; ICUs in non-teaching hospitals are therefore confronted with the policy that a complete laboratory analysis should be performed only when abnormalities are suspected. In these cases, missing values cannot be avoided. Therefore, the actual amount of missing physiologic data might be even smaller than the results indicate.
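Because the single raw values are stored and not just the calculated scores, completeness can be checked mechanically. A minimal sketch of that idea, assuming a patient record stored as a dictionary; the variable list is an illustrative subset of SAPS II inputs, not the full score definition:

```python
# Illustrative subset of SAPS II input variables; the real score uses
# more (and partly different) parameters than listed here.
SAPS2_VARS = ["heart_rate", "systolic_bp", "temperature",
              "bilirubin", "blood_urea_nitrogen", "wbc"]

def missing_variables(record):
    """Return the score inputs absent from a patient record, so they
    are made visible instead of silently counting as 'normal' (zero
    points) in the score calculation."""
    return [v for v in SAPS2_VARS if record.get(v) is None]

# Example record: bilirubin ordered but not resulted, WBC never recorded.
patient = {"heart_rate": 85, "systolic_bp": 120,
           "temperature": 37.1, "bilirubin": None}
```

Pooling `missing_variables(patient)` over all admissions yields exactly the per-patient missing-value counts reported above.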

 

The storage and collection of the raw data values also permit another data quality control during import into the database server: here, the data are again checked for plausibility and completeness. During this import, all calculations (e.g., of scores) are also redone. Altogether, these proof routines ensure maximum data reliability.
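The import-time cross-check amounts to recomputing every derived value from the raw data and comparing it with the result stored by the local system. A sketch under stated assumptions: `toy_score` is a purely illustrative point scheme standing in for a real scoring algorithm such as SAPS II, and the function names are hypothetical.

```python
def toy_score(raw):
    """Illustrative scoring scheme (NOT SAPS II): 1 point per
    abnormal value, defaulting absent values to normal."""
    points = 0
    if raw.get("heart_rate", 80) > 120:
        points += 1
    if raw.get("systolic_bp", 120) < 90:
        points += 1
    return points

def recheck_score(raw, stored_score, score_fn=toy_score):
    """Import-time check: recompute the score from the raw values and
    flag any mismatch with the locally stored result."""
    return score_fn(raw) == stored_score
```

A record with tachycardia and hypotension should arrive with a stored score of 2; any other stored value would be flagged for review during import.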

 

To assess the reliability of data collection, specially trained data collectors were also sent to each unit to obtain data from the histories of a random sample of patients, and interrater variability was calculated as described previously [4]. In both data collections, the quality of the recorded data was satisfactory with respect to both interrater variability and completeness; exceptions were found only in the reason for admission and in hospital mortality.
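The exact interrater method is the one described in [4]. As an illustration only, Cohen's kappa is one common chance-corrected agreement measure for categorical items such as the reason for admission; the sketch below shows that calculation and is not necessarily the statistic used by ASDI.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement between two raters over the
    same items, corrected for the agreement expected by chance from
    each rater's marginal category frequencies."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    if expected == 1.0:  # both raters constant and identical
        return 1.0
    return (observed - expected) / (1.0 - expected)
```

Kappa is 1.0 for perfect agreement and 0 when agreement is no better than chance, which makes it more informative than raw percent agreement for skewed category distributions.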

 

It is well known that the selection of a single reason for admission can be difficult [5,6,7]. With the exception of severity scoring systems such as APACHE II [8], no international standards exist on how to document a patient's disease(s) on admission to the ICU. Differentiating between reason for admission, organ system involvement, and underlying disease may be difficult; besides, a patient may have more than one disease. An internationally standardized coding system would therefore be a prerequisite for multinational comparisons of case mix data.

 

The assessment of hospital mortality still presents a problem in several hospitals. This favors the assessment of ICU mortality, which is readily available without additional effort. Hospital mortality is, however, thought to be the more objective endpoint [9], since it is not skewed by ICU discharge practices, which may vary across regions and countries and thus yield erroneous mortality figures. Adding to this discussion, we can report data from a recent survey of 23 Austrian ICUs, in which we found the proportion of nonsurvivors dying after ICU discharge to vary widely, between 0 and 63% (on average 29.8 ± 13.7%; A. Valentin et al., unpublished data). This variability between ICUs supports the use of hospital mortality as the endpoint of interest.

 

Currently, 62 units are participating in the ASDI benchmarking project. The anonymized version of the 2000 report can be found on the ASDI website at http://www.asdi.ac.at/body_datensammlung.html (please use the link at the bottom of the page to download the PDF file; Acrobat Reader 4.0 is needed for viewing and printing).

 

An external comparison project has several side effects. Participants have, for example, to agree on the goals, contents, and success criteria of such a project, which in most cases is not easy. Moreover, sharing data always implies sharing experiences, which eventually leads to new insights. Last but not least, such a program opens the possibility of forming common interest groups, such as working groups to define standards, guidelines, or recommendations. The primary goal of an external comparison project should, however, not be to seek out the best and the worst and thus provide a "ranking". It should, on the contrary, determine the average and the distribution of several performance indices and identify possible outliers. The data should then be carefully evaluated for artifacts and possible confounders. Only after these steps have been completed can such data be used (locally) to identify possible reasons for performance deficits and to develop quality improvement strategies.

 

 


References



[1] Schmitz JE, Weiler Th, Heinrichs W. Mindestinhalte und Ziele der Dokumentation im Bereich Intensivmedizin. Anästhesiologie und Intensivmedizin 1995; 6: 162–172.

[2] Stoutenbeck CP. Dutch specification study of an intensive care information system. In: Vincent JL (ed). 1994 Yearbook of Intensive Care and Emergency Medicine. Springer Verlag, Berlin–Heidelberg, 1994.

[3] ICMPDS. ICNARC Case Mix Programme Dataset Specification. Intensive Care National Audit and Research Center, Tavistock Square, London, 1995.

[4] Metnitz PhGH, Vesely H, Valentin A, Popow C, Hiesmayr M, Lenz K, Krenn CG, Steltzer H. Evaluation of an interdisciplinary data set for national ICU assessment. Crit Care Med 1999; 27: 1486–1491.

[5] Cowen JS, Kelley MA. Errors and bias in using predictive scoring systems. Crit Care Clin 1994; 10(1): 53–77.

[6] Teres D, Lemeshow S. Why severity models should be used with caution. Crit Care Clin 1994; 10(1): 93–110.

[7] Lemeshow S, Teres D, Klar J, Avrunin JS, Gehlbach SH, Rapoport J. Mortality probability models based on an international cohort of intensive care unit patients. JAMA 1993; 270(20): 2478–2486.

[8] Knaus WA, Draper EA, Wagner DP, Zimmerman JE. APACHE II: a severity of disease classification system. Crit Care Med 1985; 13(10): 818–829.

[9] Le Gall JR, Klar J, Lemeshow S, Saulnier F, Alberti C, Artigas A, Teres D. The logistic organ dysfunction system. A new way to assess organ dysfunction in the intensive care unit. JAMA 1996; 276: 802–810.