The discovery of insulin in the early 1920s made long-term survival with diabetes an increasingly common phenomenon. As the 20th century progressed, it became apparent that complications still occurred despite a general improvement in outcomes as a result of this treatment, including avoidance of death from ketoacidosis. Early on, there appeared to be a correlation between adequacy of glycaemic control and at least some of these complications. It would take the randomised controlled trials of the 1980s and 1990s (e.g. the DCCT [Diabetes Control and Complications Trial] for type 1 diabetes and the UKPDS [UK Prospective Diabetes Study] for type 2 diabetes) to demonstrate beyond doubt that the association was causal and that intervening to improve control would reduce the risk of complications (DCCT Research Group, 1993; UKPDS Group, 1998). During the 1970s, it was still plausible that causation operated in the reverse direction; that is, that low vision from retinopathy, for example, reduced the individual’s ability to manage their diabetes and maintain blood glucose levels.
Either way, the measurement of glycaemic control itself was clearly important. The standard approach at the time was to measure random or fasting blood glucose levels either as a “spot check” (e.g. in the outpatient department) or on admission to hospital, followed by serial measurements under inpatient observation. For people who were managed in the community, urinalysis for glucose remained the mainstay of self-monitoring for most of the 1970s. That decade then witnessed a move toward self-monitoring of blood glucose (SMBG), which eventually superseded the measurement of urinary glucose in the majority of settings. This was made possible by new monitoring technology. The first blood glucose monitoring strips were the Dextrostix, produced by the Ames Company (Elkhart, IN, USA) and used as early as 1970, following an earlier discovery that their Clinistix urinary glucose test strips also worked for measuring blood glucose (Kohn, 1957). Clarke and Foster (2012) give an interesting account of the history of blood glucose meter technology.
Despite efforts during this decade to make SMBG easier and more reliable, it remained difficult to distinguish people with good, fair, poor or very poor control in the home setting. The field was still dominated by hospital- and specialist-based practice before the emphasis of chronic disease management moved to the community and towards individual self-management.
Hospitalised patients admitted with an acute illness display glycaemic control that is not necessarily typical of their usual pattern, and single measurements in outpatients are, of course, just random checks. During the 1970s, therefore, elective admission for a longer period of observation in hospital, with serial blood and urinary glucose measurement, was a commonly used option. In the current age of hospital admission avoidance, this approach would be difficult to justify, at least for adults.
It was also difficult to satisfactorily measure the impact of a patient’s treatment on glycaemic control, whether it was with insulin or with oral drugs such as metformin and sulphonylureas, which were increasingly being used at this time. What was needed was a technique to reliably measure glycaemic control adequacy that was robust to short-term fluctuation and random measurement effects. In addition, it became evident that if we were to rely on SMBG, we needed to be aware of the patient’s own tendency to “interpret” the results prior to reporting them to a clinician. This phenomenon was later brought to light by the author and patient Colin Dexter, who vowed as a 1996 New Year’s resolution “not to invent quite so many satisfactory blood sugar readings” (Gallichan, 1997). The benefit of an objective measure of glycaemic control became increasingly clear.
Prior to the mid-1970s, a mouse model of diabetes had been developed. The mouse form of HbA1c was analogous to the human form, resulting from post-synthesis modification of haemoglobin A, and its level had been demonstrated to rise several weeks after the onset of diabetes in these animals (Koenig et al, 1976). It was also becoming evident that in humans, HbA1c was linearly correlated with blood glucose levels. What was not known was whether the level would change over time as a result of improved glycaemic control. This would be the key to its success as a useful marker in clinical practice.
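The linear relationship between HbA1c and average blood glucose was only quantified precisely much later. Purely for orientation, and as an assumption-laden sketch rather than anything derived from the 1976 work discussed here, the snippet below uses the regression coefficients later reported by the ADAG study (2008) to convert an HbA1c value into an estimated average glucose:

```python
# Illustrative sketch only: the coefficients below come from the much later ADAG
# study (2008) and are NOT derived from the 1976 paper discussed in this article.

def estimated_average_glucose_mgdl(hba1c_percent: float) -> float:
    """Estimated average glucose (mg/dL) from HbA1c (%), per the ADAG regression."""
    return 28.7 * hba1c_percent - 46.7

def mgdl_to_mmol_per_litre(glucose_mgdl: float) -> float:
    """Convert a glucose concentration from mg/dL to mmol/L."""
    return glucose_mgdl / 18.0

if __name__ == "__main__":
    for a1c in (6.0, 7.0, 9.0, 12.0):
        eag = estimated_average_glucose_mgdl(a1c)
        print(f"HbA1c {a1c:.1f}% ~ {eag:.0f} mg/dL "
              f"({mgdl_to_mmol_per_litre(eag):.1f} mmol/L)")
```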
As care moved towards both the patient and the community, and given the tradition at that time of assessing and modifying glycaemic control according to hospital-based serial blood glucose measurements, the medical establishment was very receptive to a new approach that could be delivered in the outpatient or general practice setting and avoid the need for prolonged (usually in-hospital) observation. In today's clinical practice, measurement of HbA1c is so commonplace that it is easy to forget the challenge of assessing glycaemic control adequacy prior to its discovery.
The Hidden Gem
In this article, Koenig and colleagues describe the first study demonstrating the value of HbA1c as a reflection of changing glycaemic control over periods of weeks or months. They also explain the biochemical basis of this approach. HbA1c is a subfraction of haemoglobin A that results from glycation of the haemoglobin molecule, a process that, as these authors demonstrate, correlates directly with average blood glucose levels over the lifetime of the red blood cells in the circulation.
Five participants were included in the study. The profiles make an interesting discussion in their own right as a reflection of the hospitalised diabetes population at that time:
- Case 1 was a 57-year-old black man with an 8-year history of diabetes and mild peripheral neuropathy.
- Case 2 was a 38-year-old white woman diagnosed with diabetes (presumably type 1) at the age of 10 years, who had been started on insulin during the post-war years. She had mild peripheral neuropathy and gangrene of the distal phalanx of the right great toe.
- Case 3 was, similarly, a 38-year-old white woman with type 1 diabetes, this time diagnosed at the age of 5 years. She had stable retinopathy but had previously undergone hypophysectomy after failure to respond to photocoagulation.
- Case 4 was a 61-year-old woman with a 14-year history of diabetes treated with phenformin prior to admission. This biguanide drug was withdrawn in the same year as this study (1976) due to the risk of often fatal lactic acidosis, a problem much less evident (although it occurs on occasion) with its sister drug, metformin. The report indicates that her weight on admission was 78 kg, and that she responded to a diet and exercise programme, losing 10 kg during the admission.
- Case 5 was a 64-year-old black woman with a 30-year history of diabetes. She had previously been on insulin but, interestingly, had not been treated recently with either this or oral hypoglycaemics. She had multiple complications, including extensive background retinopathy, peripheral vascular disease, peripheral neuropathy and mild proteinuria. We are not told whether this history reflects lack of compliance or whether the previous insulin therapy was no longer considered necessary.
The authors explain that all of the patients entered the study with poor glycaemic control, presumably the reason for their admission. The need to manage the gangrenous toe of case 2 provided another reason. However, the next statement truly dates this article as from before the modern era of community-based practice:
“Blood sugar concentrations were brought to more optimal levels within one to two months of hospital admission.”
During this period of observation and treatment, the patients’ blood glucose levels were all brought under control through “the careful regulation of diet, exercise, and administration of insulin.”
The results demonstrate not only the correlation between blood glucose levels and HbA1c but also, and equally importantly, the response of HbA1c to the eventual control of the blood glucose. Table 1 records the fasting blood glucose and HbA1c levels before and after achievement of control.
Why it still shines today
The second half of the 20th century was dominated by pharmacological innovation in the study of diabetes, and it is easy to forget the impact that other developments made on clinical practice and patient care. The reason why this (very hospital-based) study is so important is that it reports the early development of a technique that facilitated the move towards community-based diabetes management. It enabled glycaemic control adequacy to be measured over weeks or months, without literally needing to monitor blood glucose levels over the entire timescale, as happened in this case series.
The development of SMBG (involving finger-prick measurements) was also in its infancy at this time, but despite huge improvements in this technology, there remains a preference for regular HbA1c checks rather than SMBG measurements in the majority of patients who are not using insulin. That is not to deny the benefits of SMBG for individual patients not taking insulin when it is justified; for example, when gauging the response to sulphonylureas (which is difficult to predict at the outset in poorly controlled patients), relying on HbA1c measurements alone can take too long to establish whether the dose is correct after initiation. However, for many patients not using insulin, measurement of HbA1c has provided a much better index of current control adequacy. The final piece in the jigsaw came with later studies demonstrating that HbA1c is also correlated with the risk of long-term microvascular complications. This study, therefore, opened a door through which the large randomised intervention trials measuring the impact of glycaemic control on diabetes complications became possible.
9 May 2024