Mastering Medical Statistics: Elevate Your Clinical Decision Making

Join host Dr Gavin Nimon (Orthopaedic Surgeon) as he unlocks the mysteries of medical statistics and takes your clinical decision-making skills to new heights with insights from Dr Adam Badenoch, an anaesthetist with a Master's in Biostatistics. Discover how essential concepts like central tendency, distribution, and variance can transform your understanding of medical research. Dr Badenoch explains the significance of numerical and categorical data, and sheds light on how outliers can alter the mean and median, equipping you with the tools needed to critically assess statistical evidence in healthcare.

Venture into the complex world of hypothesis testing, where we explore the importance of the null hypothesis and the scrutiny needed before changing clinical practices. Dr. Badenoch demystifies the role of p-values and addresses common criticisms such as the arbitrary 0.05 significance threshold and publication bias. By emphasizing the necessity of defining clinical importance and analysis methods at the outset of studies, this discussion urges a thoughtful balance between scientific integrity and interpretation.

Our episode culminates with an insightful look into research study design and the indispensable role of statistical tools in evaluating studies. Learn about confidence intervals and their power to reveal the range of plausible values for true population parameters, standing in contrast to p-values. We also touch on the challenges of implementing evidence-based medicine in practice, with a nod to the potential and pitfalls of artificial intelligence in data analysis. This episode is a must for healthcare professionals aiming to refine their statistical acumen and apply evidence-based insights effectively.

Aussie Med Ed is sponsored by OPC Health, an Australian supplier of prosthetics, orthotics, clinic equipment, compression garments, and rehabilitation devices for doctors, physiotherapists, orthotists, podiatrists, and hand therapists. If you'd like to know what OPC Health offers, visit opchealth.com.au and view their range online.

Aussie Med Ed is sponsored by HealthShare, a digital health company that provides solutions for patients, General Practitioners and Specialists across Australia.

 

Aussie Med Ed is sponsored by Avant Medical Indemnity. They state that they offer holistic support to help doctors practise safely, and believe they have extensive cover that's continually evolving to meet your needs in the ever-changing regulatory environment.

 

Chapters

00:00 - Understanding Medical Statistics in Healthcare

10:51 - Interpreting Hypothesis Testing in Medicine

21:23 - Key Concepts in Research Study Design

35:13 - Analyzing Research Studies With Statistics

Transcript

WEBVTT

00:00:00.029 --> 00:00:13.220
I'd like to let you know that Aussie Med Ed is sponsored by OPC Health, an Australian supplier of prosthetics, orthotics, clinic equipment, compression garments, rehabilitation devices for doctors, physiotherapists, orthotists, podiatrists and hand therapists.

00:00:13.839 --> 00:00:16.939
If you'd like to know what OPC Health offers, visit

00:00:16.940 --> 00:00:18.618
opchealth.com.au

00:00:18.618 --> 00:00:20.210
and view their range online.

00:00:20.489 --> 00:00:23.989
Medical statistics plays a fundamental role in shaping modern health care.

00:00:24.309 --> 00:00:35.075
It's the foundation of evidence based medicine, allowing us to make informed decisions, whether it's selecting the best treatment, evaluating the efficacy of interventions, or even understanding the risks and benefits for our patients.

00:00:35.604 --> 00:00:46.784
In today's episode of Aussie Med Ed, we're looking at the world of medical statistics, exploring how it influences clinical guidelines, helps us critically evaluate research papers, and informs everyday clinical practice.

00:00:47.484 --> 00:01:02.344
We'll cover the essential statistical concepts every healthcare professional should be familiar with when reviewing research, from types of data and statistical concepts to more complex topics like hypothesis testing, p values, confidence intervals, and correlation vs causation.

00:01:02.704 --> 00:01:08.775
We'll also talk about study design, the importance of sample size, and how to spot potential bias in research papers.

00:01:09.454 --> 00:01:14.055
Joining me today is Dr Adam Badenoch, an anaesthetist who has a Masters in Biostatistics.

00:01:14.373 --> 00:01:16.183
He's going to help us break it all down for you.

00:01:16.185 --> 00:01:16.239
I'm Adam Badenoch.

00:01:16.689 --> 00:01:23.099
Whether you're new to research or looking to refresh your knowledge, this episode will give you tools to better understand and apply the evidence in your clinical practice.

00:01:23.935 --> 00:01:25.325
Welcome to Aussie Med Ed.

00:01:26.685 --> 00:01:29.935
G'day and welcome to Aussie Med Ed, the Australian Medical Education Podcast.

00:01:30.194 --> 00:01:34.424
Designed with a pragmatic approach to medical conditions by interviewing specialists in the medical field.

00:01:35.025 --> 00:01:39.353
I'm Gavin Nimon, an orthopaedic surgeon based in Adelaide and I'm broadcasting from Kaurna Land.

00:01:40.183 --> 00:01:45.454
I'd like to remind you that this podcast is available on all podcast players and is also available as a video version on YouTube.

00:01:46.194 --> 00:01:52.463
I'd also like to remind you that if you enjoy this podcast, please subscribe or leave a review or give us a thumbs up as I really appreciate the support.

00:01:52.463 --> 00:01:53.394
It helps the channel grow.

00:01:54.015 --> 00:02:02.504
I'd like to start the podcast by acknowledging the traditional owners of the land on which this podcast is produced, the Kaurna people, and pay my respects to the Elders both past, present and emerging.

00:02:06.745 --> 00:02:18.485
Well, it's my pleasure now to introduce Dr Adam Badenoch, an anaesthetist trained in South Australia who has specialised fellowships in difficult airway management, medical education and simulation, as well as hepatobiliary and liver transplant anaesthesia.

00:02:18.705 --> 00:02:21.903
In 2023, Adam earnt a Masters in Biostatistics

00:02:22.330 --> 00:02:32.460
from the University of Adelaide, combining his clinical expertise with a deep interest in research, statistics, and anaesthesia specialties such as ENT, neuroanaesthesia, and liver transplant care.

00:02:32.840 --> 00:02:34.859
Thanks Adam, thanks very much for coming on Aussie Med Ed.

00:02:35.129 --> 00:02:42.000
Statistics has always been a very difficult and confusing concept for myself, probably because it combines both mathematics and some unusual concepts.

00:02:42.110 --> 00:02:46.240
Can you please start off by explaining some basic key statistical concepts for everyone?

00:02:46.460 --> 00:02:51.300
Basic principles that people should be aware of and what they should know if they're trying to analyse medical research.

00:02:51.949 --> 00:02:52.689
Sure, Gavin.

00:02:52.789 --> 00:02:54.050
First of all, thanks for having me on.

00:02:54.370 --> 00:03:02.139
I, at times, find statistics a bit confusing and complicated too, so don't worry, I think, if that's how you feel.

00:03:02.280 --> 00:03:11.810
Lots of people are in the same boat, and it definitely does cover some relatively unintuitive concepts or logic at times.

00:03:11.909 --> 00:03:15.250
But I think you're right, covering some basics often helps.

00:03:15.689 --> 00:03:23.789
I think a few key concepts to understand are what sort of data are there and how do we categorise it?

00:03:24.090 --> 00:03:27.780
How can we describe different types of data?

00:03:28.338 --> 00:03:32.748
And some basic concepts related to hypothesis testing.

00:03:33.188 --> 00:03:38.969
So types of data can generally be classified into numerical or categorical.

00:03:39.460 --> 00:03:54.794
And numerical data can be further categorised into discrete or continuous data, and categorical data is often further delineated into nominal or ordinal categories.

00:03:55.854 --> 00:04:02.870
A nominal categorical variable is simply one which has no logical order to it.

00:04:03.129 --> 00:04:16.798
A good example might be hair colour, as opposed to an ordinal variable, which is categorical in nature, it categorises things, but they have a natural order to them, such as small, medium and large.

00:04:17.000 --> 00:04:24.939
In terms of the numerical data, discrete data is data which essentially is like an integer.

00:04:25.468 --> 00:04:32.519
Uh, it doesn't take on a continuous range of values, but it, it falls into discrete numbers.

00:04:32.598 --> 00:04:46.139
Whereas continuous data is essentially a numerical representation of something which can theoretically be described as an entirely continuous process that can take on any specific value.
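
To make the taxonomy concrete, here's a minimal Python sketch (pandas assumed; the variable names and values are invented for illustration):

```python
import pandas as pd

# Numerical, discrete: counts that only take integer values
admissions_per_day = pd.Series([3, 7, 4, 6])

# Numerical, continuous: can take on any value in a range
systolic_bp = pd.Series([118.5, 132.0, 124.7])

# Categorical, nominal: labels with no logical order, like hair colour
hair_colour = pd.Categorical(["brown", "black", "blonde"])

# Categorical, ordinal: labels with a natural order, like small < medium < large
size = pd.Categorical(["small", "large", "medium"],
                      categories=["small", "medium", "large"],
                      ordered=True)

print(size.min(), size.max())  # ordering makes min/max meaningful: small large
```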

00:04:46.809 --> 00:05:04.913
So if we think of continuous data, we often describe it in terms of its central tendency, which is where the largest amount of the data sits, and how the data is spread around that area of central tendency.

00:05:05.483 --> 00:05:15.144
So the most common ways to describe central tendency would be mean, median, or mode, the mode being the most common value.

00:05:15.899 --> 00:05:29.139
The median being the 50th centile value and the mean having a number of different definitions but the usual arithmetic mean is simply the sum of all of the values divided by the number of values.

00:05:29.408 --> 00:05:34.069
Distribution can be described as a range from the lowest to the highest value.

00:05:35.449 --> 00:05:40.988
You can describe subsets of that range, such as an interquartile range.

00:05:41.329 --> 00:05:51.819
The interquartile range describes the middle 50 percent of the values, so it's less affected by occasional extreme outliers at either end of the range.

00:05:52.048 --> 00:06:05.228
And variance is a term which describes how far each individual point is away from whichever measure of central tendency you use, usually the mean.
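
As a rough illustration of these descriptive statistics, a minimal sketch using Python's statistics module and numpy (the values are invented):

```python
import statistics
import numpy as np

values = [2, 4, 4, 5, 7, 9, 11]

print(statistics.mean(values))        # arithmetic mean: sum of values / number of values -> 6
print(statistics.median(values))      # 50th centile value -> 5
print(statistics.mode(values))        # most common value -> 4
print(min(values), "-", max(values))  # range: lowest to highest -> 2 - 11

q1, q3 = np.percentile(values, [25, 75])
print(q3 - q1)                        # interquartile range: spread of the middle 50% of values

print(statistics.variance(values))    # spread: average squared deviation from the mean
```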

00:06:06.238 --> 00:06:11.639
Right, so when we're talking about numbers in general, we'd like to average things, and that's the mean, but I understand the mean's not as good.

00:06:11.759 --> 00:06:14.908
That's affected by outliers, and that's why the median is more unuseful.

00:06:14.908 --> 00:06:15.538
Is that correct?

00:06:16.869 --> 00:06:21.428
Yeah, it depends on the distribution of your data.

00:06:22.019 --> 00:06:30.269
So the mean is a good value to use often because it takes information from every individual

00:06:30.939 --> 00:06:35.189
data point in the data set and uses that information in its calculation.

00:06:35.218 --> 00:06:52.319
So you're not throwing away any information, but because of the way it's calculated it can be quite affected by a small number of particularly high or particularly low values that don't really represent the typical value of the data, if there is such a thing.

00:06:52.658 --> 00:06:57.244
The median takes the middle value of the data set.

00:06:57.663 --> 00:07:33.053
So it is effectively discarding information from, you know, the top tail and the bottom tail of the data set, and for that reason is, I guess, less desirable to use than the mean if those values at either end of the range are actually considered typical and representative of the true data set. But if they are unusually high or low outliers, and we don't think that they genuinely represent the true population, then it's a good thing to discard that information and just use the middle of the data set.
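
A quick sketch of that trade-off: one invented extreme value drags the mean a long way but barely moves the median.

```python
import statistics

los = [2, 3, 3, 4, 5]        # invented lengths of stay, in days
los_outlier = los + [90]     # one unusually long admission

print(statistics.mean(los), statistics.median(los))                  # 3.4  3
print(statistics.mean(los_outlier), statistics.median(los_outlier))  # ~17.8  3.5
```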

00:07:34.244 --> 00:07:41.084
The range takes that concept even further and simply uses two values from the data set.

00:07:41.194 --> 00:07:55.629
So you might have a million values in your data set, and to describe it simply using a range, all that does is take the lowest value and the highest value, and you usually put a little dash between them and you say, you know, the numbers range from this to this.

00:07:56.168 --> 00:08:00.639
And it doesn't tell you anything about what else is happening in the middle of the data set.

00:08:01.059 --> 00:08:10.528
So that can be obviously hugely influenced by outlying values and doesn't tell you anything about the middle of the range.

00:08:10.798 --> 00:08:16.209
But I guess it's a useful concept, particularly when combined with a median.

00:08:16.298 --> 00:08:22.584
So the median looks at the middle of the distribution, and the range looks at the extreme ends.

00:08:22.764 --> 00:08:28.634
And it can give you a nice little picture and summary when taken together of what the distribution of the data looks like.

00:08:29.184 --> 00:08:31.634
So it's horses for courses a little bit, I'd say.

00:08:32.683 --> 00:08:38.474
And all these things are really used as a way of describing numbers in order to interpret results.

00:08:38.634 --> 00:08:46.033
It's a way of assessing how well treatment can be useful for certain individuals or for a population in general.

00:08:46.653 --> 00:08:59.313
I mean, statistics is a mathematical concept which is useful not just for medicine, but it's used in finance and engineering and agriculture and all walks of life really.

00:08:59.313 --> 00:09:06.134
So anywhere where numbers can be used to represent phenomena that exist in the real world.

00:09:06.453 --> 00:09:07.724
Statistics can be useful.

00:09:07.764 --> 00:09:11.303
So the applications are virtually endless.

00:09:11.524 --> 00:09:36.339
I guess one of the key concepts for statistics, and another one which often gets a little bit forgotten when people are interpreting statistics in medical literature, is that any time we do a study or we analyse a data set, typically that is a sample which has been taken from a true population.

00:09:37.109 --> 00:09:51.448
Um, so it may be, for example, that we have recruited 100 patients who have had knee operations from all of the patients that you've operated on over the last 12 months.

00:09:52.019 --> 00:10:12.433
Now, you might have operated on, you know, well more than a hundred patients in 12 months, and what we're looking at in the study is a sample of all of the patients that you operate on, or we might be trying to extrapolate our thinking to all of the patients who have knee operations, not just by you and not just in this year.

00:10:12.874 --> 00:10:23.109
So that concept, that when we analyse a study we're analysing a sample taken from a larger true population, is a really important one.

00:10:23.158 --> 00:10:43.519
And often when we come back to interpreting some of the analysis parameters and testing hypotheses, a lot of that framework is based around estimating what we think these values would truly be if we had collected data on every person that had a knee operation.
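
That sample-versus-population idea can be simulated. Below is a hedged sketch (numpy assumed, all numbers invented) where each "study" samples 100 patients from a notional true population and estimates its mean:

```python
import numpy as np

rng = np.random.default_rng(0)
# Notional 'true population': recovery time in weeks for every knee operation
population = rng.normal(loc=12.0, scale=3.0, size=100_000)

# Each study only ever sees a sample; here, five studies of 100 patients each
sample_means = [rng.choice(population, size=100).mean() for _ in range(5)]
print([round(m, 2) for m in sample_means])  # each one estimates the true mean of ~12
```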

00:10:43.899 --> 00:10:46.208
We've introduced the concept of hypotheses.

00:10:46.703 --> 00:10:49.014
Perhaps you can explain that in more detail if you could, please.

00:10:49.083 --> 00:10:49.433
Sure.

00:10:50.104 --> 00:11:01.764
Um, so hypothesis testing, I guess, is a way of using statistical methods to refute a null hypothesis.

00:11:02.124 --> 00:11:11.073
And that framework of thinking is generally derived from the concept that unless we know that we're going to improve

00:11:11.769 --> 00:11:15.818
life or medicine somehow, particularly for our patients,

00:11:15.989 --> 00:11:22.649
We would usually defer to the status quo unless we know that what we're doing can make things better.

00:11:23.278 --> 00:11:25.578
On the basis of, above all, do no harm.

00:11:26.318 --> 00:11:27.818
Exactly, exactly.

00:11:28.538 --> 00:11:40.339
And also based on the fact that you can have random variation in data sets as well, and we don't want to infer too much from those.

00:11:40.339 --> 00:11:50.778
We only want to make changes, which can take a lot of effort sometimes, if we know that there's a true effect there and it's not just some random variation in the data set.

00:11:51.818 --> 00:12:03.979
So typically in a study that does involve a hypothesis test, there will be a null hypothesis, and the simplest scenario would be a study that involves two groups and a single intervention.

00:12:04.624 --> 00:12:12.764
And the null hypothesis would be that there is no difference between the two groups, which means that the treatment doesn't have any effect.

00:12:13.354 --> 00:12:30.254
So if we conduct a hypothesis test, we're really looking at our data set from our single sample and trying to work out whether that data is consistent with the null hypothesis.

00:12:32.549 --> 00:12:43.369
And if it's not consistent with the null hypothesis by a small amount, we may say, well, there may be a small true effect here, but this might also be due to chance.

00:12:43.749 --> 00:13:02.229
Whereas if the data in our sample is very, very inconsistent with the null hypothesis, that's much more convincing for us to say, well, actually, we think we have enough evidence collected here that we can refute this null hypothesis with confidence.

00:13:02.698 --> 00:13:08.299
And in rejecting that null hypothesis, we obviously then come up with an alternative hypothesis.

00:13:08.328 --> 00:13:13.649
And that might be that the treatment improves the outcome that we're looking at, or maybe it makes it worse.

00:13:13.688 --> 00:13:18.000
And we can make an estimate as to by how much does it increase or decrease.

00:13:18.279 --> 00:13:22.979
So that's the general framework of thinking and ideas behind hypothesis testing.

00:13:23.464 --> 00:13:28.404
And I believe that works then on working out the chance of that happening, using what we call the p value.

00:13:28.774 --> 00:13:29.563
Absolutely right.

00:13:29.654 --> 00:13:39.735
Yeah, so a p value is the chance of observing a data set at least as extreme as yours if the null hypothesis is true.
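
In practice that probability usually comes straight out of a statistical test. A minimal scipy sketch with invented data for two groups:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(140, 15, size=50)  # systolic BP, mmHg (invented)
treated = rng.normal(133, 15, size=50)  # treatment shifts the mean by about -7

t_stat, p = stats.ttest_ind(treated, control)
print(p)  # small p: data this extreme would be unlikely if there were truly no difference
```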

00:13:40.014 --> 00:13:52.903
And obviously, the more your data deviates from what you would expect under the null hypothesis, the chance of those results arising due to chance alone, without your treatment having any true effect,

00:13:53.344 --> 00:13:57.443
becomes smaller and smaller as the differences become more and more extreme.

00:13:57.445 --> 00:14:10.214
The important thing to remember about p values is that they have come under quite a lot of criticism in recent times, largely because that is the only information that they convey.

00:14:10.604 --> 00:14:16.443
They convey the probability that, you know, your results arose due to chance alone.

00:14:17.174 --> 00:14:22.573
And so the lower that chance is, you know, the more confidence you can have in rejecting the null hypothesis.

00:14:23.144 --> 00:14:28.953
But it doesn't tell you anything about the magnitude of the change that you're actually observing.

00:14:28.955 --> 00:14:38.345
I always thought that increasing the sample size increases the chance of getting a significant p value, and that might be one of the reasons it was criticised in that sense.

00:14:38.735 --> 00:14:41.583
I also realised too that the actual value of 0.05

00:14:41.585 --> 00:14:48.605
was just decided arbitrarily by Fisher, an early statistician, who thought that 1 in 20 was a reasonable number to choose.

00:14:48.894 --> 00:14:49.924
That's absolutely right.

00:14:49.924 --> 00:15:01.215
So the value that you choose as the threshold for what you consider a significant p value, versus one which you're going to ascribe to chance alone, is completely arbitrary.

00:15:01.284 --> 00:15:06.894
There's been a convention for a long time now to set that value at five percent, or 0.05,

00:15:06.955 --> 00:15:15.153
but as medical literature becomes more and more common, you know, there are more and more p values,

00:15:15.565 --> 00:15:39.294
studies conducted and papers published every day, year after year, and we start to see a little bit of a phenomenon whereby there can be other issues at play, such as publication bias and a few other bits and pieces, which undermine this conventional thinking that, you know, a one in 20 chance is something which is never going to happen unless there's a true effect.

00:15:39.705 --> 00:15:42.043
Um, you know, if you've got thousands of

00:15:42.784 --> 00:15:50.325
papers being published every day, and they're all testing hypotheses, you know, that's way more than 20 papers.

00:15:50.664 --> 00:15:54.875
You're going to get lots of them that are going to have statistically significant results due to chance alone.
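That one-in-20 arithmetic is easy to simulate: run many "studies" in which the null hypothesis is true and count how many cross p < 0.05 by chance alone (a sketch, scipy assumed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
false_positives = 0
for _ in range(1000):             # 1000 'studies' in which the null is true
    a = rng.normal(0, 1, 30)
    b = rng.normal(0, 1, 30)      # both groups drawn from the same population
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

print(false_positives)            # roughly 50, i.e. about 1 in 20
```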

00:15:55.475 --> 00:16:18.720
And so for that reason, you know, there's probably a bit of a push to start using lower p values to define significance, and/or just encouraging readers to interpret p values without necessarily feeling forced to ascribe a single arbitrary threshold to them as to whether they're significant or not.

00:16:19.049 --> 00:16:29.779
You can certainly look at it as a probability chance as to how likely this data set arose due to chance alone and make up your own mind about whether you think there's a true effect there or not.

00:16:29.870 --> 00:16:32.429
I'd like to let you know that Aussie Med Ed is supported by Healthshare.

00:16:32.759 --> 00:16:38.009
Healthshare is a digital health company that provides solutions for patients, GPs and specialists across Australia.

00:16:38.658 --> 00:16:46.059
Two of HealthShare's products are BetterConsult, a pre-consultation questionnaire that allows GPs to know a patient's agenda before the consult begins,

00:16:46.549 --> 00:16:48.879
as well as HealthShare's Specialist Referrals Directory,

00:16:49.470 --> 00:16:53.110
a specialist and allied health directory helping GPs find the right specialist for their patients.

00:16:53.879 --> 00:16:55.460
What about the opposite end of the spectrum?

00:16:55.460 --> 00:16:57.839
So what you're saying at the moment is that 0.05

00:16:57.839 --> 00:16:58.840
might be a bit high.

00:16:59.190 --> 00:17:02.099
What about the people who talk about, oh, things approaching significance?

00:17:02.589 --> 00:17:05.519
There are probably two sides to that.

00:17:05.660 --> 00:17:30.515
On one hand, I think to maintain integrity in the research process, you do need to be faithful to the traditional scientific method, and I think to claim that you have found a causal link between two things, you know, requires a whole lot of things to line up.

00:17:30.575 --> 00:17:37.025
One of which is to have observed a difference, a true difference, due to your intervention.

00:17:37.575 --> 00:17:53.345
And the way to do that really is to define what you think is an important difference clinically before you start the study, and to also define exactly how you're going to analyse the outcome.

00:17:53.765 --> 00:18:01.384
And as part of that, I think you do need to define a threshold level of significance and stick to that in your analysis and in your write up.

00:18:01.494 --> 00:18:09.694
Certainly that can be frustrating for authors, I think, who might, in their data set, observe probably the effect that they were looking for.

00:18:09.724 --> 00:18:17.115
The effect might be slightly smaller than they were expecting, or the variance in the data set slightly higher than they were expecting.

00:18:17.355 --> 00:18:20.525
And as a result, their p value is not quite as small

00:18:20.890 --> 00:18:26.259
as the threshold value that they had picked prior to starting the experiment.

00:18:26.430 --> 00:18:37.769
So in those scenarios, I think as the author of the published paper, you just need to stick to your a priori decision making framework.

00:18:37.890 --> 00:18:51.059
But that's not to say that as readers we can't also consider the fact that p values are simply probabilities that results arose due to chance.

00:18:51.219 --> 00:19:07.479
Yes, the arbitrary thresholds are important, but they can be interpreted in another framework, I guess, as the reader, if you're not the person who has set the level when registering a study, for example.

00:19:08.229 --> 00:19:14.739
So what about my other thought too: the larger the study, the greater the power of the study, the more chance it reaches significance.

00:19:14.739 --> 00:19:15.839
And so it almost seemed like

00:19:16.170 --> 00:19:20.569
if you just kept increasing the sample size, everything would end up with a 0.05

00:19:20.569 --> 00:19:21.329
p value?

00:19:22.558 --> 00:19:23.429
Yeah, that's right.

00:19:23.588 --> 00:19:36.138
In the calculation of a p value, usually the things that will influence it are the size of the effect, the variation that exists in the data set, and what the sample size is.

00:19:36.749 --> 00:19:49.148
So, for any given combination of effect size and variance in a data set, the larger your data set, the smaller your p value is going to be.

00:19:49.608 --> 00:20:08.509
And so as things like electronic medical records and data linkage and data sharing become more and more common, the possibility of mega datasets to emerge becomes more and more realistic and more and more common, and certainly that's a phenomenon worth bearing in mind too.

00:20:09.048 --> 00:20:13.558
So any time that a p value is very small,

00:20:13.659 --> 00:20:23.939
it means that there probably is an effect there, but the p value may be small for any one of those three reasons.

00:20:23.979 --> 00:20:30.098
So, low variance in the data set, a large magnitude of effect, or a very large sample size.
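A sketch of the sample-size effect: the same small effect and the same variance, but the p value collapses as n grows (simulated, invented numbers):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
for n in (20, 200, 2000, 20000):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.1, 1.0, n)   # tiny fixed effect of 0.1 standard deviations
    print(n, stats.ttest_ind(a, b).pvalue)  # p value shrinks as n grows
```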

00:20:30.608 --> 00:20:36.479
So if it's a large magnitude of effect, then obviously that's clinically very important to us as clinicians.

00:20:36.969 --> 00:20:44.368
The other two are more statistical phenomena, which are not so important for how effective the treatment is.

00:20:44.778 --> 00:20:49.028
Um, so it is important to look at the sample size when considering a p value.

00:20:50.038 --> 00:21:06.844
So if you see a significant p value, but the actual overall result shows a very small effect, you might be thinking, well, okay, that's useful to know, but it's not going to really change my clinical practice, as much as a huge variance with a significant p value and a small sample size, where you go, that's really important.

00:21:07.693 --> 00:21:08.263
Exactly.

00:21:08.933 --> 00:21:12.753
What about this idea of confidence intervals that you also see talked about as well?

00:21:13.003 --> 00:21:16.614
I get a little bit confused on that because it seems to have a range and it has a number in the middle.

00:21:16.615 --> 00:21:18.193
Can you explain that to me?

00:21:18.253 --> 00:21:20.933
And does that have anything to do with what we're talking about with p values as well?

00:21:21.824 --> 00:21:22.354
Sure.

00:21:22.564 --> 00:21:35.644
So confidence intervals are a range of plausible values within which the true population value will lie with a particular degree of confidence.

00:21:36.304 --> 00:21:40.084
Typically they're presented as 95 percent confidence intervals.

00:21:40.364 --> 00:21:47.473
So it's a range of values which will include the true population value with 95 percent certainty.

00:21:47.963 --> 00:22:02.173
So, in many senses they're analogous to p values, but they have the added advantage of providing a range of values, not just a probability of whether something arose due to chance or not.

00:22:02.203 --> 00:22:08.923
So the range of values that's provided by the confidence interval can give you an idea of the magnitude of effect.

00:22:09.144 --> 00:22:13.814
Often they can be calculated around an estimated value, so you might estimate,

00:22:14.128 --> 00:22:17.769
you know, the average difference in blood pressure between two groups.

00:22:18.128 --> 00:22:31.719
So the average difference in blood pressure has a single particular value, and that's the value that you'll see in the middle of the confidence interval range, and either side of that you have the edges of the confidence interval range.

00:22:31.999 --> 00:22:36.278
So that's the range of plausible values that the, the difference in blood pressure could take.

00:22:37.064 --> 00:22:47.134
So in that scenario, if you then gave a medication that adjusted the blood pressure of the group of patients, they would then have a range and a confidence interval of 95%.

00:22:47.634 --> 00:22:50.933
And then do you compare the two confidence intervals that way and how do you do it?

00:22:51.493 --> 00:22:53.575
Not ideally like that.

00:22:53.584 --> 00:23:03.074
So you could calculate a blood pressure value in the first group and a confidence interval around that.

00:23:03.608 --> 00:23:07.808
And you could calculate a blood pressure in the second group and a confidence interval around that.

00:23:07.959 --> 00:23:24.469
What would be preferable, if you knew that the aim of your study was to compare the difference in blood pressure, or how it changes when you administer your treatment, is to make the outcome of your trial deliberately the difference in blood pressure.

00:23:25.088 --> 00:23:30.679
And so that's blood pressure one minus blood pressure two, and that then becomes a single value.

00:23:31.364 --> 00:23:39.163
You can then calculate a confidence interval around that single difference value.

00:23:39.344 --> 00:23:41.624
And that's much more useful.

00:23:41.834 --> 00:23:52.183
And it's a more valid way of telling what the difference is between two groups than simply comparing the overlap of two separately created confidence intervals.
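
As a rough sketch of that approach, here is a 95% confidence interval calculated around the single difference value, using the usual two-sample t formula (scipy assumed; the BP data are invented, and the pooled degrees of freedom are a simplification):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
bp1 = rng.normal(140, 15, 60)  # group 1 systolic BP (invented)
bp2 = rng.normal(132, 15, 60)  # group 2 systolic BP (invented)

diff = bp1.mean() - bp2.mean()                    # the single difference value
se = np.sqrt(bp1.var(ddof=1)/len(bp1) + bp2.var(ddof=1)/len(bp2))
dof = len(bp1) + len(bp2) - 2
t_crit = stats.t.ppf(0.975, dof)                  # 95% two-sided critical value

print(diff - t_crit*se, diff + t_crit*se)         # range of plausible true differences
```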

00:23:52.574 --> 00:23:57.854
We might come back to that example in a second when we start talking about the different tests and things we use, and talk about

00:23:58.348 --> 00:24:02.479
whether it's a parametric test and what test you'd use in that example.

00:24:02.999 --> 00:24:09.659
If we move on a bit further though, what are the common pitfalls in interpreting statistical data in medical studies and how can they be avoided?

00:24:10.269 --> 00:24:18.419
I would say that the most common pitfall is probably to assume that a study is well conducted and that the conclusions are valid.

00:24:18.979 --> 00:24:24.358
I find it's much better to assume the opposite and ask the authors to prove you wrong.

00:24:24.499 --> 00:24:26.909
If they can't do that, I'll just remain dubious.

00:24:27.584 --> 00:24:30.263
Okay, almost like a null hypothesis on the study you're reading.

00:24:31.134 --> 00:24:31.763
Exactly.

00:24:32.054 --> 00:24:36.894
So, what are the key characteristics of a well designed and robust study that you need to look for?

00:24:37.693 --> 00:24:50.374
So there are a number of factors, and really, there's a huge long list of things to look for, because good research is really just a process of doing lots of little things right.

00:24:50.523 --> 00:24:56.868
And if you do all of those little things right, then you'll have done good research.

00:24:57.058 --> 00:25:01.009
If you do none of those little things, then it's bad research.

00:25:01.348 --> 00:25:09.298
And there's a huge amount that sits in the middle there somewhere, where they've done some things right, or many things right, and some things not so well.

00:25:09.730 --> 00:25:20.929
But I guess some of the important concepts to understand are things like study design, which is probably the single most important factor.

00:25:20.929 --> 00:25:28.844
So, whilst it's possible to have a well conducted case series or a well conducted observational study,

00:25:29.003 --> 00:25:49.354
if we assume that studies have all been conducted with a similar level of rigour, then a randomised controlled trial that is blinded is a much better design than an observational study of any sort, and an observational study that makes some attempts to adjust for confounding

00:25:50.344 --> 00:25:55.023
is better than a case series, and a case series is better than a case report.

00:25:55.253 --> 00:26:02.804
So that level of evidence that people are probably familiar with in terms of an evidence based pyramid still holds true.

00:26:03.034 --> 00:26:13.594
Probably the caveat to that is the fact that publication bias is a real phenomenon, and that can certainly influence meta analysis findings.

00:26:14.213 --> 00:26:37.878
Often the meta analyses and systematic reviews sit at the top of that pyramid, but sometimes, if a meta analysis shows a difference between two groups, or that a treatment is effective, it's probably actually better to go and conduct a single, really well designed, robust, large, pragmatic trial to confirm those results, to ensure that it's not due to publication bias.

00:26:39.219 --> 00:26:55.433
If you've got a systematic review or a meta analysis which demonstrates no difference between two groups, then we can be pretty confident that that hasn't arisen due to publication bias, and you can probably take that result as being a true one.

00:26:55.773 --> 00:27:00.624
I was going to quickly just ask, just for the listener, what a systematic review and a meta analysis is.

00:27:00.634 --> 00:27:03.193
Can you just explain to them what that involves?

00:27:03.763 --> 00:27:20.858
Yeah, so a systematic review is simply a systematic search through the published literature. Meta analysis is a process which is used to pool results from multiple studies, and it comes up with an average effect.

00:27:21.489 --> 00:27:26.719
So often, meta analyses and systematic reviews are put together.

00:27:26.999 --> 00:27:31.618
You would need to conduct a systematic review before being able to conduct a meta analysis.

00:27:32.159 --> 00:27:45.588
So the idea of a meta analysis is that it's a way of generating a large amount of data to answer a question without necessarily needing to do that within a single new trial.

00:27:45.608 --> 00:27:59.433
It's using existing results in the medical literature to come up with a pooled estimate. It's particularly helpful if you have multiple small trials, particularly if they have some difference in their results.

00:28:00.213 --> 00:28:08.433
And publication bias would be where lots of studies have been pooled, but the individual studies aren't of great quality and therefore they influence the results?

00:28:09.054 --> 00:28:20.709
Yeah, publication bias is typically this phenomenon whereby studies which show a difference between groups or treatment effectiveness are more likely to be published than those that don't.

00:28:21.199 --> 00:28:27.959
And so when people conduct their systematic review, generally you can only find studies that have been published.

00:28:28.288 --> 00:28:40.828
So there might be a whole range of studies that people have conducted which represent the, you know, the true effect of your treatment or intervention, which have never made it to print and therefore never make it into a systematic review and meta analysis.

00:28:42.759 --> 00:28:43.919
I'm learning all the time.

00:28:44.048 --> 00:28:51.278
So what other types of biases do we need to be aware of to try and design a robust and ideal study?

00:28:51.368 --> 00:28:57.348
Essentially anything you can think of that can go wrong in a study can be a potential source of bias.

00:28:57.598 --> 00:29:00.878
It depends greatly on the study design and what you're doing.

00:29:01.138 --> 00:29:02.828
You might be conducting a survey.

00:29:03.449 --> 00:29:07.388
There might be ways that you are asking the questions which are a little bit leading.

00:29:07.689 --> 00:29:09.138
That can introduce bias.

00:29:09.679 --> 00:29:30.894
You know, if you're measuring an outcome, it might be that the person who's measuring the outcome isn't doing it in a particularly objective way, and if they know which group a participant has been assigned to, you know, we have all sorts of inherent biases within us as humans that can come out whether we mean them to or not.

00:29:31.324 --> 00:29:41.104
There are other phenomena as well in trial conduct whereby patients might be excluded from the trials for particular reasons, and that might bias the results.

00:29:41.644 --> 00:29:54.943
Or it may be that if we're looking at how much a blood pressure pill drops our blood pressure by, if the pill which drops the blood pressure the most also kills patients, then the fact that patients drop out of our data set

00:29:55.359 --> 00:30:03.069
because they die is going to sort of dilute the observed treatment effect from the pill which drops blood pressure the most.

00:30:04.029 --> 00:30:08.598
So, there's really no end to the potential sources of bias.

00:30:08.869 --> 00:30:14.068
They can be sort of typically categorised into some of the more common forms, but it can be anything really.

00:30:14.068 --> 00:30:17.890
It's anything which generates a systematic deviation from the true value.

00:30:19.674 --> 00:30:29.325
Yeah, but in that pill discussion, if you had a red pill versus a blue pill, the blue pill might be more calming and get a slightly better drop in blood pressure because of the placebo effect as well.

00:30:29.785 --> 00:30:32.055
And that can lead to a treatment bias as well.

00:30:32.055 --> 00:30:32.663
Is that correct?

00:30:32.733 --> 00:30:33.894
Yes, absolutely.

00:30:34.595 --> 00:30:43.224
Are there any other particular things you'd like to do in a study, apart from excluding bias and making sure it's been assessed appropriately and conducted appropriately?

00:30:43.375 --> 00:30:45.734
Is there anything else that could make a better study overall?

00:30:47.740 --> 00:30:55.000
Yeah, so some of the other more important points are probably to register your study prospectively.

00:30:55.240 --> 00:31:07.509
Publicly announcing what it is you're going to do, who you're going to study, what parameters you're going to collect, what your primary outcome of interest actually is, and how you're going to analyse it.

00:31:07.819 --> 00:31:15.180
If you can describe all of those things before you start your study, that gives you much more faith that it's a valid

00:31:15.539 --> 00:31:17.039
result and conclusion.

00:31:17.599 --> 00:31:27.849
The alternative to doing that is to not publicly disclose any of those things and simply publish your results and describe your conclusions after the study's conduct.

00:31:28.109 --> 00:31:36.190
The problem with doing that is this phenomenon of p value hunting which is a symptom of the publication bias that we referred to earlier.

00:31:36.789 --> 00:31:53.184
So authors know that their studies are more likely to be published if there's a difference between their groups, and often scientists and clinician researchers will conduct studies because they think that there is an effect there to be observed.

00:31:53.615 --> 00:32:11.954
And so if people are then left to their own devices, some of those inherent human biases can come out, and people can sort of, you know, change the outcome that they were initially intending to look at, because one of the other outcomes they collected in their study now seems a much more interesting result.

00:32:12.065 --> 00:32:14.555
You know, there's a difference between the two groups and it

00:32:14.894 --> 00:32:19.615
now seems, you know, quite clinically important, so maybe we'll report this one instead.

00:32:19.884 --> 00:32:30.045
Or they'll slightly change the definition of how they define their primary outcome and report that newly defined version, because that provides a more statistically significant and interesting result.

00:32:30.173 --> 00:32:37.325
So registering studies, protocols and statistical analysis plans prospectively helps to guard against those things.

00:32:38.545 --> 00:32:43.025
And obviously for the listener, there are just things that can creep into studies that we need to watch out for.

00:32:43.974 --> 00:32:48.184
Yeah, I mean, people do study this from time to time.

00:32:48.184 --> 00:32:56.505
They'll take random samples from even quite respectable journals and examine the incidence with which some of these things happen.

00:32:56.684 --> 00:33:07.565
And whilst, you know, it's not to say that this happens all the time and everyone does it, it certainly has a significant prevalence in the medical literature across all disciplines.

00:33:08.625 --> 00:33:10.984
And what are the red flags that we need to look for in that scenario?

00:33:10.984 --> 00:33:12.615
Is it just the things you've already talked about?

00:33:12.615 --> 00:33:14.984
Is there anything particular that comes to mind that you see?

00:33:15.974 --> 00:33:23.473
I mean, by and large, it's the opposite of all of the things we've just talked about that are the markers of good research.

00:33:24.154 --> 00:33:28.354
And anytime you don't see those, that's a potential marker of bad research.

00:33:29.263 --> 00:33:56.865
Another thing that can help sometimes in a study, to set your mind at ease that a finding is robust and true, is a sensitivity analysis, because sometimes, despite our best intentions, there are actually multiple valid ways of defining an outcome or analysing a particular variable, and those different definitions and analysis methods can each have their own assumptions and drawbacks.

00:33:56.894 --> 00:34:06.720
Not that they're necessarily wrong to do it that way, but there are just some assumptions that are made as part of the analysis process.

00:34:07.839 --> 00:34:21.239
So sensitivity analyses are designed to ask the question: what if we didn't make those assumptions, or what if we used the alternative but also valid definition of the primary outcome?

00:34:21.639 --> 00:34:26.369
What if we analysed it in a different way which is also considered appropriate,

00:34:26.929 --> 00:34:28.210
you know, in our field?

00:34:28.289 --> 00:34:40.809
So going through that process and then verifying if we didn't make these assumptions and we did it this alternative way, do we end up with the same study conclusion or does it change our study conclusion?

00:34:41.489 --> 00:35:12.239
So if you do a range of sensitivity analyses and you get a different answer each time, that's quite unsettling in terms of, you know, being able to have confidence in the result. Whereas if there's a range of sensitivity analyses that are done and they all support the same study conclusion, then you can have a lot more confidence that those little assumptions that are made each step along the way, in the analysis, the definition of outcomes, and perhaps the choice of the primary outcome as well, are not making or breaking the study, I guess.
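
As a toy sketch of a sensitivity analysis, here is the same invented data analysed two defensible ways, checking whether the conclusion survives (the non-parametric test stands in for "an alternative but also valid analysis method"):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
a = rng.normal(10, 2, 40)   # invented outcome data, group A
b = rng.normal(11, 2, 40)   # invented outcome data, group B

p_primary = stats.ttest_ind(a, b).pvalue         # primary analysis: parametric t test
p_sensitivity = stats.mannwhitneyu(a, b).pvalue  # sensitivity: non-parametric alternative

print(p_primary, p_sensitivity)  # reassuring if both support the same conclusion
```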

00:35:13.070 --> 00:35:17.849
A question that's come to mind while we're speaking, are there any tools to help you analyse the research you're reading?

00:35:18.239 --> 00:35:22.510
Are there any guides at all, particular checkboxes you need to look at?

00:35:23.019 --> 00:35:23.900
Yeah, there are.

00:35:23.900 --> 00:35:29.130
There are some really useful guides, checkboxes, and help kits.

00:35:29.260 --> 00:35:39.130
Probably the best ones to know about, which are the most widely applicable across all study designs and all research, are the CONSORT tools and checklists.

00:35:41.065 --> 00:35:47.704
CONSORT was a group that was formed to promote high quality research.

00:35:47.775 --> 00:35:58.315
And they have produced a range of guidelines and checklists, which are really good to use if you're thinking of designing a study, but also as a reader as well.

00:35:58.344 --> 00:36:03.275
You know, any time you pick up a new study and read it, if you're running a journal club or anything like that.

00:36:03.695 --> 00:36:09.545
They can be a fantastic resource to use and they've got a different checklist for different scenarios.

00:36:09.735 --> 00:36:18.534
There's a guide and a checklist for randomised controlled trials, there's one for meta analyses, there's one for observational studies, and they're very useful.

00:36:18.945 --> 00:36:22.454
So what are the common tests we need to know or should be aware of?

00:36:22.925 --> 00:36:30.855
I think the most common test being used at the moment would be a Student's t test for continuous data.

00:36:31.278 --> 00:36:35.289
And a chi squared test or Fisher's exact test for categorical data.
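A minimal sketch of those two categorical tests on an invented 2x2 table (rows: treated vs control; columns: improved vs not; scipy assumed):

```python
from scipy import stats

table = [[20, 30],   # treated: 20 improved, 30 did not (invented counts)
         [10, 40]]   # control: 10 improved, 40 did not

chi2, p, dof, expected = stats.chi2_contingency(table)
print(p)                                  # chi squared test of independence

odds_ratio, p_exact = stats.fisher_exact(table)
print(p_exact)                            # Fisher's exact test, preferred for small counts
```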

00:36:35.639 --> 00:36:44.418
Those two tests are relatively easy to calculate and are very robust in a range of different scenarios.

00:36:44.429 --> 00:36:53.639
So they've become commonly used tests for good reason and they can be applied to a large range of scenarios.

00:36:53.648 --> 00:36:55.679
So they are good ones to know about.

00:36:55.958 --> 00:36:59.045
In terms of how many other tests there are,

00:36:59.485 --> 00:37:05.045
there's a huge gamut of different tests out there, and then there's the question of whether or not to consult a statistician.

00:37:05.605 --> 00:37:26.903
There are some scenarios where, I think, if you feel confident to analyse a data set yourself, then that can be a great thing, and there are other times when, if you feel not so confident, involving a statistician is a really good thing to do. You know, there's a range of complexities in data sets and analyses, and so where you sit as an individual,

00:37:27.313 --> 00:37:34.623
and where the study data set and analysis sit on that spectrum, is always going to be a different beast for different projects.

00:37:34.974 --> 00:37:42.023
I guess the one thing I've learned in my statistical studies over the years is how easy it is to get it wrong sometimes.

00:37:42.184 --> 00:37:47.373
So I've certainly learned to have a very low threshold for involving a statistician.

00:37:47.844 --> 00:37:54.313
You know, even being an accredited statistician myself now, I have a very low threshold for involving someone else.

00:37:54.813 --> 00:37:57.963
Not necessarily for them to do the whole analysis.

00:37:58.253 --> 00:38:03.373
In some cases where it's outside of my scope, I'll ask someone to do the whole thing for me.

00:38:03.664 --> 00:38:08.974
And in other scenarios, I might just ask for some supervision of what I'm doing.

00:38:09.264 --> 00:38:23.898
So, I think any of those approaches are fine. It's whatever you feel comfortable with, and even for a given individual that will vary from project to project, because different projects involve different levels of complexity.

00:38:24.179 --> 00:38:28.969
But I'd always err on the side of involving a statistician if you're not sure.

00:38:29.119 --> 00:38:36.239
And I would always err on the side of doing this work, putting this thinking in, and involving the statistician, if you're going to do it,

00:38:36.878 --> 00:38:45.469
earlier rather than later in the project; you know, ideally the initial design phase of any study is where that involvement should come.

00:38:45.838 --> 00:38:53.438
For healthcare professionals, how can they integrate this sort of assessment of studies and better evidence based medicine into their clinical practice effectively?

00:38:53.878 --> 00:38:55.068
What are your thoughts on this?

00:38:55.668 --> 00:38:56.719
Uh, that's a tricky one.

00:38:57.199 --> 00:39:02.949
So changing clinical practice is difficult and definitely not my area of expertise.

00:39:03.639 --> 00:39:11.409
You know, I think understanding the evidence is the first step, uh, and that's certainly where I am more comfortable.

00:39:11.739 --> 00:39:21.148
From there, I think, you know, there are many other organisational and interpersonal elements that start to become increasingly important.

00:39:21.594 --> 00:39:24.514
And I guess, you know, how do you make that transition?

00:39:24.804 --> 00:39:32.054
I guess starting to talk to your colleagues about this new evidence that you've read is probably the first good step.

00:39:32.273 --> 00:39:45.753
You'll get a little bit of feedback from them and if you both agree, hey, this is a promising new revelation, you know, maybe we should look at changing our practice, at least then you've got a friend alongside you.

00:39:45.824 --> 00:39:52.268
So, that's probably as far as I would take it in terms of, you know, my level of expertise.

00:39:52.289 --> 00:40:04.228
After that, you know, there are lots of other people out there, lots of my colleagues who are much better versed in the ideas of quality improvement programs and organizational change.

00:40:04.469 --> 00:40:07.429
The role of managers, um, comes into that.

00:40:07.458 --> 00:40:11.918
It starts to become, you know, managerial domain as well.

00:40:12.039 --> 00:40:12.228
So,

00:40:13.378 --> 00:40:36.579
depending on what the change is that you're trying to implement, it can be quite complicated and difficult, and whether or not there's evidence to support efficacy in terms of patient outcomes, or cost effectiveness in terms of healthcare dollar savings, is almost one small part of a much more complex piece of machinery to actually make that change in clinical practice, I'd say.

00:40:37.039 --> 00:40:45.143
I'm also aware that there's a science behind this change in clinical practice as well, but again, it's not really my area of expertise.

00:40:45.563 --> 00:40:49.614
Well, what we've really summarised is that statistics are important for analysing results.

00:40:49.664 --> 00:41:05.554
But unless you do a properly conducted study, assessing what biases there might be, outlining what you're planning to assess along the way, and using appropriate statistics to analyse that, then you can't really be confident that the results you're showing are definitely what is actually happening in real life.

00:41:06.153 --> 00:41:09.923
You've also got to assess studies that you're reading in a similar fashion.

00:41:10.273 --> 00:41:14.014
And use tools, such as the CONSORT checklists you've mentioned, to assess them.

00:41:14.614 --> 00:41:21.454
And also being aware that when you do assess a result, it could be useful in clinical practice to involve a team to help critically assess it.

00:41:22.103 --> 00:41:24.494
Would that be a good summary of what we've been discussing today?

00:41:24.784 --> 00:41:26.434
Yeah, I think that's a fantastic summary.

00:41:26.563 --> 00:41:27.134
Well done.

00:41:27.525 --> 00:41:30.844
I always like to finish off with the use of artificial intelligence.

00:41:30.974 --> 00:41:38.293
What about the role of artificial intelligence in helping analyse data, or is that going to have a greater risk of introducing biases along the way?

00:41:38.934 --> 00:41:41.704
Oh look, I think you've hit the nail on the head.

00:41:41.713 --> 00:41:49.514
It has huge potential, but there is also the potential for major biases and problems.

00:41:50.054 --> 00:41:55.233
I think artificial intelligence covers a range of different concepts.

00:41:55.353 --> 00:42:02.403
The one which is most relevant to statistics is probably machine learning.

00:42:03.224 --> 00:42:20.844
Machine learning is a way of analysing data based on some automated, pre-specified rules, but usually it is a multi-layered process, or a process which feeds back on itself numerous times.

00:42:20.884 --> 00:42:30.643
But each layer, or each time it conducts this process, it's usually doing some sort of relatively simple or straightforward

00:42:31.143 --> 00:42:35.983
analysis technique, which is not new, something that already exists in the world.

00:42:36.233 --> 00:42:39.233
For example, something like a logistic regression model.

00:42:39.233 --> 00:42:53.454
If you have a binary outcome, you know, a machine learning model might use something like a random forest, and that can use a series of layered decision trees to decide a final binary outcome.
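
For concreteness, a minimal scikit-learn sketch of a random forest deciding a binary outcome on invented toy data; this is an illustration, not the method of any particular study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 3))                          # three invented predictors
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)   # invented binary outcome

# An ensemble of decision trees, each fitted to a bootstrap sample of the data,
# voting together on the final binary prediction
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.predict(X[:5]))
```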

00:42:54.688 --> 00:43:00.918
And it does that without necessarily disclosing a lot of the information.

00:43:01.789 --> 00:43:16.228
I think maybe one of the things we could have talked about earlier, that now becomes relevant to assessing AI, is that it's always good to see the diagnostics of any of the analyses that have been done in studies.

00:43:16.398 --> 00:43:26.733
So ideally, people can present their study data set in a de-identified form, and present the code that they use to analyse it.

00:43:27.014 --> 00:43:35.324
And some techniques such as regression models can often be tested to see how well the model actually fits your observed data.

00:43:35.353 --> 00:43:38.344
So those model diagnostics can be really good to present as well.

00:43:38.713 --> 00:43:42.684
So there are some things that can be good to look for in good research.

00:43:42.704 --> 00:43:46.884
And the absence of those things can sometimes be a flag for bad research.

00:43:47.023 --> 00:43:52.563
With AI it just takes that to the next level because it's doing so much computational work.

00:43:52.594 --> 00:44:00.813
It requires a lot of code and there's a lot of output from the code because it's running multiple models again and again and again in this iterative process.

00:44:01.204 --> 00:44:09.824
But really the only way to be sure that they haven't done something silly is to look at all of that code and all of that output.

00:44:10.224 --> 00:44:18.483
And so we don't necessarily expect that everyone will be able to understand that code and that output, but some people can.

00:44:18.574 --> 00:44:31.983
And I think the one thing that we would ask is that those things are published each and every time someone uses machine learning as part of an analysis method for a study.

00:44:32.134 --> 00:44:40.233
Simply to publish the code, ideally the data set, and the output, so that it can be verified.

00:44:41.278 --> 00:44:48.949
Basically, without knowing all the intricacies of what it is actually computing, you really can't be sure that the numbers that come out are correct.

00:44:49.018 --> 00:44:49.768
No, that's right.

00:44:49.829 --> 00:44:52.509
There are some scenarios where it can be really useful too.

00:44:52.559 --> 00:45:01.719
Like, I think in something like image recognition, the pattern recognition aspect of it, through an iterative process, can be extremely useful.

00:45:01.739 --> 00:45:06.079
But you contrast that against something that has a numerical outcome.

00:45:06.079 --> 00:45:18.219
For example, if we come back to our blood pressure scenario, usually in a blood pressure study the outcome is a number which can be calculated based on a relatively simple formula.

00:45:18.469 --> 00:45:26.179
So you'll see some people occasionally applying something like a machine learning model to calculate a blood pressure study outcome.

00:45:26.684 --> 00:45:39.233
And that's a huge waste of computational power, using an iterative process which starts with a sort of random guess as to what the number might be and gradually narrows it down based on your observed data set,

00:45:39.304 --> 00:45:41.304
when you could actually just calculate

00:45:41.938 --> 00:45:45.148
what that value should be through a relatively simple formula.

00:45:45.429 --> 00:45:47.539
So sometimes it gets a little bit misused as well.

00:45:47.539 --> 00:45:57.259
It's become a bit of a sexy term that people like to apply to whatever it is they're doing irrespective of whether it's actually the most elegant or accurate way of deriving the answer.

00:45:57.478 --> 00:46:06.838
Well, finishing off with that blood pressure model, would you use a chi squared test or Student's t test to analyse the differences in blood pressure after treatment, or what would you tend to use in that scenario?

00:46:08.159 --> 00:46:16.268
Yeah, so blood pressure is a numerical outcome which theoretically could take on any number of values, not just integers.

00:46:16.559 --> 00:46:25.639
So it's a good example of a continuous variable and we know that it's relatively normally distributed amongst most populations.

00:46:25.648 --> 00:46:35.289
So a Student's t test or a linear regression model are two good ways to analyse it, and they should give you essentially the same answer.
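
That equivalence is easy to verify: regressing the outcome on a group indicator gives the same p value as the equal-variance t test (a sketch assuming scipy and statsmodels; data invented):

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(7)
bp = np.concatenate([rng.normal(140, 15, 50), rng.normal(133, 15, 50)])
group = np.repeat([0, 1], 50)   # 0 = control, 1 = treated (invented data)

# Equal-variance two-sample t test
print(stats.ttest_ind(bp[group == 1], bp[group == 0]).pvalue)

# Linear regression of the outcome on the group indicator
X = sm.add_constant(group)
print(sm.OLS(bp, X).fit().pvalues[1])  # p value on the group coefficient: same answer
```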

00:46:37.338 --> 00:46:43.108
Well look, Adam, thank you very much for coming on Aussie Med Ed and enlightening us on statistics and their use in medicine.

00:46:43.318 --> 00:46:44.509
So thank you very much.

00:46:45.208 --> 00:46:46.898
No worries Gavin, thanks for having me on.

00:46:47.179 --> 00:46:48.259
It's been great to be a part of your show.

00:46:49.248 --> 00:46:50.599
That's brilliant, thank you once again.

00:46:50.789 --> 00:46:56.509
I'd like to remind you that all the information presented today is just one opinion and that there are numerous ways of treating all medical conditions.

00:46:56.929 --> 00:47:00.929
It's just general advice and may vary depending upon the region in which you are practising or being treated.

00:47:01.889 --> 00:47:08.898
The information may not be appropriate for your situation or health condition and you should always seek the advice from your health professionals in the area in which you live.

00:47:10.259 --> 00:47:17.639
Also, if you have any concerns about the information raised today, please speak to your GP or seek assistance from health organisations such as Lifeline in Australia.

00:47:18.588 --> 00:47:22.838
Thanks again for listening to the podcast and please subscribe to the podcast for the next episode.

00:47:23.088 --> 00:47:24.898
Until then, please stay safe.

00:47:25.969 --> 00:47:30.168
I'd like to let you know that Aussie Med Ed is sponsored by Avant Medical Legal Indemnity Insurance.

00:47:30.728 --> 00:47:39.568
They tell me they offer holistic support to help the doctor practise safely, and believe they have extensive cover that's continually evolving to meet your needs in the ever-changing regulatory environment.

00:47:39.568 --> 00:47:47.059
They have a specialist medical indemnity team located here in Australia and have access to medico-legal experts 24/7 in emergencies.


Adam Badenoch

Statistics

After completing medical school, internship and residency at Flinders Medical Centre, South Australia, Adam completed his anaesthetic training in South Australia between 2010 and 2014.

He completed fellowships in difficult airway management at the Royal Adelaide Hospital in 2014, followed by Medical Education & Simulation at Flinders Medical Centre in 2015. He travelled to Canada in 2016 to complete a hepatobiliary and liver transplant anaesthesia fellowship at Toronto General Hospital before returning to work in Adelaide in 2017. In 2023 he completed a Master of Biostatistics at the University of Adelaide.

He has a diverse range of interests including ENT, maxillofacial and craniofacial anaesthesia, hepatobiliary and liver transplant anaesthesia, neuroanaesthesia, research and statistics.

Adam is a passionate South Australian, Hawthorn supporter, and father who works publicly at Flinders Medical Centre and privately as part of the Wakefield Anaesthetic Group.