Are you Ready for Reddit?

A Guide to Reddit: The (potential) Front Page of Social Media-Based Medical Education

Thanks to Daniel Cabrera and Scott Kobner for this brilliant post!

What is Reddit?

Reddit is one of the most popular and rapidly growing social media platforms today, with Alexa ranking it as the 36th most popular website in the world. Defining Reddit may be difficult, but it can be described as a socially organized, community-run content aggregator.

The website doesn’t have a central authority organizing and managing posts; it relies on volunteer submitters and moderators (mods) to act as curators with a great amount (some would say too much) of independence. Despite its indie, guerrilla spirit, Reddit is owned by the mega-publisher Condé Nast.

How does it work?

The platform is essentially organized into users called redditors (noted /u/user) and forums or communities called subreddits (noted /r/topic). Users can submit any linkable content, such as other web pages, videos or pictures, or a self-post (text entry), to a specific forum. Each post gets upvoted or downvoted, which, along with a fairly complicated time-decay algorithm, increases or decreases the visibility and noteworthiness of the post.
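Reddit has tweaked its ranking code over the years, so treat the following as a rough, simplified sketch rather than the platform's actual algorithm: the idea is simply that net votes count logarithmically while newer submissions get a time bonus, so fresher, well-voted posts rise to the top.

```python
from datetime import datetime, timezone
from math import log10

# A simplified, illustrative "hot"-style score (not Reddit's actual production code).
# Net votes contribute logarithmically; newer submissions get a time bonus, so a
# well-voted but older post is gradually displaced by fresher popular content.

EPOCH = datetime(2005, 12, 8, tzinfo=timezone.utc)  # arbitrary reference time

def hot_score(upvotes: int, downvotes: int, submitted: datetime) -> float:
    net = upvotes - downvotes
    order = log10(max(abs(net), 1))                 # diminishing returns on extra votes
    sign = 1 if net > 0 else -1 if net < 0 else 0
    age_seconds = (submitted - EPOCH).total_seconds()
    return sign * order + age_seconds / 45000       # 45000 s ≈ 12.5 hours per vote "decade"

# Example: a fresh post with 250 upvotes and 20 downvotes
print(hot_score(250, 20, datetime.now(timezone.utc)))
```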

The user who submits content is awarded karma in proportion to the votes received. Karma is a sort of merit badge or bragging right with no intrinsic or extrinsic value, but it likely works as an immediate-gratification mechanism.

One of the most popular Reddit features is the ability to comment on posts. Every entry contains a discussion section where users can discuss the content of the submission; not uncommonly, the comments are more engaging and interesting than the actual post. Similar rules regarding karma and noteworthiness, based on votes and time, apply to the comments.

A great video from CGP Grey explaining how Reddit works:

The Culture

What makes Reddit different from other social media hubs is the powerful culture behind the platform and its users. The site has its own lingo, code of conduct and heavy anonymity, which leads to a strong sense of community and belonging while creating the feeling of being different from other, tamer platforms.

The community values curiosity, knowledge, wit, engagement and timeliness, and rewards or punishes content accordingly. The values of Reddit resonate particularly strongly with today’s teenagers coming of age and with Generation C, who are defined by being connected, communicating, content-centric, computerized, community-oriented and always clicking.

Reddit can be an extraordinarily positive force of thought and action, but it has also been involved in reprehensible and dangerous acts; Reddit functions as a reflection of the culture in which it exists, and all of its good and bad is a representation of our current society.

How can Reddit be used for social media-based education?

The technical and cultural characteristics of Reddit offer a robust, free-access platform for creating digital communities of learning and practice.

Even acknowledging the criticism of Reddit’s mob-type behaviors, the ability to create and publish content, the capacity for curation by moderators, the asynchronous nature of the platform and the social negotiation of content are key characteristics of the emerging social learning paradigm.

A digital teacher can create a subreddit and ask learners to join as users, then post not only didactic units but also journal-club-style discussions, polls and even evaluations. Learners can interact not only with the teacher but also with other learners in the group, and with other communities and users who may be relevant to the discussion. The teacher, acting as a curator (or moderator), can steer the discussion wherever it appears most appropriate for the curriculum.

A great example of how history teachers use Reddit can be found at /r/AskHistorians

“The future is already here — it’s just not very evenly distributed”

– William Gibson

Stress Inoculation Training

One of the hottest buzzphrases in Emergency Medicine and Critical Care Education is Stress Inoculation Training (SIT).

For this podcast, Swami had the opportunity to sit down and chat with Michael Lauria. Mike is a first-year medical student at the Geisel School of Medicine at Dartmouth, but he has extensive experience in SIT from his time as a Pararescueman in the US Air Force. Mike’s prehospital and retrieval experience is translatable to resident education. To get some background on where Mike is coming from, check out his lecture “Making the Call” on YouTube as well as his recent guest appearance on EMCrit discussing toughness.

Here we discuss the origins of SIT, its use in the military and how we can bring SIT to the world of medical education. We also touch on the strengths and weaknesses of SIT. Mike discusses some concepts that are critical to implementing SIT into resident training. So, take a listen, see what you think and post some comments and critiques. We, of course, would love to hear your thoughts and opinions.

References

Meichenbaum D. Stress inoculation training: a preventative and treatment approach. In: Principles and Practice of Stress Management, 3rd Edition. Guilford Press; 2007.

Saunders T et al. The effect of stress inoculation training on anxiety and performance. J Occup Health Psychol 1996; 1(2): 170-86. PMID: 9547044

LeBlanc VR. The effects of acute stress on performance: implications for health professions education. Acad Med 2009; 84: S25-33. PMID: 19907380

Recommended Reading

Grossman D, Christensen LW. On Combat: The Psychology and Physiology of Deadly Conflict in War and in Peace. 2008. Link

EMCrit Podcast 118 – EMCrit Book Club – On Combat by Dave Grossman.

Asken M, Christensen LW, Grossman D. Warrior Mindset. 2010. Link

Siddle BK. Sharpening the Warrior’s Edge: The Psychology & Science of Training. 1995. Link

Guest

Michael J. Lauria

MS1, Dartmouth Geisel School of Medicine

Critical Care/Flight Paramedic

Dartmouth-Hitchcock Advanced Response Team (DHART)

Teaching Risk Taking Behavior in Medical Education

This post was put together by the incredibly talented and brilliant Swami.

A 44-year-old healthy man presents with dull chest pain for 3 hours. His EKG is unremarkable. What’s his risk for acute coronary syndrome? Should he get a troponin? Two troponins? Observation and a stress test?

The Emergency Department is an inherently high-risk zone.

Emergency Medicine is an inherently risky specialty. In fact, many would say that risk stratification is our specialty. When a patient presents with symptoms, we use our clinical knowledge to determine what we think is the most likely cause of those symptoms. We then apply studies and investigations to help confirm that diagnosis while attempting to “rule out” other diagnoses. At the end of this, we are often left without a specific diagnosis and need to make a disposition. When we decide to admit or discharge a patient, or to have them follow up in 24 hours or in 1 week, we are risk stratifying. For those we send home without a diagnosis, we try to determine how long they can wait to see another doctor for further investigation. We know that some of these patients will decompensate and return to the ED, so we are risk stratifying the likelihood of that decompensation. Thus, during each patient encounter, the Emergency Physician needs to perform multiple risk stratifications. For example:

A 41-year-old man on aspirin presents with minor head trauma. His GCS is 15 and he is neurologically intact. He complains of a mild headache.

  • Does the patient need imaging now?
  • Does the patient need observation for 2 hours? 4 hours? Overnight?
  • Can I send the patient home safely without imaging?
  • Will the patient’s status degrade in the next 24 hours? 48 hours?
  • Should I schedule neurology follow up? If so, when?

This is a fairly simple case, yet multiple risk assessments are involved. Each of these decisions must take into account hospital factors (e.g. the ability to obtain follow-up) and patient factors (e.g. distance from the hospital, reliability of follow-up).

This brings us to the central questions of this post:

  • How do I train residents about risk?
  • How do I train residents to develop their risk threshold?
  • How do I train residents to embrace risk?
“Damn it Jim, I’m a doctor not a CT scanner!”

Clearly, we can see the need for this type of training. While we’d all like to have a magic tricorder to tell us whether the patient has an intracranial injury, a concussion, etc., we don’t. We deal with tests that are less than perfect and make decisions based on those tests. This raises the first point I always discuss with my residents: there is no such thing as a “rule-out” test. There is no test or series of tests that can definitively “rule out” a disease. We use tests to risk stratify the patient. Take another case:

A 22-year-old woman presents with right lower quadrant pain and vomiting. She is tender and you order a CT scan of the abdomen and pelvis. The scan is read as “no intra-abdominal pathology is identified that explains the patient’s pain.” Has the patient been “ruled out” for appendicitis?

We know the answer to this question is no. CT of the abdomen and pelvis has a sensitivity of 98-99%, so there will be patients who are false negatives. In spite of knowing this, we usually tell patients, “You don’t have appendicitis. You’re going to be fine and we’ll be discharging you in a bit.” What we should be telling patients is, “The CT scan doesn’t show signs of appendicitis. I think it’s unlikely you have appendicitis, but the test isn’t perfect. There’s still a chance. We’re going to send you home, but here’s what you need to watch out for.” The second statement is an acknowledgement that we have risk stratified the patient into a low-risk category, not a no-risk one. This approach applies to any patient we see, whether for chest pain, an ankle injury or abdominal pain. Although this may appear to be nothing more than semantics, I argue that this change in terminology is central to teaching what Emergency Medicine is about.
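To put numbers on “low risk but not no risk,” here is a minimal sketch in Python. The pretest probability and specificity are made-up, illustrative figures (plug in your own local estimates); only the 98% sensitivity comes from the discussion above.

```python
# Post-test probability after a NEGATIVE CT, using the likelihood-ratio form of
# Bayes' theorem. Illustrative numbers only: the pretest probability and the
# specificity below are assumptions, not values from the post.

def post_test_probability(pretest_p, sensitivity, specificity):
    negative_lr = (1 - sensitivity) / specificity   # LR- = (1 - sens) / spec
    pretest_odds = pretest_p / (1 - pretest_p)      # probability -> odds
    post_test_odds = pretest_odds * negative_lr     # apply the negative test
    return post_test_odds / (1 + post_test_odds)    # odds -> probability

# Assumed pretest probability 30%, sensitivity 98%, assumed specificity 95%
print(post_test_probability(0.30, 0.98, 0.95))      # ~0.009, i.e. roughly 1 in 110
```

Even with a very good test, the risk never reaches zero, which is exactly the point of the second discharge conversation above.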

Once we’ve rid the residents of the idea of ruling out disease, we need to encourage them to think about risk stratification when they present the patient and to incorporate it into their presentation. Residents are smart, and once they get to know the faculty, they tailor the presentation and their proposed workup to what they think the faculty member will want to hear. We should encourage them, instead, to present the patient and tell us what they would do if they were in charge. This allows them to begin to feel the responsibility of their plans. Unfortunately, this takes time. After they give you their plan, you need to explain why you would do things differently. Why is your plan more or less risky than the resident’s? Explaining this will allow them to develop their risk-taking behavior.

The most important part of risk stratification and risk decisions is the patient. When I finish a patient encounter and am ready to discharge the patient home, I always sit down and have a discussion about risks and the need for follow-up. These two things go hand in hand. Often, patients believe that discharge from the Emergency Department comes with a clean bill of health and a 5-year, 100,000-mile guarantee. This again reflects the disconnect between what we as physicians think and what patients perceive.

When discussing disposition with the patient, sit down and turn off the phone.

How do we teach residents to communicate risk to patients? First, we should start by modeling the behavior. Have the resident follow you while you have this discussion with the patient. Here are some simple things to do to maximize this interaction and model the proper behavior:

  • Sit down and turn off the pager/phone (no interruptions)
  • Explain everything that’s happened during the ED stay
  • Explain the findings (or lack thereof) from your evaluation
  • Discuss your assessment of all of the information, the presence of clinical uncertainty, and the importance of prompt follow-up
  • Discuss how the follow-up will be arranged (does the patient call, or do you call for the appointment?)
  • Ask if the patient has any questions

I also like to add, “I’m okay with being wrong but I want you to give me the opportunity to make it right. Come back and see me or one of my colleagues if anything concerns you.”

There is no act of teaching that benefits the resident more than watching the proper behavior modeled. For the next patient, flip the scenario: have the resident lead the discussion while you watch. Afterwards, offer critique and tips to improve.

"No I've never sent home a patient to die but that's because I'm a good doctor."

Finally, I think there’s an important role for discussing difficult cases where your risk stratification was incorrect. I’m not talking about the formal departmental Morbidity and Mortality conference, but rather about conversations in the clinical environment about these cases. This practice stresses the importance of following up on patients in order to evaluate the appropriateness of your level of risk-taking behavior. Residents should understand that our knowledge and practice of medicine are not perfect and mistakes will be made. The vital thing is to learn from these mistakes and adapt our clinical care and risk-taking behavior accordingly.

Risk stratification and risk-taking behavior are central aspects of Emergency Medicine. It is our job as resident educators to help residents develop these skills and attitudes. Since I’m by no means an expert in this area, I encourage you to email me or post comments on the topic so we can all learn more.

Adopting FOAM: When and How?

In this post, Rob Cooney, the man, the myth, the legend, discusses FOAM and the adoption of FOAM into practice. Recently, the FOAM world was able to have a front-row seat at a great debate between two titans as they went head-to-head over the issue of adopting a change in practice.

Part 1: SMACC Gold: What to Believe-When to Change

Part 2: SMACC Back-On Beliefs of Early Adopters and Straw Men

Part 3: SMACC Back-Back on What to Believe and When to Change   

While I enjoyed the debate and found myself nodding in agreement with each of the speakers, in the end I was left wanting a little more. There was very little practical advice given on how to adopt a change. In fairness to the speakers, this is a high-level construct, a 50,000-foot view of the issue, designed to inspire you and make you think. That being said, Scott Weingart did give a glimpse of the solution:

“You need to put in the time. You need to read. You need to understand how to critically appraise new evidence; how to integrate it into your existing belief structure; how to then test that based on bedside clinical experience; based on your understanding of physiology, based on the specifics of every individual patient.”

He is absolutely correct in his statement. My only concern is that his call for a rigorous method will scare away early adopters and push them back into the early majority. I’m pretty sure that wasn’t his intent, but just in case, I wanted to offer a more “boots on the ground” approach that I think many of us could use to determine when to implement a change in practice.

The methodology that I believe will allow everyday clinicians to adopt changes in their practice is called the Model for Improvement. This year, I have the good fortune of being an IHI/AIAMC fellow. This means that I am learning, living, and breathing quality improvement. The Institute for Healthcare Improvement is a remarkable organization with quite the track record for implementing positive changes in healthcare. The model they use? The Model for Improvement. So what does this model look like?

 

Figure 1: The Model for Improvement

As you can see, the model is based on three critical questions followed by iterative cycles of testing and learning. Let’s break it down piece by piece.

“All improvement requires a change, but not all change is an improvement.”

As you can see from the above quote, if we want to get better at something, we have to make a change. Unfortunately, we can sometimes change things for the worse. Choosing what to change can be difficult. This is why the fundamental questions are critical.

Question 1: What are we trying to accomplish?

The first question can be viewed as the “aim statement.” This question must be answered very specifically. With quality improvement work, we attempt to identify the system, a timeline, and goals.

For example, “Within the next 12 months, we will reduce the door to doctor time in the emergency department from an average of 45 minutes to an average of 20 minutes.”

For practitioners working on individual improvements, it is easy to choose much more manageable chunks to work with. Consider the use of push-dose pressors. Perhaps you have had difficulty with post-intubation hypotension and you’re considering the addition of push-dose pressors. Your aim could simply be, “I want to reduce the incidence of post-intubation hypotension to less than 10%.” The key is to be as specific as possible. As they say at the IHI:

“Hope is not a plan, some is not a number, soon is not a time.”

Question 2: How will we know that a change is an improvement?

 “In God we trust. All others bring data.”

– W.E. Deming

This may seem obvious, but in order to determine whether a change is an improvement, we have to measure something! In the above example of push-dose pressors, you would have to measure your rate of post-intubation hypotension. If the addition of push-dose pressors did little to decrease your rate of hypotension, you’d likely abandon the practice before fully implementing it. While this seems intuitive, the actual measurement process can be made more robust by considering three types of measures:

Outcome measures: These look at the performance of the system under study and are derived from the aim, e.g. the rate of hypotension after intubation.

Process measures: These look at how often an activity is actually carried out, e.g. use of medications and fluids, documentation of before-and-after vital signs.

Balancing measures: Improving one part of the system at the expense of another should be mitigated as much as possible. Balancing measures attempt to look at the performance of the overall system. For example, how long does it take to intubate a patient with the addition of new drugs? Is there more hypoxia after the change?

It is also important to note that there are three kinds of measurement: research, judgment/accountability, or improvement. In terms of improvement data, we are not looking for rigorous, randomized, double-blind, placebo-controlled level data. We simply want “just enough” data to determine whether the change we are implementing is leading to an improvement.
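As a purely illustrative sketch (hypothetical cases and field names, not real data), “just enough” measurement for the push-dose pressor example could be as simple as tallying one outcome measure and one process measure from a running log of intubations:

```python
# Hypothetical log of intubations: did the patient become hypotensive, and was the
# new practice (push-dose pressors) actually used? This is enough to track improvement.
intubations = [
    {"hypotension": True,  "push_dose_pressor_used": False},  # before the change
    {"hypotension": True,  "push_dose_pressor_used": False},
    {"hypotension": False, "push_dose_pressor_used": True},   # after the change
    {"hypotension": False, "push_dose_pressor_used": True},
    {"hypotension": True,  "push_dose_pressor_used": True},
]

# Outcome measure: rate of post-intubation hypotension
outcome = sum(c["hypotension"] for c in intubations) / len(intubations)

# Process measure: how often the new practice was actually applied
process = sum(c["push_dose_pressor_used"] for c in intubations) / len(intubations)

print(f"Post-intubation hypotension rate: {outcome:.0%}")
print(f"Push-dose pressor use: {process:.0%}")
```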

Question 3: What changes can we make that will result in improvement?

This question is where improvement gets to be fun. Depending on the complexity of the system you are trying to improve, you may be able to come up with very simple ideas and test them easily. Answering this question also allows you to be quite creative in the solutions you suggest. Have you seen something work in another industry that you think may apply to your day-to-day practice? Try it out!

Once you’ve answered the three questions above, it’s time to test the actual changes. These are done through iterative cycles known as “PDSA or Plan-Do-Study-Act Cycles.”

 

Figure 2: Use of PDSA cycles

These are simple experiments that take the ideas that you’ve created above and actually test them.

Plan: What are you planning to do? How will you do it? Is anyone else going to test the change with you? How will you collect the data?

Do: Implement the test and collect the data

Study: What did you learn? Did the data match the predictions?

Act: What do you need to change before the next cycle? Did it work well enough that you can apply it more broadly?
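For concreteness, here is one minimal, entirely hypothetical way to keep PDSA cycles honest: write each pass down so that what you learn in “Study” and decide in “Act” becomes the “Plan” of the next, slightly larger cycle.

```python
# A hypothetical PDSA log for the push-dose pressor example. Each cycle records
# its plan, what was done, what the data showed, and what to change next time.
pdsa_log = [
    {
        "cycle": 1,
        "plan":  "Use push-dose pressors for my next 5 intubations; record pre/post BP.",
        "do":    "5 intubations completed, data sheet filled in.",
        "study": "Hypotension in 1 of 5 (20%) versus roughly 40% at baseline.",
        "act":   "Refine the dosing checklist; expand to 20 intubations next cycle.",
    },
    {
        "cycle": 2,
        "plan":  "Repeat with 20 intubations across the group, same data sheet.",
        "do":    None,   # filled in as the cycle runs
        "study": None,
        "act":   None,
    },
]

for cycle in pdsa_log:
    print(f"Cycle {cycle['cycle']}: {cycle['plan']}")
```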

Notice from the above figure that the cycles are designed to be iterative, meaning one cycle flows smoothly into the next as the data guide the improvements. Too often in healthcare, we identify a change and go straight to implementation. This is the dangerous practice that both Scott and Simon cautioned against. The figure below illustrates why it is important to implement changes very slowly. Every change comes with a cost. The higher the cost, the smaller the test should be. If there is a potential for harming a patient, VERY small tests are the first choice. This helps mitigate harm while allowing future cycles to scale up if the change seems feasible. It also allows the early adopters to get things right before pushing the change out to the workforce.

Figure 3

While this model may seem complex, with a little bit of trial and error it is quite simple to apply. It is also universally scalable. Want to lose weight? Apply the model. What am I trying to accomplish? Lose weight. How will I know that a change is an improvement? My weight will drop, my clothing will fit better, I’ll feel better, etc. What changes can I make? Eat less, exercise more, etc. These three critical questions, once answered, can then be tested, measured, and modified. Whether trying out simple new things or attempting to modify complex systems, this model offers a practical approach to making changes that drive improvement. It also allows “less expert” early adopters to safely dip their feet into the world of FOAM.