Smartphone Screenings: Assessing Memory and Cognition using a Mobile App

[Image: Headshot of David Berron, PhD]
[Image: Headshot of Lindsay Clark, PhD]

What if you could test your cognition from the comfort of your own home using a smartphone? Drs. David Berron and Lindsay Clark have spent years researching cognitive neuroscience, culminating in a study published in 2024 investigating the effectiveness of a smartphone app as a tool for detecting cognitive impairment outside of a clinic or research setting. Drs. Berron and Clark join Dementia Matters to discuss how the app and tests were developed, the benefits and drawbacks of this approach, and the implications of remote testing in the healthcare field.

Guests: David Berron, PhD, Clinical Cognitive Neuroscience research group leader, German Center for Neurodegenerative Diseases (DZNE), and Lindsay Clark, PhD, licensed neuropsychologist, clinical core co-lead, Wisconsin Alzheimer’s Disease Research Center (ADRC), assistant professor, Division of Geriatrics and Gerontology, University of Wisconsin School of Medicine and Public Health

Show Notes

Read Drs. Berron and Clark’s study, “A remote digital memory composite to detect cognitive impairment in memory clinic samples in unsupervised settings using mobile devices,” online through the journal npj Digital Medicine.

Learn more about Dr. Berron and his research on his website.

Learn more about Dr. Clark on her profile on the Wisconsin ADRC’s website.

Connect with us

Find transcripts and more at our website.

Email Dementia Matters: dementiamatters@medicine.wisc.edu

Follow us on Facebook and Twitter.

Subscribe to the Wisconsin Alzheimer’s Disease Research Center’s e-newsletter.

Enjoy Dementia Matters? Consider making a gift to the Dementia Matters fund through the UW Initiative to End Alzheimer’s. All donations go toward outreach and production.

Intro: I’m Dr. Nathaniel Chin, and you’re listening to Dementia Matters, a podcast about Alzheimer’s disease. Dementia Matters is a production of the Wisconsin Alzheimer's Disease Research Center. Our goal is to educate listeners on the latest news in Alzheimer’s disease research and caregiver strategies. Thanks for joining us.

Dr. Nathaniel Chin: Welcome back to Dementia Matters. Today I'm joined by Drs. David Berron and Lindsay Clark. Dr. Berron is a research group leader at the German Center for Neurodegenerative Diseases, known as DZNE, in Magdeburg, Germany, where he leads and heads the Clinical Cognitive Neuroscience group. He is also a scientific co-founder of the startup company Neotiv, which created the phone app we will be talking about today. Dr. Clark is a licensed neuropsychologist and an assistant professor here at the University of Wisconsin in the Division of Geriatrics and Gerontology at the UW School of Medicine and Public Health. Dr. Berron and Dr. Clark worked together on a study published in March 2024 that looked at ways to detect cognitive impairment via a smartphone app, building on recent efforts to incorporate remote memory assessments into research and clinical care. Drs. Berron and Clark, welcome to Dementia Matters.

Dr. Lindsay Clark: Thank you.

Dr. David Berron: Thanks, happy to be here.

Chin: Going forward, of course, we know each other and so we'll call each other by our first names. I do want to start by talking about your publication, which is titled A Remote Digital Memory Composite to Detect Cognitive Impairment in Memory Clinic Samples in Unsupervised Settings Using Mobile Devices. This is based on your study where you looked at thinking tests completed by 199 people of various cognitive abilities on a smartphone. Before we talk about your findings, I'm hoping you, David, can share for our listeners why digital testing is important in the field and why you chose a smartphone app specifically.

Berron: Yes, thanks, happy to do so. I think there are several advantages of smartphone-based testing, or digital testing overall. When it comes to smartphones specifically, they are scalable and more accessible because almost everyone has a smartphone nowadays. As you can imagine, that became even more true during the coronavirus pandemic. On the other hand, when we think about the one-shot single tests that we do, they only give us a snapshot in time when it comes to cognition, but our cognitive performance actually fluctuates over time. Smartphone-based assessments give us the possibility of repeated testing at different times on different days, which might give us a more representative picture of our cognitive performance. That also comes with higher ecological validity, because these tests can be done outside of the testing environment or the hospital, at our comfort and in our own home environment. Finally, maybe another important aspect is that the digital metrics you can record with a smartphone, such as response times or movement over the screen, allow for more fine-grained outcomes that might be particularly interesting for detecting very, very subtle cognitive impairment in early stages of the disease.

Chin: Yeah, I really appreciate your answer as someone who doesn't actually administer cognitive testing in my memory clinic but relies on it so heavily in understanding what someone might be experiencing and how they're performing. Two things I take away from what you said are: one, it's being done in their environment. A memory clinic environment is not their everyday life. Then, two, there are factors that we can't really assess in everyday environments that you're talking about, that a phone might be able to pick up on, like response time and such. I'll be asking you, Dr. Clark or Lindsay, about that in a few minutes because I think that's such an important piece to digital testing, which is still relatively foreign to a lot of our memory clinics. Before we get to that, though, how does this app work, David? What did you have the participants do, and what exactly did it test for?

Berron: Yeah, so the app overall does memory tests. It delivers three different visual memory tests. That's important to stress: they're not verbal memory tests; you actually see pictures and need to memorize them. The tests have been developed to target different memory networks that are affected at different stages of Alzheimer's disease. Overall, participants get push notifications, so the phone actually sends you a message. Then you have to do an active task. That's also important to stress: it's not passively recording in the background; it tells you when you are on task, then you do the cognitive task, and you also know when it's done. In this particular study, participants did a test approximately every two weeks for an entire year. Just to quickly describe the tests in a bit more detail: in one, participants need to distinguish very similar images based only on their memory. For example, they see a certain sofa and later on they see an almost identical sofa where we've changed the shape a tiny bit. Then they need to tell us: is it exactly the same, or is there a change? Talking about specific metrics, they then have to touch the screen at exactly the location where they think the sofa looks different. In a slightly different test, we show them a room with two objects at specific locations. They need to memorize that and come up with a little story to link everything into one coherent memory. Then, after a delay, we cue them with a specific location in the room and ask which specific object was there. As you hear, these are quite challenging tests, but they're also fun. That's basically how they work. One more detail: overall, they take 15 to 20 minutes and usually come in two halves. In our particular study we did this every two weeks, so it's a time commitment of 15 to 20 minutes every two weeks.

Chin: In essence you've gamified cognitive testing, which is usually not thought to be a very fun thing to do.

Berron: Yeah, so we certainly tried. Not overdoing it, though—it’s still a cognitive test. Overall, looking at the usability questionnaires and what people tell us, there’s obviously a range but overall people enjoy doing the tests digitally.

Chin: And just so our listeners know that this wasn’t something that was done overnight, how long did it take to actually build this application and the thought that goes into this type of testing?

Berron: Yeah, it actually took a long time. It all started with cognitive neuroscience research, mostly studies using functional neuroimaging to understand how different memory networks are involved in specific memory functions. Then, at some point, there was the idea that if we really want to bring this to scale, it needs to go into a smartphone app. I would say overall, that's at least a decade. When it comes to the specific technical implementation, that's roughly the last seven years.

Chin: Yeah, that’s incredible. Thank you for giving those numbers. Now, Lindsay, you must have been somewhat skeptical about this from the beginning as a neuropsychologist in a memory clinic. How accurate can an unsupervised smartphone app be in detecting cognitive impairment? Then, with that answer, what are the advantages and disadvantages of being able to do this clinically?

Clark: Yeah, I was a little bit skeptical at the beginning because it can be difficult to collect quality data remotely when you're not in the room. As you know, this is really different from what we do in a memory clinic. In a memory clinic, we're in a room with the person. We're administering standardized tests. We're watching them take these tests, seeing how they're doing it, and we've limited any distractions. It's a completely quiet environment. With the smartphone app, on the other hand, people are doing these assessments wherever they happen to be when they get the push notification, with whatever happens to be going on in their life. It's a really different type of environment and scenario for testing cognitive function. As far as how accurate an unsupervised app can be, we've really only started to conduct studies to answer this question. Many of the early studies looked at feasibility, meaning: can people actually download the application and perform the tests? Do they respond when they get the push notifications? Do people enjoy doing the tests? Are they satisfied? A lot of development went into making sure that an application was usable and user-friendly and that people could actually sit down and do it. Over the last few years, there have been studies comparing the test performance data we get from the smartphone application to standard in-clinic neuropsychological tests. We do see correlations between how people perform on these smartphone applications and the memory tests we do in person in the clinic, so we think they can accurately measure what we're trying to measure as far as how well a person's memory is functioning. I'll let David go into a little more detail about how well these applications can detect cognitive impairment in our study. We do feel like we are able to pick up cognitive impairment, but I'll let him talk more about the details of that particular finding.
You asked about the advantages and disadvantages to doing this or being able to do this clinically. I think there are a number of advantages that David already mentioned earlier. As far as in the clinic, I think one of the advantages is potentially providing more accessible cognitive screening tools for clinicians to be able to have in their toolbox. Smartphone applications provide an opportunity for patients or people to be able to perform cognitive testing outside of the clinic as well—so potentially giving primary care or other providers a tool that they can have people use in their home. It doesn’t take up extra time in the clinic. Patients can do these and potentially bring results back to their clinical providers. Just being able to provide more options for cognitive screening, I think, would be a really wonderful opportunity to use digital technology for. We know that there are a lot of barriers to getting cognitive screening or cognitive testing in healthcare, and so that's one potential advantage clinically for these types of tools. You also mentioned in the memory clinic, it’s kind of not exactly what we do in our home environments. People can be anxious about having memory testing when they're in the clinic. There's kind of that white coat effect of being observed by someone when you're doing testing. Another advantage of having a tool like this is that people could do testing in an environment where they feel more comfortable and not having to be observed by someone while they're doing these tests. Lastly, as already mentioned, when we do these memory tests clinically, we're doing them all in one single shot. We have people come into the clinic, and they're doing all the tests in one maybe two-hour period. 
If someone's having a bad day, is really tired, in pain, or having a lot of stress, that can affect their cognitive test performance and make it more challenging to detect cognitive impairment, whereas these kinds of digital tools can be done in a briefer period over maybe a week or two weeks so we can get a more representative sample of a person's actual cognitive function with the idea of reducing some of the noise related to other factors that can affect cognitive testing. As far as disadvantages, I think that the flip side of not having people in a structured environment is that there's a lot of potential distractions going on around them that we can't control for. When we do memory tests, we show people some images and then we need them to come back and do them on their smartphone, maybe 30 minutes later or an hour later. People might get sidetracked by things going on in their life, and then we might not have that data. There's some issue with that loss of control over the environment. People also have different devices. Sometimes with certain tasks, this can affect how they see the stimuli or, if it’s measuring response time, how they're responding. Those are, again, other factors that can create some noise in data collection. I think lastly, a disadvantage is the digital divide, meaning that there are a number of people in our country who don't have access to devices—or maybe they have access to devices but don’t have access to the internet or a cell phone tower nearby. That can limit those individuals in being able to access a smartphone application for this kind of testing.

Chin: Well, thank you, Lindsay, because you set this up very nicely. There are certainly pros to this. There are certainly some limitations that need to be addressed. Cognitive testing is incredibly important in the evaluation of someone who is experiencing thinking changes. Any way that we can improve our access, as well as the testing itself, seems like a worthwhile endeavor. What I'm learning from the two of you, though, is a lot goes into this before it hits the clinic, which will be one of my questions at the end. A lot has to go into feasibility, usability, accuracy, and validity, so this is not something that can happen in a year. Frankly, it's taking over a decade to get to this part. I mention this because now I want to know what you guys found and what this article talks about. So David, I'm going to start with you. If you can share with our audience, what did you find in this study? Then in that answer, if you could talk about something that comes to my mind—how do you know if people aren’t cheating? Or how do you know if people aren’t distracted, as Lindsay has said, that they didn’t leave and come back to it? Because it’s unsupervised, what are other things that you consider other than did you detect impairment?

Berron: Overall, I think we can summarize what we found in two main points. On the one hand, and this adds exactly to what Lindsay already said, we looked at the construct validity. We were interested in whether what the app or the smartphone measures really compares to what we measure in a face-to-face standardized setting in the memory clinic. There we used the Preclinical Alzheimer's Cognitive Composite. That's a composite score built from several cognitive tests covering memory, but also covering executive functioning, for example. What we found is that it compares quite well, so there's a high correlation between what the app measures and what we measure in the clinic. As Lindsay also already touched upon, that is the impression you get from the entire field at the moment. If we were to travel back five years, I think the big question was, do these novel tools actually measure anything meaningful? Now, if you look across studies and across tools, I think we can take from that that it's possible to get valid, reliable data and that it's also feasible in the target population, so in a group of older adults and even patients. I think that was the first main point. Then, critically, we were interested in whether we can use the smartphone-derived score to classify participants into those who show cognitive impairment versus those who don't. Here we ran this analysis and found that the score is very good at discriminating these two groups. I think those are the main takeaways. There are challenges too, and you touched upon two of them: cheating and distractions. How do we get at those? First, for both scenarios, we try to encourage people not to do either.
On the one hand, we tell people: please go to a quiet environment, put on your glasses, make sure there will be no distractions, and only then start the cognitive assessment. For cheating, and this is also what other groups out there are doing, we remind people that this is for research, that it's important we get accurate scores, and that it's not necessary to cheat. I think overall, there's not a strong incentive to cheat in these kinds of studies. Yes, I guess the uncomfortable answer is that we cannot be completely sure that people don't. Regarding cheating, one thing that's maybe important to stress is that in a visual memory test it's really, really difficult to cheat, because it's not a verbal list of words that you can just note down. You actually see images, later on you see similar images, and you don't even know what will have changed in the following image, right? So I think cheating is pretty, pretty unlikely. When it comes to distractions, we rely on self-reports. Following each test, we ask people how well they could concentrate and whether they were distracted. To go to the specific study we're talking about, people rated their overall concentration at four out of five, so they could concentrate well. When we asked them, for every session they did, how often they were distracted, 8% of sessions were characterized by a distraction. So it's not nothing, but it has a rather minor influence. For now, we just have to live with that result. In the future, there could even be the possibility of flagging sessions where there were distractions and then, for example, discarding them.

Chin: Lindsay, as a clinical neuropsychologist and researcher, what is the potential impact of having this unsupervised remote cognitive testing on a phone for both memory care as well as research into cognition?

Clark: Yeah, so I think in research, the potential impact is the ability to study outcomes that we haven't been able to study easily in the past. For example, we can try to measure really early cognitive changes associated with Alzheimer's disease. When we try to do this in the research lab now, we have participants do paper-and-pencil tests every year or every two years. Then we look to see if, over a decade, people are declining on these cognitive measures over time, if they have brain changes associated with Alzheimer's disease for example. By being able to measure cognitive function more frequently, we can actually look over six months or over a year. Do we see these cognitive changes? This might help us be able to detect these changes earlier and intervene earlier in the future. I think speaking of intervention, there’s a number of clinical trials going on, looking at “Are new treatments for Alzheimer's disease effective?” In order to answer that question, they have to have a cognitive or other meaningful outcome to measure over time. By developing tests where we can better assess earlier cognitive changes in these clinical trials, we might be able to see if there's an improvement in cognitive performance over time sooner than we would–because of a new drug for example–if we were having people do paper-and-pencil tests again every year or every two years. I think it just gives us the potential for having more robust or more sensitive outcomes to these cognitive changes that we really want to be able to measure. Additionally, smartphone applications or unsupervised remote testing applications can help us potentially to expand the sample of people who are enrolled in research. One of the issues that we've struggled with in research is that our samples of people that we're measuring are not representative of the people who are at highest risk for Alzheimer's disease—those from historically marginalized communities, maybe those from rural areas. 
This is something where we could potentially expand access to people to participate in research studies. If we have something they can do in their home, it's accessible, then we can really look at this disease and its process in a more representative way. I think in research, there's a lot of potential for these unsupervised or just generally remote testing applications. Clinically, I think we've talked about the barriers to getting cognitive testing in the clinic. There's often really long wait times to see a specialist. Primary care providers don't have a lot of time in the clinic to be able to fully assess cognitive function and provide a diagnosis. Creating some more tools that would be available for clinicians to be able to access, have their patients do outside of the clinic setting, but still be able to be used for treatment planning or for triage to a memory specialist or for potentially helping with a diagnosis—I think these are potential clinical applications that remote assessment paradigms can help to address.

Chin: So right now, it seems like the biggest impact is going to be in research, and then eventually this will translate to a clinical environment. Does that seem fair?

Clark: Yeah, I think that's fair. I think it is being moved into the clinic. There are research studies about “Would this be effective?” or “Do providers find this useful in the clinic?” I think in research, yes, for sure. Then I also think there's research studying how to implement these tools in the clinic, that I think is really important. I think it is moving into the clinic slowly, but still in the research context.

Chin: But these spaces are blending—that’s helpful. Thank you. That leads me to my question for you, David: what improvements do you think this kind of app and assessment style is gonna require before it’s translated into these large-scale clinical settings?

Berron: Yeah, as we discussed before, I think research in the last few years has really demonstrated that a variety of different tools are already at a stage where they are feasible and reliable and where they give valid results. One thing that I think is missing to really move them into clinical settings is that there's no clear framework yet specifying how novel digital cognitive assessments need to be validated and what criteria they need to fulfill, even beginning with what the gold standard is to compare them against, or what the proper comparison group is. At the moment, I think this makes it difficult to judge the quality of different tools, and it might be one hurdle. On the other hand, as Lindsay said, there are already implementation studies, but it's still not entirely clear how to implement these tools and in what setting. It's even more complicated because that differs across healthcare contexts. Will it be an on-site, short digital cognitive assessment done at the doctor's office, or something done remotely, where people get it prescribed and then continue testing at home before coming back with the results? I think that's still not very clear. There are already studies testing that in specific healthcare systems, globally actually. Maybe to quickly talk about our own data: we did a study in the German healthcare system with primary care physicians, but also specialists in memory clinics. That followed exactly this remote logic, where participants with subjective cognitive decline came to their physician, got the app prescribed, used it at home for repeated testing over a certain period, and then came back to the physician with the results summarized in a physician letter. In that study, we didn't test diagnostic accuracy. We were just interested in the hurdles and challenges of actually bringing this into the healthcare system.
The takeaways were that the doctors found it feasible. They see an added value, more in primary care than in memory clinics, because the need is greater there. I think we need more of these studies to find the optimal implementation, but also, critically, we need diagnostic accuracy studies that are not only done in research cohorts but really done in the real world. I'm pretty sure we'll see more of that in the coming years.

Chin: Well, to end our conversation today, my question for both of you is: what’s the next step? This is sort of a follow-up, David, to what you said, but what is your next step, both of you, in this line of research on digital cognitive testing in a remote, unsupervised environment?

Berron: Lindsay, if I may start, I think we can certainly share that what we've published so far was the cross-sectional data from our study. We have already been working on the longitudinal data, asking whether we can detect not only cognitive impairment but also cognitive change over time. It actually seems that remote digital assessments are able to detect cognitive change in MCI patients. That now opens up interesting use cases for clinical trials, but potentially, at some point, also in healthcare. I think another really interesting line of research is detecting Alzheimer's disease even earlier than the mild cognitive impairment stage, already in the preclinical disease stage. Here, it will be interesting to combine digital cognitive testing with novel blood-based biomarkers to add disease specificity to the cognitive assessments. I think that will be really, really exciting in the coming years.

Clark: Yeah, and I would just add that continuing to validate these paradigms in larger, more geographically and racially diverse populations is needed. We really need to make sure the tests are measuring what we think they're measuring in everyone, and that they're feasible and accessible to everyone. I think we need larger studies, and we also need to better identify the barriers that some people will face in trying to access these kinds of tools. As I mentioned before, not everyone has a smartphone or tablet. Even people who have a tablet or a laptop may not have access to the internet or to a cell phone tower nearby, so they won't necessarily be able to engage, and we need to think about how to fill those gaps. That's one area of future direction for me: continuing to validate these tests in larger populations. Then also thinking about practical questions, for example, how many repeated assessments do we need to get the most reliable estimates of cognitive function? We want to do enough to best sample a person's cognitive function, but we don't want so much that it's burdensome for participants or patients. Some of those practical questions are on my mind. Then I think, as David discussed, there's the question of how we implement this in healthcare systems that vary widely in their workflows and processes across the world: continuing to conduct implementation studies of how to use remote cognitive testing to best add value, provide tools for clinicians, and increase access to cognitive screening and testing for patients.

Chin: Well, with that, I want to thank both of you, David and Lindsay, for coming on this podcast today. For our listeners, that’s Dr. David Berron from the German Center for Neurodegenerative Diseases in Germany and Dr. Lindsay Clark from our own University of Wisconsin. Thank you both for being here.

Clark: Thanks for having us.

Berron: Thanks, Nate.

Outro: Thank you for listening to Dementia Matters. Follow us on Apple Podcasts, Spotify, or wherever you listen, or tell your smart speaker to play the Dementia Matters podcast. Please rate us on your favorite podcast app – it helps other people find our show and lets us know how we are doing. If you enjoy our show and want to support our work, consider making a gift to the Dementia Matters Fund through the UW Initiative To End Alzheimer’s. All donations go toward outreach and production. Donate at the link in the description. Dementia Matters is brought to you by the Wisconsin Alzheimer's Disease Research Center at the University of Wisconsin–Madison. It receives funding from private, university, state, and national sources, including a grant from the National Institute on Aging for Alzheimer's Disease Research Centers. This episode of Dementia Matters was produced by Amy Lambright Murphy and Caoilfhinn Rauwerdink and edited by Eli Gadbury. Our musical jingle is "Cases to Rest" by Blue Dot Sessions. To learn more about the Wisconsin Alzheimer's Disease Research Center, check out our website at adrc.wisc.edu, and follow us on Facebook and Twitter. If you have any questions or comments, email us at dementiamatters@medicine.wisc.edu. Thanks for listening.