Greetings. It's good to be back here again, doing another telebehavioral health lecture for this audience. It's one of my favorite things to do. This is just a great community we have set up. So I'm hoping we have a really fun talk today. We're going to cover a lot of material. And the goal of it, from my perspective, is not to cover everything in detail, but to provide you with some basics, some introduction, some foundation, so you have a general idea of the various telehealth modalities-- and then also introduce some terminology and definitions that you can build on, so that you can better understand the literature and better understand new technologies as they come, and so that you'll be able to incorporate all of these into your toolbox for how you can best help your clients and patients. So we're going to cover mainly clinical video teleconferencing today, but we're also going to introduce some of the other digital modalities, such as texting, web-based tools, sensors, and AI. Because when most people think of telehealth, they think clinical video teleconferencing. But I think we need to move beyond that. We need to include all those other digital modalities. So let's get started. Here are three basic learning goals for today. One, be able to share one common myth related to telemental health. Two, state what the EULA is and why it is important when using apps in clinical care. And three, with regard to sensors and wearables, what is this white space stuff that people talk about? Beyond that, like I said, each of the topics we're going to be covering could be a whole talk in and of itself. But the point of today is just to give you some basics, some terminology, some definitions, so you have a foundation upon which to better understand these technologies, and so you can start thinking about how you might pick one or another and incorporate them into your own tailored telebehavioral health treatment plans. So where did it all begin? 
Telehealth, or clinical video teleconferencing, has actually been around a lot longer than you might think. I love this picture. This is actually 1959, and I believe it's Kansas. And what you're seeing here is a bunch of supervisees having a group session with their supervisor. And check out that cat on the TV screen. Doesn't he look like every stereotypical psychiatrist from the 1950s that you've ever heard about? So this is 1959, I believe in Kansas. Telehealth has been around for a long time. It wasn't, however, until about the 1970s that you started to see federal funding coming in for telemedicine projects. It was really clunky. The technology was not there yet. And it wasn't until the 1980s, with Apple and Microsoft and all that, that you started to see the introduction and widespread use of computers in everybody's day-to-day life. Internationally, interestingly, Australia was a leader in telehealth. And why might that be? Well, think about Australia. It's got major urban centers on the coast and vast rural areas that it needs to provide care for. So Australia was well set up to really embrace telemedicine from the early stages. For the longest time, progress in telehealth was held back by the limits of the technology. But in the '80s and '90s, you saw a pivot whereby the technologies were coming on so fast that technology wasn't holding telemedicine up anymore. It was actually policy: rules, billing, interstate licensing issues, things like that. And that started to become a real drag on the progression and the future of telehealth. That's where we were, and then came COVID, which blew everything open with all the waivers, and you could provide care just about anywhere. And obviously, COVID was a terrible thing. But in terms of releasing the policy constraints on the development of telehealth, it was actually a good thing. So now we're in this highly dynamic phase where-- hopefully-- COVID is under control. 
But you're seeing people and companies and corporations trying to grapple with coming out of the COVID age, coming out of those waivers, and trying to develop these new hybrid models of care. And this is just highly dynamic. If anybody tells you they know exactly how to do hybrid care, they're lying. At least today, they're lying. People are still trying to figure this stuff out. So we're in this really interesting period of time right now, where we're trying to work all of this out in this hybrid period. Let's shift gears and talk about some myths and facts, which is my nice way of saying let's take a look at the core literature. If you're going to be a provider, you've got to know your evidence base for whatever mode of treatment you're providing. So let's go over some of the basic foundational landmark papers that support telehealth as a recommended form of treatment. You aren't going to provide a non-evidence-based treatment anymore. Those days are long gone. So if you're going to be providing telehealth care, you have to know what some of the core literature base is. When I was first starting with all this digital health and digital implementation stuff, trying to get clinics and providers to adopt it, this is what I always heard. Oh, Brad. Dr. Felker, no, you can't make as good a diagnosis over clinical video teleconferencing as you can in person. Well, the literature would suggest otherwise. And here are two core papers that would suggest that you can diagnose over clinical video teleconferencing just as well as you can in person. Then I would hear, oh, no, the treatments aren't as good over clinical video. You've got to have in-person treatment. Well, let's look at the literature again. The literature would say you can provide just as good care via clinical video teleconferencing as you can in person. Now if you're listening to me very carefully, you'll notice I'm picking my words carefully. You notice I'm saying as good as. 
What's important to understand is that most of these studies and papers never sought to show that clinical video teleconferencing is better than in-person care. These all used what are known as non-inferiority methodologies. They set out to ask, is it as good as? Let's dig down and pick one of these papers. This is really considered a landmark paper, the Ruskin paper. This was one of the first very well-designed, well-executed randomized trials of treating patients with depression-- clearly depressed-- with good measurement-based care. They randomized these patients into in-person versus clinical video teleconferencing. And Ruskin et al. were able to show that they provided just as good care via clinical video teleconferencing as in person. So this is considered one of the landmark papers, and I think it's important for you to know where this foundation comes from. What about some other myths and facts? I get this one all the time, particularly at the VA, where we have an aging population. Oh, the elderly don't like it, Brad. No, they don't. They're not going to want it. Well, they actually like it a lot. And why is that? They don't have to schlep up and down the highways anymore to come to the medical center. They don't have to take their aching back and hike across a parking lot and sit in a waiting room, all that for a 30-minute appointment. They can get their appointment, get it done, and they're done for the day and can move on. They also appreciate the access to multiple providers. You can get a whole network of specialists together. And the technology has gotten very user friendly. Most of our elderly population can easily be taught how to use this. We have a whole talk on the digital divide that I recommend you go watch, too. And they're all using it to FaceTime with their kids and grandkids. So the elderly like it a lot more than you might think. And here are some papers to support that. Our patients and providers aren't satisfied with it, Dr. 
Felker. No, they much prefer in-person care. Well, some do, but not everybody. A lot of patients prefer this eye-to-eye contact right there. They're not working with a provider who's typing away while asking them questions like, how's that blood pressure? They feel much more connected over video. They feel more comfortable in a safe spot. They'll report less of that white-coat anxiety. They appreciate the better access to specialists we just talked about. We talked about the travel time. They feel more comfortable. The power relationship has been more equalized, and they see this as a safe space. Another example-- once again, I work at the VA. Let's say I'm working with a veteran who has been traumatized by military sexual trauma. It may be a little triggering for that veteran to have to go sit in a busy waiting room full of other people, whom they might perceive as potential perpetrators, people who could harm them. So people find this a safe, comfortable place, and they appreciate all these other things. So patients and providers are very satisfied. Providers feel safer, too, particularly if they're dealing with patients who are sick or have COVID or something like that. So that's another myth we hear about. And here are some papers that support that. But this is one I got all the time, early on: that only the most stable people can be treated with clinical video teleconferencing. And initially, the only people we were allowed to see were rock stable. I mean, quite frankly, I think their mental health was better than mine. But anyway, our teams decided to start pushing those barriers. We put in a bunch of infrastructure. We put in a bunch of safety planning and a bunch of these factors, and we started pushing the envelope. And we found we could see more and more complicated mental health cases and improve access to care using telehealth. So it's important to think about that. 
And we have whole other talks about setting up that professional environment and setting up the safety planning. Remember, clinical video teleconferencing is just a modality of care. You need to know how to use it appropriately and where to use it appropriately. You need to think about what type of people you're going to be treating, what their diagnoses are, how stable or unstable they are, and how much infrastructure you can build in around that to do it safely and effectively. Video teleconferencing is just a modality of care. It's not the be-all and end-all. It's up to you to design that professional encounter and make sure you have all the safety planning. Once again, we have lots of other talks in this series that go over that in detail. And here's a paper we actually wrote on this topic. Another one: ethically, it's too risky compared to in-person care. Well, not if you, once again, do it correctly. You need to ask, are you providing ethical care or not? And this is important to think about. Can you create that professional environment where you can provide competent, safe care? Once again, this is a modality of care, so it's like any other treatment modality. The patient you're working with has to be able to provide informed consent. They have to understand this is a recommended form of treatment, understand the risks and the benefits, know that they can say no-- all the usual informed consent things. You need to be able to ensure continuity of care and health equity, and you have to be using HIPAA-compliant, secure software. So if you do all those things, yes, you can provide highly ethical care over clinical video teleconferencing. Before we shift gears here, I just want to quickly review clinical video teleconferencing. We talked a little bit about the history. We talked about its evolution, how it's developed, and how we're in this very dynamic phase now. 
And then we talked about some of the core literature I think you should be familiar with, in terms of understanding that evidence base, which is really important. Then we talked about some considerations around this being a recommended form of treatment-- what do you need to have in place to be successful and provide ethical care? So now we're going to shift gears and get out of clinical video teleconferencing, and we're going to move into the other digital modalities. This is what I'm telling you: when you think about telehealth now, you've got to really think about these other digital modalities and think of them all as tools in your toolbox that you can use. As I noted earlier, each of these topics could be its own lecture in and of itself, but the purpose of today's presentation is just to introduce them, get you thinking about them, and get you understanding some of the definitions and the terminology, so that you, in your reading, are well prepared to start understanding this whole other language. So let's get started. Web-based tools-- these are proliferating like crazy. They're being designed for patients. They're being designed for providers. They're being designed for both. So how do you know if it's a good one? Well, that's a real problem right now. They're highly variable, and it's hard to tell which is which. At this point, I don't have a really good reference for you on how to grade these things. Be a critical shopper. If you want to incorporate web-based tools into your care, look at them and really understand what's going on with them. Currently, they're being used mainly as screening tools, for education, as self-help tools, things like that. Here are links to two sites that I like and find useful. And most of the literature would say that these are best used when they're provider-facilitated. Sure, you can send a patient to a website. They may get some useful information out of it. 
But they're likely to get a whole lot more out of it if you're working with them, or you have a colleague on your team working with them. For example, for a long time here at our VA, I worked in integrated care. That's actually where I got my start. And we had a PTSD group in primary care. It was an educational group, and it was set up and run by a nurse. And she didn't have time to create all the content for an educational group every week. So she went to afterdeployment.org, where they have whole modules on different symptoms of PTSD. And she would pull up one module-- say, for sleep or insomnia or irritability, whatever-- show that module, and that was the educational tool for that group session each time. So that's an example of a facilitated use of a website. So be a choosy shopper when you go to one of these things and try to figure out: is this what you want to use? Can you trust it? Is it safe? And so forth. And then if you're going to use it, think about ways that it can be facilitated. Apps-- there's an app for that. I mean, apps are everywhere these days. Now unlike the websites, there are some places you can go to get some feedback on judging whether it's a quality app or not. The APA has one. The AHRQ has one. And the VA App Store-- all the apps on there are available for public use and have all been well validated and measured. So there are ways now that you can go and figure out: is this a quality app? Has it been well validated for clinical care or not? The next thing to know is that, once again, it's a recommended form of treatment, like a medication. You should do informed consent with it. And you've got to see, does the patient really want to use an app? If not, they're never going to do it. So you spend some time getting to know them about that, getting to know what platform they're using, to see if that app is compatible or not, and to see if they know how to download the app. 
If the two of you-- you and your client or patient-- have found one that you think would be useful and it's well validated, can they even download it? You may want to set up a session, or have your digital navigator or somebody with those skill sets help download and get that app set up for them. This was one of your learning goals: make sure you and the client or patient understand the End User License Agreement, better known in the field as the EULA. What is the EULA? You know what the EULA is. It's when you're picking an app and you scroll down to the bottom, and it's that micro font with tons of information, and you can just click I agree. We can't do that now. This is a recommended form of treatment. You, the provider, the one recommending the app-- you need to read through that EULA in detail. That's critically important, because this is where you're going to learn about data tracking, privacy issues, potential charges, and how their information or data could be used or not. You need to understand all that as part of your informed consent, the risks and benefits, and be able to choose whether you think that app is appropriate or not, and then be able to relay that information to your patient or client to see if everyone agrees. So you can't just click agree on that micro font at the end of the app anymore. You, the provider recommending this app, need to read it and understand it. And we actually wrote a paper on this as well, kind of an introduction to apps 101. And there's the reference for that. Texting-- that's everywhere these days. Text me. This uses the Short Message Service, SMS, and can be used in a variety of formats these days. Texting and apps are often combined now in treatment recommendations and plans. Texting is basically similar to the web-based programs, but it's best used for sending reminders, information, supportive messages, promoting different things. 
You're probably already getting texts from your providers, reminding you of appointments coming up and so forth. And here's a nice reference on texting in healthcare. One of the benefits of texting is that it extends something known as the EMA. What is that? The Ecological Momentary Assessment. That means you're able to understand what's going on with that individual outside of your regular sessions-- you're understanding what's going on in their world a little better now. Texting has been shown to increase patient compliance and patients' sense of feeling connected with their providers. It can encourage treatment in treatment-resistant populations and improve that communication and connectivity. The downside, however, is that you're now intruding into that EMA, intruding into someone's personal space, whether they want it or not. And then there are confidentiality issues, too. So you have to be careful about how that information is being sent. Is it in a secure format, with secure software, and so forth? All these things have to be thought about ahead of time before you just jump in and start using texting or apps or web-based tools. Wearables and sensors-- this is an area that's really exploding around the EMA. And these are coming on fast and furious. If you've got that Apple Watch on, it's monitoring you. Think about all the different ways you're being monitored. I'm being monitored all the time here just with my phone. We introduced the EMA a moment ago, but this really extends that concept of self-monitoring to emphasize real-world, real-time data capture. You're seeing a lot of this being used now to monitor health, monitor behavior, change lifestyles, and so forth. These sensors and wearables are great at alerting, communicating, giving feedback, detecting change, monitoring symptoms-- providing information on folks 24/7. And what's interesting now is that it can be provided, like I said, 24/7, longitudinally. 
And that information just becomes this tsunami of information coming in. So how are you ever going to monitor all that? I'm thinking that AI is going to be the key to monitoring all of that data as it starts to pour in. And it will allow you, once you've organized all that data-- and we're going to talk about AI in a moment-- to start getting help with your clinical decision support. Speaking of terminology, we're going to really start getting into terminology and definitions now as we move into sensors, wearables, and AI. So as you're doing reading in this area, you might come across digital phenotyping. What is that? Well, that's taking the data from whatever device is doing the monitoring and using it to describe that individual. Phenotyping is describing what that clinical picture looks like. And you're now able to do it on a moment-by-moment basis. That's going to really extend that EMA. You're really starting to understand what's going on in their world 24/7, and you can really describe it. So that's digital phenotyping. Different wearables and sensors do different things at this point-- it might all change tomorrow. But at this point in time, your wrist wearables-- think your Apple Watch and things like that-- collect physiologic data. Think sleep-wake, heart rate, skin conductance, things like that. Your mobile cellphones track more of your social interactions: where you are, where you're going, how much screen time you have, how connected you are with different people, and so forth. And these phones are now routinely being used to monitor stress and track depression, anxiety, things like that. And we'll look at some papers here in just a moment. 
So you can now see how you can start combining these sensors-- your watch and your phone-- to really start getting this interesting, fairly thorough digital phenotype picture of an individual. Which brings in this idea of the white space, which may have been one of your learning objectives. When you think about the history of typical healthcare, you go to see your provider. They take a history, maybe do an exam. You discuss what it all means, treatment is recommended, you go your separate ways, and you come back to follow up. You don't know what's happening between sessions. It's very episodic. Now, with these sensors and wearables and the other modalities we're talking about today, you can monitor someone between those appointments. And that space between appointments is now known as the white space. The terminology may change tomorrow, but if you see the white space terminology, that's what's being referred to: tracking and monitoring an individual between your episodic clinical appointments. So you can really get this new clinical picture, or digital phenotype, of an individual by combining all of this data. I'm a visual learner. So what would that look like in the real world? Well, this is what's being done now. Your sensors are picking up the things we just talked about-- location, movement. How much scrolling are you doing on your phone? What apps are you using? Are you up all night, or are you active during the day? Who are you communicating with? And so forth-- all these sensors. And then, using the AI programs we'll be talking about in just a moment, it all goes through some software. And that software starts collating, on a low level, all of those variables and organizing them into different groups-- location, movement, bed-wake time, social interaction. And then that gets fed into a higher level of behavioral markers. 
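To make that pipeline a little more concrete, here's a minimal sketch in Python of the first two stages-- collating raw sensor variables into low-level feature groups, then rolling them up into crude behavioral-marker flags. Every variable name and threshold here is invented purely for illustration; real systems learn these groupings and weightings from data rather than hard-coding them.

```python
from collections import defaultdict

# Hypothetical raw sensor readings collected passively across one day,
# as (source, variable, value) tuples -- names are illustrative only.
readings = [
    ("watch", "heart_rate", 72),
    ("watch", "sleep_hours", 5.1),
    ("phone", "screen_minutes", 310),
    ("phone", "gps_km_traveled", 0.4),
    ("phone", "texts_sent", 2),
]

# Stage 1: collate raw variables into low-level feature groups
# (movement, sleep-wake, social interaction, physiology).
groups = {
    "movement": ["gps_km_traveled"],
    "sleep_wake": ["sleep_hours"],
    "social": ["texts_sent", "screen_minutes"],
    "physiology": ["heart_rate"],
}

features = defaultdict(dict)
for source, variable, value in readings:
    for group, members in groups.items():
        if variable in members:
            features[group][variable] = value

# Stage 2: roll the feature groups up into crude "behavioral marker"
# flags -- these cutoffs are made up for the sketch.
markers = {
    "low_mobility": features["movement"]["gps_km_traveled"] < 1.0,
    "short_sleep": features["sleep_wake"]["sleep_hours"] < 6.0,
    "social_withdrawal": features["social"]["texts_sent"] < 5,
}
print(markers)
```

In a real pipeline these marker flags would then be weighted and combined at the next layer, as the diagram describes, rather than tested against fixed cutoffs.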
These behavioral markers are then weighted in certain ways so they can start mapping onto the kinds of behavioral symptoms we understand in behavioral health, which can then be correlated into a clinical state-- making diagnoses, tracking changing symptoms, things like that. So this is a nice review paper. I really like this diagram because, as a visual learner myself, all that stuff I've been talking about can start to feel a little overwhelming. So how does it all get organized? And like I said before, you are potentially risking a tsunami of data-- just massive amounts of data. If you're tracking one of your clients or patients 24/7, think how much data they're generating. Now multiply that by your panel. Now multiply that by the clinic. How in the world is anyone ever going to monitor all that data? To my way of thinking, it's going to have to be AI, or artificial intelligence, because otherwise there's just going to be no way that you can find high-risk or low-risk people, or use this information in a way that effectively impacts a behavioral health treatment plan. So let's look at some landmark papers that support this. One was by Dr. Ben-Zeev, here with us at the University of Washington. He was one of the first to look at combining smartphones and sensors to monitor behavioral health-- movement, activities-- and to tie them to measurement-based care, looking at ratings of stress and depression. And they were able to show this association over time, in terms of how these all relate to these outcome measures. Great paper. The SNAPSHOT study-- this was one that took it another step up. They used machine learning, ML, which we'll talk about in a moment-- it's a type of AI-- to look at both objective physiological and behavioral measures collected using sensors and mobile phones to detect stress and poor health. 
And they were able to identify different predictors of stress and mental health classified from this data-- skin conductivity, temperature, behavioral activity, naps, mobility, screen time. And they were able to tie all this together with stress into a mental health classification. They concluded that physiological data-- phone use, mobility-- were really good predictors in distinguishing self-reported high- or low-stress states, as measured against well-validated outcome measures such as the Myers-Briggs or the SF-12, or any of the other well-validated measures that you're familiar with. And as a result, they were able to actually predict and help modify these behaviors. They would recognize these behaviors, be able to work with the client, and actually help them modify these behaviors over time. So the SNAPSHOT study is considered another big landmark paper in this domain. MindSet-- this is an interesting one. This was designed and implemented by researchers at the VA. They looked at patients with PTSD and set them up with a watch-type monitor that would measure skin conductance, heart rate, things like that-- physiologic measures. They also taught them, through therapy, how to calm down and relax if they were triggered by their PTSD. So these physiological devices were able to pick up when a veteran was getting triggered. And before it blew up into a full anxiety or panic attack, the device would relay these changes in physiology to the veteran, which would cue them to use the coping skills they had learned to short-circuit these triggering events. There's a reference there for you, too. [SIGHS] If you're not exhausted by now, you should be. But anyway, once again, this is just an introduction-- just a sort of amuse-bouche, shall we say-- of all the different digital modalities. Now we're going to shift gears and finish up by talking about artificial intelligence. 
This is moving super fast right now. So once again, the goal is not to gain mastery of this, but just to introduce some history and some terminology, so that you'll have a better idea of where it came from and be able to read, keep up, and stay knowledgeable going forward as this really fast-moving field continues to develop. I'll share with you some references. Hopefully, they're useful. They'll probably be outdated maybe before I even finish this talk-- I don't know. But anyway, it's an important area we now need to know something about. So AI has actually been around for a long time, too. It was officially first named by Dr. John McCarthy, with his colleagues listed there, at the Dartmouth Conference in 1956. And it was really based on a lot of the work that Alan Turing did. You may have heard of the Turing test. Alan Turing was a famous mathematician during World War II, and was very involved in breaking the German Enigma code using computers and such. And he came up with this question of, can a computer think or not? And that was the Turing test: could you tell whether you're talking to a person or talking to a computer? So there's a little bit of the history of AI. Then you started to see different forms of AI coming out. The first stage, looking back, is what people would call weak AI. And that was technology designed to just complete a specific function. Give it a job to do and it'll do it. Speech processing would be an example of that. Well, if you have weak AI, you're going to have strong AI. And that's technology where-- now you're getting into this Turing test area-- it may be difficult to tell whether the results are coming from a human or coming from a machine. That would be strong AI. Now, we referenced ML a little while ago, and that's machine learning. What does that mean? 
Machine learning, or ML, is when you have a computer that's designed to learn on its own-- it's programmed to learn, but not programmed with all the answers. So it's going to be learning over time and becoming more complex and more detailed. Examples of ML would be statistical learning models, neural networks (which we'll talk about in a moment), genetic algorithms, data mining, image recognition, natural language processing-- think IBM's Watson, things like that. Then you have supervised machine learning. What does that mean? Well, that means you, as the human, are going to let the computer learn on its own, but you want to tweak the rules a little. So you're going to pre-label or pre-weight some of the variables going into that process, so that certain variables carry a greater weight than others. That's supervised machine learning, and it will help predict different things. Well, if you have supervised machine learning, guess what-- you're going to have unsupervised machine learning. No labels. The algorithms are all generated by the computer now, and it will start sorting the data, as it learns, into various groups and patterns. And then you have something known as machine perception. That's the ability of the computer to recognize images, sounds, touch, smell when it's interacting with different humans. So think Google Care, facial recognition, things like that. So here are just some basic terminologies to start to get your head wrapped around AI. Let's talk about neural networks now, because that's the next step in the development of complicated AI. Programmers started thinking about the human brain and the way the neural networks in the brain work. And they wanted to simulate that same style of learning and interacting. So they developed neural networks and programmed the computer in those ways. 
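Before we go deeper, here's a toy sketch in Python of that supervised versus unsupervised distinction. The numbers (a made-up "daily screen-time hours" feature) and labels are invented for illustration: the supervised version learns from human-provided labels, while the unsupervised version finds the two groups on its own with a simple two-means loop.

```python
# Made-up 1-D feature: daily screen-time hours for six people.
data = [1.0, 1.5, 2.0, 8.0, 9.0, 9.5]

# Supervised: a human pre-labels the examples, and the program
# learns a rule (here, one centroid per labeled class) from them.
labels = ["low", "low", "low", "high", "high", "high"]

def train_centroids(xs, ys):
    # mean of each labeled class
    cents = {}
    for label in set(ys):
        vals = [x for x, y in zip(xs, ys) if y == label]
        cents[label] = sum(vals) / len(vals)
    return cents

centroids = train_centroids(data, labels)

def predict(x):
    # assign to whichever labeled-class centroid is nearest
    return min(centroids, key=lambda label: abs(x - centroids[label]))

print(predict(1.2))  # -> low

# Unsupervised: no labels at all -- a simple 2-means loop
# discovers the same two groups from the data by itself.
a, b = min(data), max(data)
for _ in range(10):
    ga = [x for x in data if abs(x - a) <= abs(x - b)]
    gb = [x for x in data if abs(x - a) > abs(x - b)]
    a, b = sum(ga) / len(ga), sum(gb) / len(gb)

print(sorted([round(a, 2), round(b, 2)]))  # -> [1.5, 8.83]
```

The point of the contrast: both end up separating low from high screen time, but only the supervised version needed a human to say which was which.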
And that led to the development of deep neural networks. This is all used to do different types of learning. And it's hierarchical, because you have different layers of these neural networks that start to get stacked up on each other. And that can lead to some very abstract processing. Think about the way a human processes visual information-- that sort of layered neural network. That's what the programs are aiming for: developing the same type of deep thinking that the brain does. The way they do that is that the different neural networks, or layers, are made up of nodes that combine the data with different coefficients-- think about the pre-labeling we were just talking about-- to amplify or dampen the signals they're looking for. This type of deep neural network is really good at identifying intricate structures in high-dimensional data, like reading a clinician's note and understanding what the clinician is saying. Now here's some more terminology, because now you're starting to get into what's known as the black box. It's getting so complicated. We, the humans, are putting the data in. And then it's thinking. It's doing machine learning. It's doing this on its own, on multiple nodes, multiple neural networks. We don't know what's going on anymore. It's now in this black box, until out comes the algorithm. Just some more terminology to help you understand the differences in your reading. There are some nice papers there for you. Once again, I think I mentioned I'm a visual learner. Well, here's an example of how these neural networks and deep neural networks are designed and how they might work, with the different nodes that pre-label and weight the different variables. So digitized input is brought in, and it proceeds through multiple layers. As I talked about, this is different from earlier AI in that the processing is not fully designed by humans, but depends on the number of layers engaged. Think about the Google Brain. 
That would be an example of that. Many different types of services are being delivered this way now, and you're seeing these computer systems that are exceeding human ability to calculate and draw conclusions. So as I mentioned earlier, if we're going to be using sensors and wearables and apps and collecting data throughout the white space, how are we ever going to be able to figure out who's at risk? Who, in the middle of the night, is becoming suicidal? If we're collecting that data, we are responsible for it. And to my way of thinking, the only answer is going to be AI, just to be able to sort through that vast tsunami of data. Here's another example of how that might work in our field. We've already talked about all that input coming in from sensors and wearables, and how vast amounts of data will start pouring in. It'll go into these deep neural networks, which sort it all through. Different nodes are weighted in different ways, and out will come information that we can use as health providers. So this is how it might work in a health setting. Now once again, the field's moving really fast. But if you've heard my talks before, you know I always ground them in the evidence. So let's look at a few papers that I think are good places for you to start to wrap your head around this complex topic. This paper by Milne-Ives et al., Artificial Intelligence and Machine Learning in Mobile Apps, is a nice scoping review of apps and AI. They looked at different types of AI and machine learning, used for a variety of purposes and aimed at a variety of mental health needs. And this review showed that overall, the studies demonstrated that it could work to support care using apps, but it was very early. This is a 2022 paper-- it's a little old, but it's not that old. And they found in their review that it's early, maybe not ready for prime time, and needs more research-- there's a lot of weakness in the study designs and so forth-- but it's feasible.
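The "sensor data in, risk flag out" pipeline described above can be sketched very crudely. A hand-written scoring rule stands in here for the trained deep network, and the patient names, readings, and threshold are all hypothetical-- the point is only the shape of the workflow: continuous white-space data streams in, gets scored automatically, and only high-scoring cases are surfaced for a human clinician:

```python
# Simulated overnight readings: (patient_id, hours_slept, messages_sent).
# All values are made up for illustration.
overnight_data = [
    ("patient_a", 7.5, 2),
    ("patient_b", 1.0, 40),   # barely slept, unusually high activity
    ("patient_c", 6.0, 5),
]

def risk_score(hours_slept, messages_sent):
    # Stand-in for the model's output: less sleep and more late-night
    # messaging both push the score up (each term capped at 1.0).
    sleep_term = max(0.0, (8.0 - hours_slept) / 8.0)
    activity_term = min(1.0, messages_sent / 30.0)
    return (sleep_term + activity_term) / 2

ALERT_THRESHOLD = 0.6   # hypothetical cutoff for clinician review

flagged = [pid for pid, sleep, msgs in overnight_data
           if risk_score(sleep, msgs) >= ALERT_THRESHOLD]
print(flagged)
```

In practice the scoring function would be a learned model over far richer data, but the triage structure-- automated scoring so that a human only reviews the flagged cases-- is the part the data tsunami makes necessary.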
So that's, I think, the take-home point from that: AI with apps is feasible, but the literature base is not solid yet. That's how I would interpret that paper. Here's another review on AI and mental health. This one looked at how AI could really change how care is being provided, such as early detection of mental disorders, developing personalized treatment plans, and even the introduction of AI-driven virtual therapists. This review found that there were all kinds of issues with the papers-- a lot of ethical challenges, a lot of bias. They did not think that this was ready for prime time at all. It's coming. It's feasible. But this review found that there were significant ethical challenges to AI really being transformative in running mental health care at this point in time. Here is another paper, looking at AI and machine learning for decisional support in mental health settings. And they reported, based upon their review, that there are significant implementation barriers. Once again, the technology is there, but the providers are not ready for it. There are trust issues, ethical issues. People are not ready to implement this. And that's an important thing. It's where I've put most of my career-- into implementation science. And as I like to say-- you may have heard this in other talks-- everything looks good on a whiteboard. Getting it implemented in the field is tough. So this is what they ran up against: there are some real issues related to implementation and developing provider trust. This review concluded that we need more research on all the different stages of development, and on communicating with all the partners, and they really put an emphasis on ethics, trust, and confidence. So the technology may be there, but according to this review, it's not ready for the clinical space yet. It's not trusted by providers or patients.
Another review paper, this one looking at the state of the science in terms of study design for intelligent machines. This is a big review-- 429 manuscripts, 147 reviewed in detail. And once again, they found some [INAUDIBLE] mental health. It's coming, but they found that the evidence had a lot of issues in terms of how well done the studies are: lack of consistency in how the AI was being applied, in how the data-- think about that data tsunami I was talking about earlier-- what those processing pipelines looked like, how the data was being interpreted and utilized. And they concluded there are significant shortcomings, that a lot more work needs to be done to make it more reproducible, supported, and acceptable for routine management in the field. So once again, not quite ready for prime time, but it's coming. What about psychiatrists? Why don't we just ask people like me-- what do you think about AI? Interesting. This paper is already getting a little old, but psychiatrists did not believe that AI was going to be able to replace them. They thought it would be helpful for taking a history, creating some treatment plans, administering tests and validated measures, and things like that, but that it could not reproduce that empathic care. They're not going to be replaced yet by that. So it's good for synthesizing data, but not for that interconnectivity that both psychiatrists and clients and patients value. About half felt that it will significantly change their job, but nobody really thought that it's going to make us obsolete. And psychiatrists had real concerns with lack of privacy, transparency, stigma, dehumanization, loss of empathy, incorrect diagnoses, and burnout-- lots of figuring out what their roles are anymore. [SIGHS] Bet you thought I'd never get to conclusions. So once again, the point of this talk was not to give you mastery of all this. It was an introduction, to form a base you can start to build on. So we covered a lot of material today.
Once again, a lot of it focusing on introductions, definitions, terminologies, things like that. So in conclusion, I think these new digital technologies are here. They're being used already. And as a responsible provider, I don't think you can turn your back on them. I think they're going to become a routine part of care going forward. So it's going to be incumbent upon you to understand the informed consent, the evidence base, how to use these, how to combine them. What's a good one? What's a bad one? Understanding the EULA, understanding the white space, understanding all those ethical concerns if you're going to incorporate these into care, or be able to understand others who are incorporating them into care. Like I said, they're going to be combined in all different kinds of ways. You can start to see how you can use video conferencing and follow up with texting and apps. And different clients or patients will have different collections of these digital modalities. It's not ready for prime time. It's not integrated yet, but it probably will be soon. What I would call the next phase of research is going to be what's known as translational research. That's taking it off the benchtop and using implementation science methodologies to properly evaluate these tools in the field, and get them implemented in a way that they are clinically useful and that patients and providers trust them. The technology is going to continue to outpace us. We started with the history. We're in this new phase now where the technology is being held back by our ability to understand it-- policies, billing, et cetera. Ethical concerns are significant. And I want to pause here for a moment and talk about this. Lots of brilliant people are developing these technologies-- lots of brilliant, super smart, well-meaning coders and programmers, people out of Silicon Valley, all developing these things.
But we, the providers-- we, the clinicians-- need to be at the table to help address the ethical concerns related to this. The programmers, the coders-- they can design these technologies, but they don't know what we do. They don't understand the ethical implications of what they're doing. So I think it's critically important that we-- you and I-- be at the table when these new technologies are being developed and implemented, to address the clinical ramifications and the ethical concerns, and how these new products and modalities can be properly used. And we're going to have to understand that they're here. They're going to be part of routine practice going forward. So that's it. I hope we were able to cover some interesting material today. I know I threw a lot at you, and I hope this is useful as a building block for our other talks. Thank you.