In this episode, Sheppard Mullin partner Jim Gatto, co-chair of its AI Team, joins us to discuss the AI revolution in healthcare, including how it is enhancing and improving the industry while navigating ethical and legal risks.
At the cutting edge of advising on "data as an asset" programming, Sara's practice supports investment in innovation and access-to-care initiatives, including mergers and acquisitions involving crucial, high-stakes and sensitive data; medical and wellness devices; and web-based applications and care.
A partner in the Corporate and Securities Practice Group in Sheppard Mullin's Dallas office and co-chair of its Digital Health Team, Phil Kim has a number of clients in digital health. He has assisted multinational technology companies entering the digital health space with various service and collaboration agreements for their wearable technology, along with global digital health companies bolstering their platform in the behavioral health space. He also assists public medical device, biotechnology, and pharmaceutical companies, as well as the investment banks that serve as underwriters in public securities offerings for those companies.
Phil also assists various healthcare companies on transactional and regulatory matters. He counsels healthcare systems, hospitals, ambulatory surgery centers, physician groups, home health providers, and other healthcare companies on the buy- and sell-side of mergers and acquisitions, joint ventures, and operational matters, which include regulatory, licensure, contractual, and administrative issues. Phil regularly advises clients on matters related to healthcare compliance, including liability exposure, the Stark law, anti-kickback statutes, and HIPAA/HITECH privacy issues. He also provides counsel on state and federal laws, business structuring and formation, employment issues, and matters involving government agencies at both the state and federal levels.
We're pleased to have Jim Gatto here with us today, who is our partner here at Sheppard Mullin. His practice focuses on intellectual property, blockchain, AI, and financial technology, among other emerging technologies and business models. Jim serves, of course, as the co-leader of our artificial intelligence team, co-leader of Sheppard's blockchain and fintech team, and he also leads Sheppard's open source team. Jim also publishes The Legit Ledger, a podcast focused on the latest trends in blockchain. Jim has over 35 years of legal experience focused on all aspects of IP strategy, technology transactions, and tech-related regulatory issues. Thank you so much for joining us here today, Jim.
We're seeing it in the headlines constantly: AI. There are a lot of folks opining on it. But because of your deep background, can you maybe first just start, in the simplest terms, with what AI is and why we're seeing this explosion in healthcare specifically, right now?
It really is using computers, algorithms, and data to try to mimic aspects of human intelligence. And many people debate whether AI really is intelligent or whether it's just as good as how it's programmed, and people are entitled to different opinions on that. But I think if you look historically, AI has been around since the '50s. The term was coined in the '50s. There were various attempts at different ways to approach AI, and one of the earliest approaches was much more of a rules-driven approach. You'd have an expert in a certain subject matter, and they would create a set of rules for computers to follow depending on the circumstances, and it required really trying to map out all the knowledge in that domain. And that really didn't work. I would say that was really more artificial than it was intelligent. Fast-forward to where we are today, and algorithms truly do learn. There are numerous examples of things where AI has done stuff that humans have not done, which shows that it's not just memorizing things, but truly learning. And I think where we are today, it's more intelligent than it is artificial.
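To make that distinction concrete, here is a minimal, hypothetical sketch in Python contrasting the two approaches Jim describes: a 1950s-style hand-coded rule versus a model that infers a similar decision from labeled examples. The vitals thresholds and the tiny synthetic dataset are invented purely for illustration.

```python
# Contrast the two eras described above: a hand-written rule vs. a model
# that learns a similar decision from examples. All data here is synthetic.
from sklearn.tree import DecisionTreeClassifier

def rules_based_flag(temp_f: float, heart_rate: int) -> bool:
    """1950s-style expert system: every rule hand-coded by a domain expert."""
    return temp_f > 100.4 and heart_rate > 100

# Modern approach: give the algorithm labeled examples and let it infer
# the boundary itself -- no human wrote the rule.
X = [[98.6, 70], [99.1, 80], [101.2, 110], [102.5, 120], [100.9, 105], [98.9, 72]]
y = [0, 0, 1, 1, 1, 0]  # 1 = clinician flagged the vitals, 0 = routine
learned = DecisionTreeClassifier().fit(X, y)

print(rules_based_flag(101.0, 112))        # True: matches the hand-coded rule
print(learned.predict([[101.0, 112]])[0])  # 1: same conclusion, learned from data
```

The point of the contrast is that no human wrote the second decision rule; the algorithm derived it from the examples, which is the "learning" Jim is referring to.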
Generative AI is a different animal. Generative AI is often trained on many different types of content. So it can be text, images, video, music, software. It's trained more on expressive material, and the output of generative AI is typically something more expressive as well. So you can output images, reports, analysis, music, video, and all other kinds of creative content. And that's really what generative AI is primarily focused on.
The reason that we kind of had this tipping point in November is that there are really at least three things that need to come together for AI to truly work and to be intelligent as opposed to more artificial: you need a tremendous amount of data, you need a tremendous amount of computing power, and you need good algorithms. Now, we've had the algorithms for a long time. Many of the algorithms that are being used are kind of open-source algorithms at this point. There's innovation happening as well, but they've been around for a long time. It's really the confluence of computing power and data that's enabled AI to take off like it has.
When we think about healthcare, it's everything from healthcare administration on. So much of what goes into running a hospital or other facility is going to be, and is in the process of being, transformed by AI to wring out incredible efficiency. Historically, tools were programmed to look only for what the programmers told them to look for. Now, AI is being used for those purposes, plus identifying relationships that were unknown before. Again, part of the intelligence part of this is finding connections in the data that people previously didn't know about, and that's part of what makes it so powerful and different from what's been done for the last couple of decades. So we're seeing a tremendous amount of use in that area. In fact, there is a Gartner report that predicts that by 2025, generative AI will be used in more than half of all drug discovery and development initiatives.
One of the big problems with AI, and this is why it will never replace professionals like medical providers, is that as good as it is, it's not perfect, and it's often wrong. It often produces what are called hallucinations, or bad results. I don't think we'll get to the point any time in the near future where AI is reliable enough for important things like medical decisions to be made without a human involved in the process. So I think augmentation is a really big factor for those types of jobs. For administration and other jobs like that, there will be replacement. And then, just in general, in the field of diagnostics, AI can find issues, problems, and connections that sometimes humans can't even find.
Yeah, it sounds like despite all the hallucinations and issues that we see with AI, it will, as you mentioned, augment and improve. And as far as impacting quality of care and access to care, it'll only enhance those.
I know Phil and I hear about this all the time: the epidemic of isolation, eldercare, home care, and monitoring within the home, on both the prevention side and the chronic side, where it's not necessarily provider interaction but patient interaction with a smart device or a smart app.
Let me take the eldercare one first. There's a group I work with; it's actually more like an AI incubator. They have a set of companies, but they have some really interesting core technology in connection with eldercare. For example, just using video, which could be from your phone or a camera you have set up, and based on gait analysis using AI, they're actually working on being able to predict when elders may fall. They've identified patterns within the gait that predict falls. And so that can prevent falls, instead of trying to figure out why someone fell or how to treat them afterwards, which may be important, but if you can prevent the fall in the first place, it's even better. So that type of technology, I think, is going to become more ubiquitous. And again, that's part of the user-facing side of this; people will help themselves with this technology in areas like eldercare and the like.
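As an illustration only (this is not the incubator's actual system), here is a minimal sketch of the kind of pipeline gait-based fall prediction implies: a pose estimator reduces video to keypoint trajectories, simple gait features such as stride-time variability are computed, and a classifier scores fall risk. The features, the synthetic data, and the assumption that ankle positions have already been extracted from video are all hypothetical.

```python
# Illustrative sketch only. Assumes a pose estimator has already reduced
# each video to a time series of ankle positions; the features and the
# synthetic training data below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

def gait_features(ankle_y, fps=30):
    """Summarize one walk into simple gait features.

    ankle_y: 1-D array of an ankle's vertical position per video frame.
    Returns the mean and variability of stride time; irregular stride
    timing is the kind of pattern the transcript suggests can precede falls.
    """
    # Heel strikes appear as local minima in the ankle trajectory.
    strikes = np.where((ankle_y[1:-1] < ankle_y[:-2]) &
                       (ankle_y[1:-1] < ankle_y[2:]))[0] + 1
    stride_times = np.diff(strikes) / fps  # seconds between heel strikes
    return np.array([stride_times.mean(), stride_times.std()])

# Hypothetical training data: feature rows labeled 1 = later fell, 0 = did not.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([1.0, 0.05], 0.02, (50, 2)),   # steady gaits
               rng.normal([1.1, 0.20], 0.05, (50, 2))])  # irregular gaits
y = np.array([0] * 50 + [1] * 50)
model = LogisticRegression().fit(X, y)

# Score a new walk: a synthetic ankle trace whose stride timing drifts.
t = np.arange(0, 10, 1 / 30)
trace = np.sin(2 * np.pi * t / (1.0 + 0.1 * np.sin(t)))
risk = model.predict_proba([gait_features(trace)])[0, 1]
print(f"Estimated fall risk: {risk:.0%}")  # high scores could alert caregivers
```

A production system would likely learn features directly from the video rather than hand-building them; this version only illustrates the video-to-features-to-risk-score shape of the pipeline.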
With respect to isolation, it's really interesting. One of the areas that we really haven't touched on is robotics, but there's a growing number of humanoid robots that will be companions, and a lot of them are helpful. They do everything from cleaning your bathroom to, over time, interacting with people in a way that will almost feel like there's a friend there, someone they can speak to. I know it may seem a little futuristic, but a lot of these humanoid robots are actually out in the market right now. They're still kind of early, but they're rapidly gaining in usefulness. And so I think that's something we'll see a little bit more of as well.
AI can be used in a lot of ways to predict when people are experiencing loneliness. Or, based on what you say and what you write, it can in many cases actually detect human emotion; there's a lot of technology around that. Determining when people are sad or depressed is one of the things it can help identify and flag, either for loved ones who want to take care of them or for medical attention. Again, it could share that with providers, etc. So I think we'll see all of those areas continue to improve over time.
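For a concrete sense of what text-based emotion detection looks like, here is a minimal sketch using the Hugging Face transformers pipeline. The model name is one publicly shared emotion classifier, used only as an example, and the sadness threshold and flagging logic are invented for illustration; a real deployment would tune both and route alerts to caregivers or providers, as Jim describes.

```python
# A minimal sketch of detecting emotion in text, assuming the transformers
# library and one publicly shared emotion classifier (an example choice,
# not an endorsement of any specific model).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",  # assumed example model
    top_k=None,  # return a score for every emotion label, not just the top one
)

def flag_low_mood(message: str, threshold: float = 0.6) -> bool:
    """Return True when sadness dominates a message.

    The 0.6 threshold is arbitrary, purely for illustration.
    """
    result = classifier(message)
    # Depending on the library version, results may be nested one level deep.
    items = result[0] if isinstance(result[0], list) else result
    scores = {item["label"]: item["score"] for item in items}
    return scores.get("sadness", 0.0) >= threshold

print(flag_low_mood("I haven't spoken to anyone in days and I feel empty."))
```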
Yeah, so for any given application there may be some specific issues, but let me hit the broad categories that generally cover most of them. One of the first and biggest issues is data. As I mentioned earlier, all AI is trained on different forms of data. If it's personally identifiable information or health-related information, it's obviously important for companies training these models to make sure they have the right to use the data they're training on. And there are already about a dozen lawsuits alleging that some of these large language model tools were trained on data the developers didn't have a right to use; some of it is personally identifiable information, some is biometric-privacy-protected information, and some is health-related information. So with data and training, it's really important that anyone who gets involved in training their own models ensures they have the right to use the data they're using.