Episode Transcript
[00:00:00] Speaker A: Welcome to the Entrepreneurial Strategy series. This is the January 2026 session. We're super excited to kick off the year with this topic.
AI and IP in Healthcare and the Health Sciences.
We put on the Entrepreneurial Strategy series every month. It's usually the last Thursday of the month, usually at 12pm Eastern, and we try to cover all topics related to the entrepreneurial journey, from concept to exit. The actual session is being recorded and will be downloadable as a podcast, but it is also available at our YouTube site. You just search YouTube for Gerhardt Law or the Entrepreneurial Strategy Series and you will find all the ESS's, as we call them. You'll find all the past episodes, and there's some great information out there, so consume as much as you want.
We are always looking for topics and speakers, so if you're interested in possibly speaking on a panel or have an idea for a topic, please get in touch with me and I'm happy to chat with you about that. Anyway, you can't be an entrepreneur, or be in healthcare, or even dabble in health sciences, and not hear about artificial intelligence. You can't just be a living human being these days and not hear about artificial intelligence. And so this is not actually our first ESS on artificial intelligence. We have done a few other sessions on AI, but this one is really important because this is an AI and IP focus on life sciences, health sciences, and healthcare.
We're really excited about who is in the audience. We heard from some people, but now I'm super excited to introduce our speakers.
I'm going to have them introduce themselves.
We're going to start with Sheila Mintz. And so Sheila, let me just spotlight you and please take a moment to introduce yourself. Who are you? Where are you calling from?
Yeah, a little bit about yourself.
[00:02:09] Speaker B: Hi, I'm Sheila Mintz. I'm an attorney in New Jersey. I've been practicing health law for 30 years, and recently, within the past couple of years, I have been involved in implementing AI systems within some medical practices and hospital systems that I work with.
It's really, really interesting. It's kind of the Wild West, as David mentioned.
As I said, I'm the chair of health care law for a law firm in New Jersey called Capehart Scatchard, which is in New Jersey and New York. But I'm only going to be there for one more day.
So as of Monday, I will be joining a regional and national firm called, I can't remember the name yet, Royer Cooper Cohen Braunfeld, which is in Philadelphia, New Jersey, New York, Nashville, and a couple of other places. I'll be head of health care for that firm, and I'm very excited about the opportunity. It's such a quick transition that I didn't have any of my new contact information for the promotion of this webinar, but I can put it in the chat if anybody wants to reach out and give me a shout after this.
So I'm going to be talking about some of the things that have concerned me in implementing AI in healthcare: the ethical issues that everybody's been talking about; the HIPAA and data security issues, which are always an issue in health care; and then regulation, the existence of regulation in the United States to the extent it exists, and best practices in the absence of regulation.
So I won't talk yet about what I'm going to talk about, but those are the three high points I'd like to address when I speak.
[00:04:28] Speaker A: Amazing. Well, Sheila, thank you. Congrats on the new firm.
We can definitely share your information. Basically, after this entire session is recorded, we send a recap to all the registrants. There's about, I think, 400 registrants, so everybody will get it, including everybody on this call. They'll get a recording and they'll also get your contact information. So thank you for joining us and we're super excited to have you. This is actually Sheila's second ESS with us. We did one very early on years ago on cannabis, and so check that out as well.
My next guest is actually my partner at the firm. He is the founding partner of Gerhardt Law. I met him about 10 years into my career and I've been working at the firm now for about, I think, 11 or 12 years. And he truly has changed my life for sure.
And so I'm super excited to have him. This is also not his first ESS; we've done a few together, so check those out as well. Richard Gerhardt, the founding partner of Gerhardt Law: take a moment, introduce yourself, and let everybody know who you are, what you're working on, and what you're doing.
[00:05:40] Speaker C: Well, thank you very much, David. It's a pleasure to be here. And I have to say that the last 12 years working with you has just been a blissful experience. You're an amazing attorney, an amazing person. So we feel lucky to have you with us and truly appreciate everything that you do, including putting together this Entrepreneurial Strategy Series, which has been going on for a while and has really been, I think, one of the high points of Gerhardt Law. At Gerhardt Law, our mission is to help companies start new businesses and support innovation at larger companies as well, and having that education and information out there and available for people is, I think, a crucial part of our mission to help with the commercialization and intellectual property protection process. As for myself, I have been a patent attorney, an intellectual property attorney, for almost 37 years. I started my career at Jones, Day, Reavis & Pogue, now Jones Day, a big international law firm. I worked in their Swiss office in Geneva, Switzerland, and I did corporate transactions and international arbitration. I came back to the United States and started working in the field of intellectual property, and found out after the first couple of weeks that this was really my passion and something I could do for the rest of my life. That was about 37 years ago, and I haven't changed my mind since.
After working in private practice for a bit in Chicago, I ended up at CIBA Vision, where I took on the role of head of intellectual property, General Patent Counsel.
And then later I moved from Atlanta, Georgia, which is where CIBA Vision was, to Novartis Pharmaceuticals, where I headed up the US function for intellectual property. Working at Novartis and CIBA Vision really gave me a lot of industry insight into how the healthcare system works, and there were great opportunities for training and for learning. Eventually I left Novartis and started Gerhardt Law.
We'll be celebrating our 20-year anniversary coming up in June. So that's a real milestone for me personally and for our team. I kind of think that if you can stay in business for 20 years, hopefully you're doing something right. Right?
And I attribute it to having team members like David who always go the extra mile for our clients.
So my specialty is intellectual property. I'm mostly a patent geek. I do a little bit of trademarks, but patents are my specialty, and the firm has a focus on life sciences. So that's about me. In addition to my work at Gerhardt Law, I also serve on the leadership council of Rutgers Health. Rutgers Health is the part of Rutgers University that oversees all of the health care activities of the university: the research, the two medical schools (which they're trying to merge), the dental school, the nursing school, the School of Health Policy.
And I work with the chancellor there on a team to help support the initiatives associated with Rutgers Health.
And in addition to that, my wife and I host a radio show called Passage to Profit. It's a nationally syndicated radio show heard in 40 markets across the United States. And there we talk with entrepreneurs who are interested in talking about their business journeys. And it's a lot of fun for us.
And again, it's part of our educational mission. So that's me.
[00:10:04] Speaker A: Awesome. Thank you, Rich. Thank you for that. It's super fun and super cool to have you.
We'll come back to you a little bit as to the topic of artificial intelligence and intellectual property, and maybe even we can share some insight as to how we at the firm are using AI.
But let's go back. Let's start with Sheila. Let's get into it, everybody. You heard in the introductions about EMRs and HIPAA and ethical issues and so on. So I think one of the things we could start off with is this: there's a lot of good and bad to artificial intelligence. Sheila coined it herself as the dark side of AI. There's the dark and the light, the good and the bad. Let's stick a little bit with the dark so we can end with the good later. So let's talk about some of the dark sides of AI and its intersection with health care. Maybe you can talk about that.
[00:11:08] Speaker B: Well, I guess some of the dark side, I've seen it in a few different ways.
I guess first, maybe first, not necessarily foremost, there are a lot of people seeking to develop AI systems, AI programs, AI models, platforms. And health care is a huge business right now. Everybody's really interested in it. There's a lot of private equity and venture capital investment in it; there is a lot of money being invested in health care. So it's a very sexy way to think about starting your AI platform, to do something in healthcare.
And that's great.
But some of the people, as I said, it's a Wild West, are a little bit too cowboyish for my taste personally, and are not really considering all of the issues in what you have to do to provide effective patient care.
I've had developers, ones that clients sought to establish relationships with, get mad at me because I insisted that there be some HIPAA protections, and they felt that was sort of irrelevant to what they wanted to do. Some are a little bit cavalier in their approach. Those are some of the things I've experienced with a number of different developers that have sought to implement programs in some of my clients' systems and have gotten kind of testy with me when I wanted them to change certain ways they handled things: to make sure there was data encryption and those kinds of things, and that they would not use the protected health information of the practice's patients for their own benefit.
People have to be aware of who they're dealing with and whether those people are as committed to ethics in practice and data protection as they are, or whether perhaps they're seeking their big exit strategy. Right? Their big buyout.
So those things are sometimes a little bit tricky. But it's very helpful to have people in the practice who are really familiar with best practices: how you do data encryption, making sure HIPAA is respected, and making sure vendors are setting up these models so that their algorithms are transparent, things that make it easier to implement. The other dark side that I see in working with my medical practice clients is that health insurers have really glommed onto AI as a method for medical record review. I know one of the participants mentioned that to me, and this is where I'm seeing it in a bad light, in that they essentially go in and they deny everything.
Particularly in mental health. For some reason, mental health seems to have a target on it lately, at least in New Jersey. I have clients that just send me reams and reams of EOBs saying denied, denied, denied, lack of medical necessity, without any explanation. And because it's AI, it's difficult, almost impossible, to get a response from the company doing the medical chart review as to what in particular they found missing in the coding, or in the services provided, or in the patient history, or any of those things. So it's very hard to know how to remedy the situation and prove medical necessity if you don't know what they find deficient to begin with.
Interestingly, on the good side of it, there are some entrepreneurs seeking to counteract the very prevalent use right now of AI in claims denials, by helping medical practices really perfect their coding so that they will not get, or will be less likely to get, those reams of paper with deny, deny, deny on them, and will be able to actually overcome that, meeting every single element of every single code that needs to be met so that, in fact, they can get their claims paid.
So that's a little bit of entrepreneurial spirit being shown to overcome some of the dark side of it.
[00:16:43] Speaker A: Yeah, and that's actually a good segue, because many of those entrepreneurs creating AI systems to remedy some of the problems you just mentioned are probably coming through our doors to protect those technologies. So it's a good segue to Rich. Maybe you can talk about some of the dark side of AI and intellectual property.
[00:17:08] Speaker C: Sure. And I do want to kind of pick up on some of the themes that Sheila mentioned.
I heard a lot of questions at the beginning, during the networking piece, and I think you have to go back to current law and current rules and regulations as a framework to answer those questions. Just because AI is on the scene now, it doesn't mean that the law and the legal obligations that you were under previously have changed. Maybe they will change, or maybe something in your situation will be different. But as a starting place, you have to go back to the law as it currently exists if you want to be compliant. So I just want to make that point.
It's important that you talk with attorneys like Sheila or Gerhardt Law for particular legal issues. But as you're trying to understand the impact of AI, you need to go back to that initial legal framework. And it is kind of counterintuitive.
When Sheila was talking about developers: there's always been this tension between life science and software, even in terms of the intellectual property landscape. In life sciences, intellectual property, patents in particular, is fundamental to the business model. People rely on intellectual property as a basis to commercialize their new medicines and their technologies. Software is a little bit different, right? There's still this debate in the software community about the value of intellectual property, and there are people who are evangelistic about open source. With the evolution of the Internet and the evolution of AI, data is the product and it's the currency, so those companies are looking for ways to exploit the information and profit from it, whereas the life science model is very, very different. So when Sheila talks about aggressive developers, it doesn't surprise me at all, because there are a lot of people on the software side who approach things from an open source perspective and are less concerned about privacy. And her point is really important to keep in mind: you definitely want to find people who are used to working in the life science areas and appreciate the importance of privacy, because if they get it wrong, you could be the one who suffers the consequences. Right?
And so I think that is a very important point. On the dark side of AI, I always remember this particular situation I heard about: there were some scientists in Europe who used AI to generate compounds that would cure a particular disease. They also used AI to create 50,000 compounds that could serve as poisons.
Right? And some of those poisons were undetectable.
And so the dark side of AI is that there inevitably will be people out there who use AI for the wrong reasons.
And I guess that's just the trade-off we have to make. We recently hired a new managed service provider for the firm, and they are so concerned about security that I honestly think it slows down our ability to execute by something like 5%.
But the alternative is having no security for our email, no protection against malware, all of those things.
So there are always trade-offs with these new productivity tools, and you're going to see that with AI as well.
So anyway, that's certainly one dark side of AI. And in the field of intellectual property now, there are a lot of things that can happen, and we're just waiting to see what will happen. For example, somebody could create documents that serve as prior art, which would block somebody's ability to get a patent. These documents could be very broad and very comprehensive, and they could be based on AI speculations where nobody's ever done an experiment.
Right. We don't even know if it works. People could be filing tons of patents or publishing extremely large, lengthy papers where no human being was involved, and using that to block the legitimate work of scientists who actually understand the technology and are trying to bring it forward.
I think there's just so much new territory that we haven't discovered yet and haven't understood yet that it's really too early to tell.
But that has to be balanced against all the possibilities, too. And I think everybody here has worked with artificial intelligence in some capacity and can appreciate the possibilities.
It's a very challenging time for attorneys, and for all of us together, in understanding where all of this is going. And I guess that's why we're all here.
[00:23:41] Speaker A: Yeah, good points, Richard. I want to get back to some of those points, and I want to hear Sheila's thoughts on something you said about the current legal frameworks that we have, one of those being HIPAA. We talk about HIPAA all the time. Maybe you can break that down a little bit, because to Rich's point, AI is literally open; there is no privacy per se. You might be able to pay for privacy, depending on the AI software you use, but in general it's open, which is the direct opposite of HIPAA, which is about privacy. So maybe you can talk a little bit about this: what's the current regulatory framework, and where are we going? What are we doing? How is this being treated?
[00:24:26] Speaker B: Well, as usual, the law lags about a hundred miles behind innovation.
The law really has not caught up with AI at the federal level. There is some guidance that has been issued about AI, about best practices, but it was limited to very discrete areas of AI, really more about algorithmic transparency than data protection. Of course, there's HIPAA. Now, HIPAA is the Health Insurance Portability and Accountability Act of, I don't even remember what year it was enacted, but it's been around for probably 30 years at least.
And that addresses protecting patients' data, PHI, which is protected health information.
It's easier to do that in the way we've done business up until now, because with traditional software, not an AI system, which is much more amorphous, you can create encryption and set up systems to protect the data. AI, as I said, is amorphous. It's all over the place.
It's open.
So there are best practices for that, but there has not been a whole lot of regulation in the United States. In the prior administration there was some guidance, but there's not a lot of federal regulation. In fact, the current administration has said that they don't want to regulate AI at all.
You know, and so it's really incumbent on the states to enact their own regulations. And that's been happening in a variety of cases. I mean, as you might imagine, California, which is the base of Silicon Valley, has enacted the most regulation.
Most of it is not really related to data protection per se, but more technical types of regulation.
Several states, Arizona for one, have enacted rules on using AI in health care, basically prohibiting medical decisions from being made without human input.
There are several other technical kinds of regulations that have been enacted by a number of states, but across the board there hasn't been a great deal of legislation related to data privacy per se. Just a couple of states, and that's basically it. So you're kind of stuck in this patchwork of regulation.
So we're basically going back to HIPAA. When AI really started to be implemented in health care over the last few years, I went back to HIPAA and to our HIPAA Business Associate Agreement, which is an agreement that every covered entity, meaning a physician, a hospital, anyone providing clinical services, has to have with any party that might receive their patients' protected health information.
[00:28:32] Speaker A: Hang on.
Sorry to interrupt you. Is everybody hearing that echo on Sheila's line?
[00:28:38] Speaker C: I'm hearing it, yes.
[00:28:39] Speaker A: Yes. I'm also hearing it.
[00:28:40] Speaker B: Oh, dear. I don't have my cell phone on. I'm not sure what else I can do.
[00:28:46] Speaker A: I think maybe just talk closer into the mic on your computer. It was just in the last bit.
[00:28:50] Speaker B: Where is the mic? On my computer? All right. I don't even know where it is, but I'll get closer. How's that? Let's try. Is that better?
[00:28:58] Speaker A: Yeah, yeah, much better.
[00:28:59] Speaker B: Okay, good. So HIPAA basically, as I said, describes what you have to do to protect patient data. I'll use a medical practice as the example, because there are a lot of covered entities under the law and it gets too complicated otherwise.
So a medical practice, or any provider of clinical services, has a legal obligation to protect the data and information of their patients: any identifying information at all. Not just what illness they may or may not have, but the name, the address, anything that would potentially identify them.
So if you're thinking about using AI in any kind of clinical setting, it's almost impossible that an AI platform is not going to have access in some way to that PHI. So how do you build out and create policies for practices that will allow them to protect it? Is that helping?
Can everybody hear me okay?
[00:30:31] Speaker A: Yes.
[00:30:31] Speaker C: Yes.
[00:30:32] Speaker B: Okay, good. I'm sorry, I lost my train of thought. Guidelines for doing the best you can to protect the data: some of that is providing for really robust data encryption, for the data to be encrypted in transit and at rest and to be stored in a particular way, and making sure that it's de-identified if the vendor, the software developer, is going to use it in any other way.
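(Editor's aside: as a concrete illustration of the "encrypted at rest" piece, here is a minimal sketch using the symmetric Fernet scheme from Python's widely used cryptography package. The record contents are made up, and a real deployment would keep the key in a key-management service; this shows one way it can be done, not what any particular vendor does.)

```python
# Minimal sketch: encrypting a record "at rest" with Fernet
# (from the `cryptography` package: pip install cryptography).
from cryptography.fernet import Fernet

# In practice the key lives in a key-management service, never beside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"patient": "Jane Doe", "dx": "hypertension"}'  # made-up PHI

ciphertext = cipher.encrypt(record)          # this is what gets written to disk
assert cipher.decrypt(ciphertext) == record  # readable only with the key
```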
A lot of people do data mining or use de-identified data to sell and create other products.
So you want to make sure that if it's de-identified, they have no ability to then re-identify it. You can imagine that there are a lot of potential abuses that can happen if you have information about people with certain types of conditions; it could be used in ways that are detrimental to the patients. And ultimately, as I said, with covered entities, with physician practices or hospitals, if your vendor has breached HIPAA and has released data of your patients, the buck always stops with the doctor.
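(Editor's aside: a toy sketch of what that de-identification step can look like in code. The field names are hypothetical, and this covers only a few of HIPAA Safe Harbor's 18 identifier categories; Safe Harbor also requires having no actual knowledge that the remaining data could re-identify the patient.)

```python
# Toy Safe Harbor-style de-identification: drop direct identifiers and
# coarsen quasi-identifiers before data leaves the practice.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "ssn", "mrn"}

def deidentify(record: dict) -> dict:
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "birth_date" in clean:     # dates finer than the year are identifiers
        clean["birth_year"] = clean.pop("birth_date")[:4]
    if "zip" in clean:            # keep only the 3-digit ZIP prefix
        clean["zip3"] = clean.pop("zip")[:3]
    return clean

patient = {"name": "Jane Doe", "ssn": "000-00-0000", "zip": "07901",
           "birth_date": "1961-04-02", "dx": "hypertension"}
print(deidentify(patient))
# -> {'dx': 'hypertension', 'birth_year': '1961', 'zip3': '079'}
```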
So regardless of what you provide for, even the best protection in the world, if it doesn't work, the buck always stops with you. It's really incumbent on the medical practice, on the covered entity, to make sure that these protections are built in as best you can, given the situation right now. But because this is such a Wild West type of situation, I view the agreements as living documents, in that they will constantly be refined and ameliorated as more and more is known about how to use AI in healthcare and about the different ways it's being implemented. So HIPAA is really the basis of any regulation right now in terms of how to best use AI in medicine. What you try to do in the business associate agreement with a vendor or a developer is to shift some of the responsibility for protecting data onto the vendor. But that doesn't always work; there's some actual due diligence that has to happen.
One of the things that I always try to build into my HIPAA business associate agreements is the medical provider's ability to audit, to ask for systematic audits, say every quarter or every six months, to make sure that there hasn't been a breach and that data is really being encrypted the way it should be, so that at least you have some sense that the data is being protected properly.
So those are the really big issues in terms of data protection. HIPAA is the basis right now of any kind of legal structure for AI in health care.
[00:34:28] Speaker A: Okay, well, that's great. There's definitely a lot to unpack there. But Rich, do you have any comments on that before you take on the same question? If you have comments, please share. But then your question will be: what's the current regulatory structure for intellectual property, and does it address AI?
[00:34:50] Speaker C: So, Sheila, you'll be happy to know I was taking notes during your discussion. So thanks for that.
I just have a couple of things to maybe add to that. One is that obviously anything that goes out on any of the LLMs, whether it's ChatGPT, Perplexity, Grok, Google Gemini, all of those platforms have serious security risks associated with them, and they're very open about that. So in terms of your work, you have to be very, very careful to make sure that you don't put any confidential information on those platforms.
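(Editor's aside: one simple guardrail behind "don't put confidential information on those platforms" is to screen outgoing text for obvious identifiers before anything reaches an external LLM API. A minimal sketch follows; the patterns and the downstream send step are hypothetical placeholders, and real deployments use proper PHI/PII detection, not a couple of regexes.)

```python
# Block prompts that look like they contain PHI before calling any LLM API.
import re

PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-shaped numbers
    re.compile(r"\bMRN[:#]?\s*\d{6,}\b", re.I),  # medical record numbers
]

def check_prompt(text: str) -> str:
    """Raise instead of letting a suspicious prompt leave the building."""
    for pattern in PHI_PATTERNS:
        if pattern.search(text):
            raise ValueError("possible PHI detected; prompt blocked")
    return text  # only now would it go to the (hypothetical) LLM client

check_prompt("Summarize best practices for BAA audit clauses.")  # passes
# check_prompt("Patient MRN: 12345678 presents with ...")        # raises
```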
Now, that said, I'm informed that Microsoft Office and Copilot have a much higher level of security: the way Microsoft sets up their LLMs, you're considered to be a tenant, and the confidential information you put into that tenant is not subject to disclosure on other LLMs and is not otherwise used to train other LLMs, as on the other platforms.
It's also true that if you want higher levels of security with an LLM like ChatGPT, you can get enterprise tiers that approach the level of security you might get with the Microsoft tenant.
So, Sheila, just thinking about what you're saying: if I were doing an agreement now, signing something with an AI development company, I would want some sort of warranty or assurance that whatever LLM or model they might be using, they are using the highest levels of security. And I would get our technical people involved to verify what that level of security actually is.
A lot of companies, and I'm sure this is not news to many of you, are building their own LLMs. One of our projects at Gerhardt Law is to build an LLM based on our data alone, one that is secure and not connected to the general LLMs. So that's also a strategy, and I'm sure many of you are working with institutions that are in the process of creating those singular LLMs.
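(Editor's aside: the transcript doesn't say how such a firm-only model is built; one common pattern is to keep the documents in a private store and retrieve from it locally, so queries never leave your environment. A toy sketch of that retrieval step with scikit-learn, using invented documents.)

```python
# Toy local retrieval over a private document store (pip install scikit-learn).
# Nothing here touches an external service; only your own documents are searched.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [  # invented stand-ins for a firm's private documents
    "Our BAA template requires encryption in transit and at rest.",
    "Matter 42: provisional application filed on an AI triage routine.",
    "Firm policy: no confidential client data in public LLM prompts.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(docs)

def retrieve(query: str) -> str:
    """Return the private document most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)
    return docs[scores.argmax()]

print(retrieve("What does the template say about encryption?"))
# -> "Our BAA template requires encryption in transit and at rest."
```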
From an intellectual property standpoint, we have clients that are patenting and protecting these types of inventions. Just to give an example, and I can't tell you too much about it because I don't want to violate any of our client confidentiality obligations, one of our clients has developed software that interacts with a state database and integrates payment methods for medical payments. And I just got an email from him a couple of days ago saying that somebody is looking to acquire the company.
Right. So this is a very hot area of entrepreneurship right now.
I was at a recent Morgan Stanley presentation, and the presenter said 63% of investment dollars are now going into AI.
And the healthcare industry as a whole is really embracing AI from an investment standpoint, from a consulting standpoint.
So David and I were joking this morning: can you have an AI small molecule? Just putting AI in there is maybe a way; if you can incorporate AI into your business plan somehow, then you maybe stand a better chance of attracting investment. And in fact, we have other medtech companies, software-as-a-service companies in the medical field, that are putting AI into their presentations.
And we are incorporating the AI component into the inventions that we're trying to protect. One of the questions we ask is: is there room for AI, maybe in the future, when it comes to this invention? We want to make sure that if the invention becomes AI-augmented, we're protecting that possibility as well, even if it's a pretty straightforward type of technology.
So that's what I would say. I think AI is a very hot area right now for intellectual property protection.
I would say that if you are interested in patenting something in the AI realm, you should get to it sooner rather than later. Whenever you get these super hot technologies like AI, where everybody is working on them and there's a lot of activity, there's a ton of patent applications getting filed. Right?
And in the US, and just about everywhere else, it's first to file. So getting an early filing date on your intellectual property is really important, because there are a lot of other people also filing inventions around AI. And it's a little challenging, because there's this window from the time you file an application until the time it's published, typically 18 months, and we can't really know what's already in there. So there may be something out there that's kind of similar to what you're doing, and we can't uncover it through any kind of searching. So it's important to get the invention on file as quickly as possible.
[00:41:39] Speaker A: Yeah, I love that, Rich. I think that's actually a nice segue into some of the good about AI, definitely the investment in AI. I think I read today that last year there was something like $368 million invested in companies, and $68 million of that, nearly a fifth, was all in AI-related stuff.
So yeah, even though it's the Wild West and it's hot, to Rich's point: if you want to protect some of the things you're inventing related to AI, you should race to the patent office as quickly as you possibly can.
[00:42:25] Speaker C: And if I can just make one other point: if you're working at a company and they're doing an AI implementation of some type, don't assume that the people working on it are thinking about intellectual property protection.
Normally they're focused on getting the implementation done. But if it's a unique application and it would provide a commercial advantage for your company, it may make sense to just say to somebody, this seems like a really cool new idea; has anybody thought of protecting this? The answer may be yes, but you'd be surprised how often it's, oh, we really didn't think of that. So it's something to keep in mind. Having worked at big companies, I'm very familiar with that phenomenon, and one of our jobs was to constantly educate the team so that if they saw something that was protectable, they could bring it up and a decision could be made.
[00:43:35] Speaker A: Right, good point, good point.
So on that note, Sheila, on the good side of AI and healthcare: how is it being used?
[00:43:47] Speaker B: Well, I think there certainly are some really good, conscientious developers doing things that are really wonderful in assisting with diagnoses. They're implementing a lot of AI in electronic medical records software and in diagnostic software, and that can be tremendously helpful to providers in making decisions about conditions that could be particularly tricky. And I think there's some real opportunity for that, given that there's a shortage of medical professionals and people are very stretched. In the United States particularly, because of the way our medical system works, physicians don't always have a lot of time with patients, and having AI available as a diagnostic tool can let them pay more attention to a patient and help them make decisions about how to treat that patient. I guess my feeling is that AI can be really good as guidance. And I hate that I'm talking about the downside again, but you can't talk about the good.
[00:45:33] Speaker A: Without talking about the bad, right? At the end of the day.
[00:45:36] Speaker B: Know you can't, you know, like, there's, there's everything got a silver lining or whatever, but, you know, so, yes, there's some really, really great ways to implement it. But I think, I think, you know, you have to really not, not view AI as a substitute for the judgment of, of a physician or, or other medical provider because, you know, it's it really depends on how the AI, you know, the AI LLM is built. You know, I mean, that really is. So it has to be. It. It's really just incredibly important that it be built in a way, you know, I don't even want to go into, because we don't have all that much time left. But I'd be happy to do another presentation talking about, you know, transparency in, you know, in the algorithm and the black box, decision making, which can be a problem, and all of these other issues that are coming up which make it more difficult to utilize in health care.
But I think as a tool, it's great.
But because AI lacks some of the empathy and awareness of socioeconomic issues that are prevalent in medicine, I think it's very important that the last word be provided by a living human being and not by the AI system. I'll go into embedded biases and things like that in AI at some other point, since we don't have a whole heck of a lot of time left, but it's important to have the human component. I had a client who is an interventional cardiologist and a really, really nice guy; he's been practicing for, I guess, 20, 25 years. And he said, oh, when I go to the hospital, they have the EMR and it's so great.
And the AI is able to correctly diagnose about 90% of my patients' conditions. And I said, well, that's wonderful, but what about the other 10%?
The difference between this physician and perhaps somebody new coming up is that when AI is used in training, people tend to rely on it, to their detriment and to their patients' detriment. He's aware, because he's had to learn how to exercise his own knowledge and judgment, of when AI is wrong, and so he's able to pick that up. But someone coming up as a newly trained physician, having gotten very familiar and comfortable with AI, may tend to rely on AI and less on their own judgment, and that can lead to problems.
I see that everywhere. AI is everywhere, obviously, including the practice of law. One of my partners was telling me that she was in court, and the opposing counsel was sanctioned by the judge because they had presented a legal brief in which every single citation had been created by AI, and every single case law citation in the brief was wrong.
And so the person was sanctioned by the judge and is facing possible professional disciplinary repercussions.
So there are downsides to relying too much on AI, and we wouldn't want that to be the result. In medical care it's pretty important, because you're talking about someone possibly dying or becoming seriously ill as a result. So you really have to make sure there isn't an encouragement to rely solely on AI for making those difficult decisions.
[00:50:04] Speaker A: I love that, Sheila. So long live humans and the human component.
[00:50:09] Speaker B: Yeah, along with humans.
[00:50:11] Speaker A: I mean, we love AI, but let's talk about the human component. Rich, you must have some comments on what Sheila said when it comes to intellectual property and humans.
[00:50:22] Speaker C: Well, another dark side of AI story, right? And I don't want to dwell on this, but it's a very, very sad story.
There was a man who was going through a rough emotional time; he had gone through a divorce and lost his job, and he was relying on AI as a counselor.
And AI reinforced his negative beliefs about himself, and he ended up taking the life of his mother and also himself.
This came to light through a lawsuit that was filed in California.
The AI had constantly reinforced this notion of himself, that he was worthless. At one point he even asked, should I see a clinician, or should I take a mental health test? And the AI said, no, you're fine. You don't need to do this.
And so the human side is so important, because AI seeks to please people. The software wants you to use it, and it will tell you things that you want to hear. Right? And obviously that has a lot of repercussions.
But it's also a marvelous tool. For example, in my work I have a billion questions about all sorts of things, and I never felt like I could get answers to them. But I use AI now to do that kind of research and that kind of analysis.
[00:52:24] Speaker C: Whereas before I would have had to pay a lot of money to a consulting firm to get this information, now I can get it instantaneously. Obviously, you have to look at it and ask, is this reasonable? And it's okay to challenge it. But the uses in the medical profession are incredible.
I recently had to go through some testing, and I got some reports back from the testing laboratory.
I decided to put them into AI and have them analyzed.
I felt like, well, okay, if it gets into the large language models, I don't care because maybe this information could be used to help other patients with the same issues.
It came back with an analysis of my blood work that was just amazing. The tests they ran were super complicated; it's not something you would normally see. And when I met with my doctor, it was like, well, this is what I learned, doctor. And he's like, yep, that's right.
So there are still a lot of positive uses out there. Definitely a double-edged sword.
[00:53:50] Speaker A: Awesome. Okay.
By the way, let me remind everyone that we are recording this. Because you've registered, you will get a follow-up email from us with the actual recording. So don't worry.
The other thing I'll say is, if you have a question, just raise your hand on Zoom. There are some questions in the chat, but before we get to them, Sheila, maybe you can spend a little bit of time on what you mentioned about having good agreements with service providers and what that means. And again, everyone, if you have questions, just put your hands up and we'll turn to them shortly.
[00:54:35] Speaker B: Well, typically you have an agreement with a vendor, with a developer, for a certain product. So you want to look to the agreement for certain things that are important, like what privacy protections they have instituted to protect data, and how they have sought to limit their own liability. Those are the two things I really look at in the agreement. Obviously there are other terms, like how long you're going to use it or how much money you're going to pay them and all of those things. But for privacy and protection issues, I look at the limitations of liability and whatever representations they make about their encryption and their protection of your data. In addition to that, when there's even a potential for a transmission of patient information, there's an agreement that goes between a medical provider and a vendor, which is called a HIPAA Business Associate Agreement.
And that basically talks about liability for breaches and things like that. And I have made my agreement more robust. It's pretty robust as it is, and takes into account federal law, New Jersey law, and other state law to protect data and to protect identifying information.
But now I've also implemented into the Business Associate Agreement that I use a lot of whatever guidance I can find on protections for AI, because even if a vendor's platform is not wholly AI, there's often a part of it that is, so I really try to build all of that in. And I'm using my new Business Associate Agreement for everyone, regardless of whether there's an AI component or not. Because if there isn't one today with that particular vendor, it's very possible that in another six months there will be, and then you won't have provided for it. So I just like to take into account that there's going to be a potential AI component to every type of engagement. That's really the seminal, foundational document that describes the relationship between the medical provider and the person providing services to their practice, to their business, and that governs data protection. It's the real foundation of how you determine who's going to be liable and how much they're going to contribute. And if there's a breach, there are certain practices under HIPAA that have to happen depending on the size of the breach, meaning how many people's records have been released. There are some large hospital systems that have millions of patients, so you can imagine that it's pretty expensive to remedy a breach.
So you want to transfer some of that liability to the vendor if they're the cause of it. And when you do that, you not only want to transfer the responsibility for the breach to the vendor, but you also want to make sure that they have the wherewithal to remediate the breach, should there be one. For example, are they insured for a data breach?
That's really important, particularly with startups, and a lot of the AI platforms are startups. They don't have a lot of money; they're not well established, not necessarily financially secure.
So it's wonderful if you have all of these protections built into your agreement, but the old adage that you can't get blood from a stone is still true. If there's a data breach, it has to be remedied. And if your vendor is a startup with three people who just graduated from MBA school, and all they have is student debt, you're going to get a whole lot of nothing to remediate that breach, and the physician practice is going to be stuck with it. So you want to make sure that they have actual data breach insurance, robust enough to handle any conceivable breach given the size of the data set that's going over to them.
[00:59:57] Speaker A: Yeah, those are great points. As we've all discussed, AI development companies are coming out of everywhere, out of every corner. So doing your due diligence on the right company, what they can do, what they're capable of, what type of insurance they have, that's really key. Rich, any comments there before we turn to questions?
[01:00:24] Speaker C: Sure. Well, we were talking in the pre-meeting about confidentiality clauses too, and NDAs.
What you wouldn't want is for somebody to take your formulation or your confidential information and put it into a large language model. From a patent perspective, the question hasn't been decided whether putting technology into a large language model constitutes a public disclosure. There are a lot of attorneys who think that it does.
And so I think another layer of protection, beyond what Sheila is talking about, is ensuring that your NDAs have provisions that are explicit: no, you can't put this information into a large language model unless you talk with us first. If you want to be reasonable, you can say without our permission. Then, if they come back to you and say, well, we want to do some analysis, you can talk through with them what they're planning to put in and what level of security they have, to make sure it's protected from a confidentiality standpoint for purposes of patent protection. And of course, you also just don't want your confidential information out there. If you put a formulation or a small molecule into ChatGPT, that small molecule could pop up in somebody else's search, and you want to keep that proprietary.
So when you're working with MTAs (Material Transfer Agreements), with confidentiality agreements, with license agreements, I think the practice will eventually coalesce around having some explicit language in the confidentiality provisions that addresses use in large language models.
[01:02:34] Speaker A: Excellent point. And just to piggyback on that a little bit: if you're putting something on ChatGPT that you don't intend to patent but want to keep as a trade secret, well, then you've just told the entire world, and there's no secret left anymore. That's how you have to think about it.
Rich, there's a question here. We'll turn to the questions; there are some good ones in the chat.
It's from Stefan.
What exactly can you patent if you're developing a platform slash company that incorporates AI? What is protectable about that?
[01:03:13] Speaker C: Well, hi, Stefan, thanks for asking. I'm an intellectual property lawyer, so I think everything can be protected, right? That's just the attitude I approach these things with.
You can protect the sequence of steps, you can protect the routines, and it's better if you can connect the algorithm to something tangible.
You do have to avoid the dreaded Section 101 rejection, which is for patent-ineligible subject matter.
But really, the best way to approach it is to get a consultation with a professional and discuss what it is you're trying to do. Then we can set it up to give you the best options for protection.
We've patented, or at least filed applications, so we're still kind of waiting; the jury's still out on some of this stuff. But we believe that even a pure AI invention can be protected, and it's part of the job of the intellectual property lawyer to figure out the best angle and which approach is going to offer the broadest protection. So my answer is: whatever you're thinking of, whatever you're trying to protect, is probably protectable, but it's also going to depend on the environment that you're working in.
[01:05:02] Speaker A: Good, good answer. Sheila, there's a question here.
What can a patient do to protect themselves against the dark side of AI? For example, when Sheila was talking about patients having claims denied unjustly, what can a patient do to protect themselves?
[01:05:19] Speaker B: It's not so much the patient being denied, because the claim is denied only after the services have already been provided. In this case, the client I'm thinking of is a behavioral health provider. They have already provided the services to the patient, but when they submit the claim for reimbursement to the insurance company, the insurance company doesn't pay the service provider. The way it would affect the patient is that the service provider goes bankrupt and can't provide services anymore because they're not getting any money, which is actually a thing that has happened.
So it doesn't truly affect the patient, because the patient is still receiving the services. It affects the provider, who's not getting paid.
[01:06:17] Speaker A: Interesting that. Yeah, I hear you. What about. So Daniel has a.
I guess this is more of your, More of your opinion based on your experience.
He writes: a watermark on AI-generated images is a rational feature. Should clinicians be obligated to reveal the use of AI in clinical decision-making?
[01:06:44] Speaker B: I think that's part of the best practices, and part of the ethical compliance, that I wasn't really able to get into. But I think it's very important that the patient know that a component of AI is being used in providing services to them, because they're coming to a physician or other kind of health care provider to receive services, and they're expecting to get them from that particular provider, not to have the provider go into a computer and basically Google it to find out what's going on.
You want the expertise, knowledge, and ability of that particular person to be making a decision for you, because ideally they know your history well; they've at least taken a good history, and they know some of the different components that might go along with whatever condition you're coming to complain of.
So if there's going to be an AI feature of the diagnostic tool, that needs to be disclosed to the patient.
[01:07:56] Speaker A: Yeah, I mean, I tend to agree there.
[01:07:58] Speaker B: There has to be informed consent, really. In the ideal situation, there has to be informed consent, and the patient needs to understand it.
[01:08:08] Speaker A: Right.
[01:08:08] Speaker C: I think the informed consent is really the key.
Not too long ago, somebody raised the question: would you rather be seen by an AI doctor that's 100% right or a human doctor that's 90% right?
And when it comes to the delivery of medical services, I'm not sure I would want to hear bad medical news from an AI.
I think I would rather hear it from a person.
Right.
But getting back to the question: I think we all in the professions, lawyers as well, do have an obligation to disclose uses of AI, so people know what it is that they're getting.
I see so many fake videos on YouTube now, I don't know what to believe. Half of them, I mean, it's just crazy. And I've stopped watching a lot of stuff because I don't know if it's real or not. There's no disclosure requirement there that forces people to tell you. I'll send a video to my kids and they'll be like, Dad, that's AI, you've totally been duped. And so it's pretty embarrassing. But in any case, I do think we need to come to some sort of policy. And like Sheila said, informed consent, I think, is the key in the medical profession.
[01:09:46] Speaker A: Yeah. So we're almost out of time. By the way, Juan, I know you have a bunch of questions about copyright. I urge you to maybe contact me directly. I think we are in touch, Juan. So maybe we schedule some time and we can talk about some of the copyright questions, which are a little bit out of the scope of today's session, but thank you for them.
Just final thoughts from Sheila and Richard.
Just in general, where are we going with this? Are we going to have to regulate the ethical use of AI? What do you see in the future?
So, Sheila, I'll let you go first. Rich, you can go second, and then we'll wrap up.
[01:10:33] Speaker B: Well, I think realistically there's going to have to be regulation of AI, more robust regulation, because having it on a state-by-state basis, which is essentially what's going on right now, is kind of a recipe for disaster.
And especially in health care, because you're dealing with people's health, which is kind of important to them for the most part, there's so much potential for abuse that you really want developers who are going to treat it in an ethical fashion. One always hopes that people will do that spontaneously, and there are some people who clearly will.
You know, I've met some developers that are really concerned about making sure that they do it right.
But then there are a lot of people who are kind of like, hey, this is a good way for me to turn a quick buck, like there are with anything. Because those people exist, and because the appropriate and ethical use of AI is so critically important to patients, I think some level of regulation is where it's going to have to go.
[01:11:59] Speaker A: Yeah, agreed.
[01:12:00] Speaker B: It protects people against their own worst nature.
[01:12:03] Speaker A: Exactly, exactly. Rich, final thoughts?
[01:12:07] Speaker C: Yeah, I mean, I think that one important feature of AI, at least in certain areas, is transparency: the rules that they use to train AI, especially in the areas of health care and mental health.
I think the companies that are creating these rules and databases need to be transparent about those rules and how they're created and what they're doing to ensure that people aren't being harmed.
And of course, these companies are treating these programs and training methods as confidential, because they don't want their competitors to take what they've created. But it's important for the public to understand how this information is being tabulated, how it's being used, and how it's being delivered, especially in really sensitive areas. So while I agree there should be some regulation, I think we also need higher levels of transparency to understand how AI gets to these answers. Go back to the tragic situation of the depressed man.
You know, we can either count on the companies to regulate themselves or we can ask them to show us what they're doing.
And maybe that's also a way to prevent those kinds of tragedies.
[01:13:48] Speaker A: Yeah, excellent final thoughts.
By the way, I just want to say thank you to everyone for attending. Thank you, Sheila. Thank you, Richard. Thank you for this discussion. If you have any specific questions for Sheila or Rich, reach out to them.
They're happy to chat.
There's a lot to break down here. We had to keep it at kind of a 30,000-foot view, but to get into specifics, just reach out. We will be back in February, again on the last Thursday of the month. Our next ESS is all about accounting and finance for startups and entrepreneurs.
And so we have two amazing speakers, and I think you'll all really enjoy it. Again, thanks, Rich. Thanks, Sheila. Thanks to all. Have a great week and weekend, and we'll talk soon. See you soon.
Bye, everyone.
[01:14:42] Speaker C: Thanks, David.
[01:14:43] Speaker A: Thanks, everyone.
Bye.
[01:14:45] Speaker C: Bye.