GovFuture Podcast



LLMs & AI Generated Content in Government, Interview with Scott Beliveau, USPTO [GovFuture Podcast]

May 16, 2023

Do you know what the US Patent and Trademark Office is doing with AI-generated content and large language models such as ChatGPT or Google Bard? And how will this impact the US patent process? Building upon the panel discussion at the April 2023 GovFuture Forum, we interview Scott Beliveau, who is the Branch Chief of Advanced Analytics and lead product owner of data and analytics at the United States Patent and Trademark Office. In this podcast episode, Scott digs a bit deeper into how the adoption of transformative technology in the public sector provides both risks and benefits, and how each needs to be weighed when it comes to adopting emerging technology. He also shares how the public sector can take previous lessons learned and apply them when looking to use large language models (LLMs) and AI-generated content.



If you enjoy listening to this podcast, please rate us on Apple Podcasts, Google, Spotify, or your favorite podcast platform. Also, if you're not already, consider becoming a GovFuture member to take advantage of all the community has to offer, including access to a diverse network of government innovators, opportunities to collaborate with government agencies, exclusive access to events and resources, and a platform to have a voice in shaping the future of government innovation. To sign up, go to govfuture.com/join.



Show Notes:



Trimmed Episode Transcript: (note there may be transcription errors or mis-attributions, so please consult the audio file for any potential errors)



[Kathleen Walch] We’re so excited to have with us today, Scott Beliveau, who is the branch chief of Advanced Analytics and lead product owner of data and analytics at the U.S. Patent and Trademark Office (USPTO). Welcome, and we’re so excited to have you with us today, Scott.



[Scott Beliveau] Oh, thank you. Thank you very much for having me here.



[Kathleen Walch] We’d like to start by having you introduce yourself to our listeners and tell them a little bit about your background and your current role at USPTO.



[Scott Beliveau] Oh, a man of many hats at the USPTO. Generally, I think all of them surround or touch in some way, shape, or form on data, and on really constructive uses of that data to help facilitate our mission at the USPTO. Our mission is the issuance of high-quality patents and trademarks in a timely manner, which helps bring all the good things around you, new innovations, to life. To that end, those innovations, little known fact, support about 41% of our nation's GDP. And in my role within our Office of Chief Technology, it's: how do I capitalize on America's innovation and the data surrounding it to bring better services and other use cases to our customers?



[Ron Schmelzer] Well, that's fantastic. I think one of the great overlaps of the USPTO with this whole idea of government innovation is that this is what the agency is all about. The whole purpose of the USPTO is to facilitate transformative innovations and make them a valuable part of the economy. The patent power is established in the very first article of the US Constitution.



[Scott Beliveau] Article one.



[Ron Schmelzer] Yeah, it's very key, so people will realize how important this is to the fabric of the US especially, but to this ecosystem in general. So getting into it a little bit more: we've been talking about some of the transformative technology that's around and some of the great things we all see in the public press, and maybe in our own experience. But of course, there are also some risks. So maybe you could talk a little bit about the risk side. What risks have you previously seen from the adoption of transformative technology in the public sector in general?



And maybe, how can we take some of these lessons that we learned, perhaps the hard way, when looking to use some of these new emerging technologies, such as large language models and AI-generated content, in what we want from our public sector interactions?



[Scott Beliveau] Yeah, so for the time being I try to categorize the risks into three buckets: one being trust, another being transparency, and the third being truthfulness. Looking at our role in the public sector, all of us in the public sector have different roles, missions, and responsibilities, and it's extremely important; sometimes lives are at stake when it comes to making decisions. So in making decisions and looking at things like large language models, oftentimes what we'll see is that large language models may be full of bias. They may be trained on data where we don't know where it came from. It may not be from reputable sources.



It may be either intentionally or unintentionally malicious. But in our role as public servants, it's extremely important for us to maintain the trust of the public. So when we say we think there's a storm coming, or this particular drug is beneficial, you're able to trust that message.



You trust that source. The second part of that is really transparency. When we're making decisions in public, it matters, because the entire IP system fundamentally is based on trust and transparency: in exchange for explaining your idea, you can get an exclusive right to it, and then people build upon it. A particular aspect of that is you really need to know what goes into a decision, and to have the openness to say: not only do I trust this, but I can track it, I understand it, I can follow the facts that led to that particular decision. And then finally there's truthfulness. A lot of the models we've seen, and they're getting better every day, sound truthful. They sound very truthful, and are sometimes very convincing, versus going back to something like Eliza in the 70s and 80s, which sounded a little quirkier. So the risk is really knowing that a piece of information, or a decision, or the basis for a recommendation, has a solid foundation under it.



It's such that people can make good decisions based on it in their lives.



[Kathleen Walch] Yeah, I like your three T's. I also like your Eliza reference; for our listeners that don't know, Eliza was one of the original AI chatbots. So, the USPTO has taken a stance that not many other agencies have with regards to the use of large language models. In what ways, if any, are you seeing government agencies using large language models such as ChatGPT and Bard? And can you share with our listeners what decisions the USPTO has made on their use, and why?



[Scott Beliveau] So I think a lot of agencies right now are taking a very cautious approach. It's a rapidly moving technology, as we're all well aware, and the policy governing it, sort of the guardrails for particular uses, hasn't quite been established yet. So as a result, a lot of the use cases we're certainly looking at tend to be a little more customer-service oriented, looking at more efficient information retrieval.



Someone was telling me an example: if you have a large amount of documentation on maintaining, say, tanks or something like that, being able to help the facilitator go through that information more quickly has been one of the use cases. Within the USPTO, in our unique role overseeing at least a portion of intellectual property, there's definitely a lot of concern with respect to our use of it. Number one: when people use these models, particularly ones that are publicly accessible, we're not sure that the information they were trained on necessarily respected intellectual property rights. You see that in Europe right now, where there are cases and discussions going on about whether disclosure of copyrighted material in the models is going to need to be required. So we're not quite sure whether some of these things respected those intellectual property rights, and it's a little on the nose for our agency, which respects and protects intellectual property rights, to use them. And the other concern, certainly, is that we don't want a scenario in which somebody's whole lifelong dream of being an inventor in their garage slips out because they're using one of these technologies, and their intellectual property then becomes part of someone else's language model and gets repeated to another person. So in looking within our agency, we've taken, I think as you said, the bold measure to actually say we don't want people to use it.



Our employees, as well as our contractors, have basically been told that under our existing rules-of-the-road policy for internet usage, it doesn't fall within the scope or boundary of safe, effective use for us. That's not to say we're not experimenting with it in sandbox environments, because we certainly are; we'd be remiss not to take advantage of these innovations ourselves. But we're trying to do so in a manner that is safe and effective, and it goes back to those three Ts, as mentioned.



[Ron Schmelzer] Yeah, it's sort of interesting, because we've been covering AI for such a long time. Some of our GovFuture listeners might recognize our voices, because we talk about AI a lot on things like the AI Today podcast. But for sure, it feels like we've reached some sort of tipping point. AI has been simmering in the background since the beginning of computing, the beginning of the 1950s, but we've crossed some point of having a critical mass, maybe, of well-enough-trained models that can do things that are pretty good, such that the challenge now is just being creative enough to think about: well, how can I apply this?



Because if you can think of that creative application, the tools are basically there now to do it. So it's interesting that the tipping point has been reached from that perspective. We're nowhere near the singularity, so I'm not going to talk about that tipping point. But it does bring up these ideas, because people are starting to come up with all these really interesting and creative uses that, of course, start to blur those lines, and of course raise the ethical risks, the things we talk about a lot when we're trying to make good use of technology.



So maybe we could talk about some of the ethical risks, some of the things we’ve been seeing around these transformative technologies, and you’ve already expressed it a little bit, but how the government is working to address these risks, and maybe areas that you think we can do a little bit better here, especially in the short term.



[Scott Beliveau] Yeah. In terms of the pervasive nature of it, I usually use my mom as a good example. My mom in Rhode Island is asking me about it. She's still a little scared of computers, admittedly, but when she's asking me about it and talking about it, it sort of hits that zeitgeist moment of: oh my goodness, this really is that tipping point of a thing.



So looking back at someone like my mom, the concern is, as a government employee: what is the role of government in society when it comes to protecting someone like my mom? She's not necessarily the most technically savvy person, so when we start having some of these dialogues about, oh, we're going to move up to higher-value work: well, not everyone is always going to move up to higher-value work. We always make this assumption, I can't remember the term for the fallacy, that because there were horse-drawn carriages and those gave way to cars, those sectors of society moved on to other higher-paying, higher-value things. Is that going to be true moving forward? That's an ethical question I certainly have concern about. On our panel we talked about how maybe these are job segment killers, but that still doesn't give me a warm fuzzy feeling when I think about people like my mom being one of those people in the killed segment. So in government, how do we promote the use of these disruptive technologies in a way that doesn't become a company-by-company arms race for new products and market share, and instead take a step back to the more fundamental, fungible use of this technology? Because, as you said, how much further are we now? Right now it's very easy and helpful for information retrieval, and we're trying to come up with ideas for new ways to use this technology. But at what point does the technology get to the point where it's coming up with the ideas? We could ask it for the use cases: how do you think we should use you?



I'm sure if you asked one of the models right now, it would have some suggestions, but those suggestions are based primarily on what it's learned from other models, as it were. Yeah, the funny thing is, for a long time people have been wrestling with this issue of: is AI a job killer? A lot of folks will think, well, this is sort of what happens with technology. There are always waves of new technology; industrialization causes disruption, but it always frees us up to do other things, and those other things create a bunch of new jobs. I think the challenge we face now is that I'm wondering a little bit about that, because AI is such a broad technology. It's not just machines to do weaving or machines that can harvest faster; it has such broad application that it's impacting so many categories at the same time. I actually just read this week that Wendy's is testing the replacement of their drive-thrus with a ChatGPT-based chatbot, which of course does a much better job than the previous chat agents. People are not responding to that in a positive or excited way. And there are other things like that. So I think: what role do government agencies have?



Is it possible to act without waiting for the economic impacts to happen, because by then it might be too late? There's a more philosophical discussion. I think it was MCI or some similar commercial where it's like: well, what TV shows do you have? I've got every TV show and every channel, instantly, when the guy went to the motel. Thinking back to when my son was growing up, when he first went to the movie theater, his first reaction was: why isn't it starting? I'm telling it to start.



I have to wait for this? That idea, that concept or construct, while it's not AI-based, is a little bit of a technological analog. Is it convenient for us to have instant video on demand of anything we want? Yeah, sure. Is it a good thing? Well, we're certainly watching more TV. Are we? We're not necessarily watching more good TV. We're filling that space.



Yeah, so when we look at it, that is maybe an analog to some of the AI technologies as they move forward. What are we going to fill that extra time with? Are we all going to start going to the gym and exercising like we say we should?



Or are we going to spend it watching more videos on demand?



[Ron Schmelzer] Yeah, I think that’s a big question. Right, Kathleen? We talk about this all the time.



[Kathleen Walch] Yeah, and I was going to say, too, it also goes back to the panel that you were on. We'll make sure to link to that in the show notes, in case anybody was not able to attend the April 2023 GovFuture Forum where Scott was on the panel. These are really great and wonderful conversations to be having, and we need to make sure we are addressing this. On the panel, we had also talked about the use of ethical and responsible AI and putting guidelines in place.



There aren't really laws right now around that. But how are we going to craft them? And who's coming up with them? And are some of those guidelines that we put in place, maybe eventually laws, going to say we cannot have mass unemployment because of AI and various technologies?



But we're going to need to keep these jobs, keep people employed, keep them working. Because it is one thing to have more free time, but it's another thing to have too much free time, right? Like in WALL-E.



[Scott Beliveau] Exactly, right. And to expand on that point a little bit as well: AI is certainly global.



I think this was also touched on in the panel. Our policies may or may not put us at a competitive disadvantage vis-a-vis other countries. Is that a good thing? Maybe, maybe not. It sort of depends on some of the deeper questions about society as a whole, and how we want to manage the role of technology within society. And are these questions new?



Certainly not. They've been going on for a while. But what's sort of interesting to me is, I go back to IBM Watson beating Ken Jennings. It was a big, exciting thing. There was a lot of talk about it for a few months.



And then it kind of died down. This time, using my mom as my barometer, as it were, the fact that she's talking about it tells me maybe this is the time for some of those deeper conversations on the role of technology, because we're getting closer to that precipice than maybe we're comfortable with. Or we've gotten a little bit ahead of ourselves vis-a-vis society, ethics, et cetera.



[Ron Schmelzer] Yeah, I think it's really interesting. Bringing it back to this idea of government and the purpose of the public sector: really, the purpose of government is to facilitate all the things that people want out of their lives and to be there for them, which brings back the very first article in the Constitution and all those laws that are necessary and proper. Thinking about AI and how it's transforming society actually puts government people and public sector agencies around the world in a very central position to think about not just what should be allowed, what should be regulated, what should be controlled, and what should be common, but also what should be free, what should be open, what should be proprietary, and what should be protected. That's what the USPTO is all about: giving people that protection, because that gives them the ability to invest with some confidence that they're not putting something out there that will be taken or stolen or used for other purposes.



But a lot of these AI things blur those lines. If I'm fine-tuning a ChatGPT model to do something specific, do I own that? Do I have any ownership stake in it? Does ChatGPT own it? Is it open?



Is it protectable? Is that a good thing? Is that not a good thing? I think there are a lot of questions that remain open. I think you're actually in a great agency at a great time, in that you're thinking about some of these things, right?



[Kathleen Walch] It’s always a good time to be at the PTO. Shameless plug for the PTO.



[Scott Beliveau] Shameless plug.



[Kathleen Walch] You know, this is a wonderful conversation. We had a wonderful conversation at our GovFuture Forum event, too. But we like to wrap up this podcast by asking you: what do you see, or hope to see, as the future of technology and innovation in government? And we'll let you answer that however you'd like.



[Scott Beliveau] I was joking before: I would love a good cup of coffee in the future of government. That would be nice. But, and this is speaking personally, I think having a better coordinated effort within the federal government on this particular technology would be extremely helpful.



There are certainly agencies, committees, policies, and other things being developed. What I think we really need to look at is the construct of a national XPRIZE-like research challenge: identify some key core technologies that we see as strategic, and then craft the challenge specifically to enable and empower small businesses and startups.



And the reason I say small businesses and startups is that that's really where we see a lot of that challenge innovation coming from. Now, where is the gap? The gap seems to be that there's basically this huge inequity in compute, where a small business just doesn't have access to it. They may have a great idea, but they don't have the compute, or they don't have a massive trove of data.



And there are certainly policy discussions about trying to create that within the federal government, but they're not focused. We're not focusing that effort on a common challenge across the board within the agencies. The next part is clear policy: what's within the regulatory guardrails and what's not? Because any time you're going to bring anything to market, that uncertainty certainly becomes an impediment. And then the final part really is procurement: procurement reform within the federal government, such that when these companies come up with these cool ideas, we're able to take advantage of them more quickly and readily within the federal government, to provide that benefit back to the larger public. And I think the role of government, as you were saying, Ron, is to be good at these sorts of big things: getting big compute, getting big access to data piles and making them available to constituents, and helping to frame that problem question, that big challenge, as it were. That's something I'd hope to see.



[Ron Schmelzer] Well, great. We do too, and so do a lot of our listeners, who are government innovators across the whole ecosystem, from startups building things that may be good for the public good, to the very largest of corporations, to multinational organizations and agencies. We know we have an international audience, so we encourage our international audience to reach out to us and let us know; we love hearing from our podcast listeners, and Scott's fantastically open as well. So reach out to us, reach out to Scott, and provide your feedback. We could definitely spend hours on this, and in one way or another we will, but I do want to be mindful of our podcast listeners' time. So I want to thank you, Scott, so much for being on the GovFuture podcast and sharing your insights, experiences, and thoughts with our audience.



[Scott Beliveau] Thank you. Thank you very much; it was a great time.



[Kathleen Walch] Yeah, thank you so much. We always enjoy talking to you and the conversations that we have. And listeners, if you've enjoyed listening to this podcast…

