Wealth Actually



FAMILY OFFICE AI

April 17, 2025

Family Office AI has become a dominant theme at the fancy dinners where families and their advisors chart a course for incorporating new technologies. As wealthy families grapple with the risks and opportunities of AI, institutional rigor and structure haven't kept pace with the often informal world of family offices. This is a mistake. High-end governance must play a part in the family office AI space.



https://youtu.be/n_KHB_gOc9M

We're going to be talking to TIM PLUNKETT, the founder and managing partner of Plunkett PLLC. He advises families on structure, governance, and the development of procedures around these exciting, but potentially dangerous, concepts. We discuss best practices for family offices as they deal with the artificial intelligence theme.


Family Office AI

“When looking at AI adoption in family offices it is important to remain true to the culture, operations, reputation and underlying trust among those who built the Office in the first instance. Remain true to your principles and don’t get distracted by the new toys.” – Tim Plunkett


Family Office AI Transcript

Frazer Rice (00:01)
Welcome aboard, Tim.


Tim Plunkett (00:03)
Hey Frazer, how are you doing? Thanks for having me.


Frazer Rice (00:05)
Doing terrific. We're in the midst of Trump tariff season, so it's a little crazy, I'm sure, for everybody. So we're going to talk a little bit about family offices and artificial intelligence. Both themes are big unto themselves, but how family offices integrate with the space is interesting; it's an area where family offices can be very informal, and...


Tim Plunkett (00:11)
We’re blessed.


Frazer Rice (00:33)
Getting some institutional rigor around them is important. So, to that end, you have a lot of broad experience advising businesses from a governance perspective. Maybe describe your firm for a few minutes and what you do.


Tim Plunkett (00:47)
Sure, thanks again. I have three pillars in my firm. I can only do certain things well, so I try to limit what I do. My training is as a litigator, so I consistently think of things in terms of having to explain them in front of a judge, which helps with a lot of the risk work that goes hand-in-hand with AI and governance.


The second part is that I've done a lot of government relations work, which means working across disciplines and organizations, advocating for certain outcomes, and trying to create business environments that are efficient, compliant, and ethical. Again, all of that ties back to the same foundations in the world of AI. And the third component is obviously the AI work I do, which came out of working in data privacy and security over the last 10 years; the natural flow was to move towards this sector. Today my practice is


mostly helping companies learn how to implement strategies that are fair, equitable, and just, but also compliant with the laws, while keeping pace with technological change that is moving at breakneck speed. It's an incredible place to be right now, with the world of opportunities in front of all of us. It's very exciting.


Frazer Rice (01:57)
So when you’re canvassing companies and families that are invested in them, what are the use cases that you’re seeing?


Tim Plunkett (02:04)
So use cases are, I mean, kind of all over the place. If you look at it in terms of how you define the practices, there are operational use cases: things like document intelligence and automation, expense tracking and anomaly detection, and dashboard creation for organizational purposes.


You have investment use cases for deal sourcing, portfolio risk management, and alternative data sourcing and analysis. You have governance use cases for succession planning and philanthropic impact analysis.


So there are a lot of different use cases out there, and each one has lots of different levels beneath it. And then there's back-office integration in the family office space which, like you said, varies widely.


Some places are single jurisdictions, some are multiple jurisdictions, some are international, some are local, some are really formalized, and some are not. And so you have basically two buckets that everything fits into.


One is AI adoption for operational efficiency, and one is AI for investment. Those are viewed and treated very differently, though they obviously overlap. But when you're talking about getting down to the fundamentals of building the rigor around these things, and what institutional rigor looks like, that's where everything emanates from.
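
To make one of those operational use cases concrete, here is a minimal Python sketch of the expense tracking and anomaly detection idea Tim mentions, using scikit-learn's IsolationForest. The column names and data are hypothetical, not drawn from any actual family office system.

```python
# A minimal sketch of expense anomaly detection, assuming a hypothetical
# expense ledger with two illustrative columns.
import pandas as pd
from sklearn.ensemble import IsolationForest

expenses = pd.DataFrame({
    "amount": [120.0, 95.5, 15000.0, 110.0, 102.3, 98.7],
    "vendor_id": [1, 1, 7, 2, 1, 2],
})

# IsolationForest flags points that are easy to isolate from the rest of
# the data (here, the 15,000 outlier) as candidates for human review.
model = IsolationForest(contamination=0.2, random_state=0)
expenses["anomaly"] = model.fit_predict(expenses[["amount", "vendor_id"]])
print(expenses[expenses["anomaly"] == -1])
```

In practice a flagged row would go to a staff member for review rather than being acted on automatically, consistent with the human oversight Tim emphasizes throughout.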


Risks

Frazer Rice (03:31)
Got it. So, you know, it's difficult to put a roadmap around this; it's all evolving so quickly, and just when you think you've got everything in mind, some new use case pops up. As a litigator, as someone who is trying to advise companies and families on governance so that they stay safe from the various risks out there, how do you group those risks?


Tim Plunkett (03:54)
Well, there are risks that come from compliance. Okay, you have regulatory risk. You have, you know, reputational risks, operational risks. Then you have the obvious investment risks, due diligence, things like that. But the fundamental thing about family offices is that they're about family, and they're about protecting that asset more than anything else, in my mind at least.


So what are the risks that go with that? Those are family reputation risks that you want to mitigate as much as possible. There are obvious data risks and security risks: once you start pulling data into one place, it becomes a more attractive target.


Family offices make attractive targets because people seem to think they don't have strong data governance or security strategies, or that their security may be decentralized. And there are all kinds of risks once you're inside the office as well: between family members, between generations.


One generation looks at technology one way, and another generation may look at it differently. That creates risk from an investment perspective and from an operational perspective. The world is fraught with risks, but for pretty much every risk there's a solution. A lot of that comes down to building the governance strategy properly from day one and focusing on what your foundational documents should look like, starting with your AI governance policy. That is, for lack of a better term, your constitution. That's what guides you.


Frazer Rice (05:32)
So a client walks into your office. They've got some level of complexity, they've got an interest in the space, they've got wealth and assets. Maybe take us through your process: how do you get them to get their arms around the issue and then put structure in place?


Tim Plunkett (05:50)
I think the first thing to do in talking to anybody is to find some common ground. There are certain principles that guide people: decent people, professionals who hold licenses and the like, or who have certain mandates to do certain things.


Tim Plunkett (06:08)
I think that when you’re looking at building the bridge, the first thing you have to establish is trust. And trust is something that is in the background of every decision that’s made in the world of AI.


So once you’ve established a level of trust, you can start talking about philosophically what the family is looking for, whether it’s from an investment perspective or a philanthropic perspective. But you have to understand what the family is all about, what the family office is all about and their mission.


Before you can start putting legal tools or technological tools or anything else in place, you have to have that trust at the beginning.


Once you do that, you start to build your frameworks, your legal frameworks, and, as I said, your AI governance policy becomes your constitution. The good news is that there's so much information available now on how to set up governance programs.


It's not that hard. Whether you're a small office or a big office, foreign or domestic, whatever, there are frameworks for everything. But at the foundational level, the first thing is to get the trust together and to get the AI governance policy document together. If you go down the line from there, we can get into what the specific guardrails are and what you're trying to accomplish.


Frazer Rice (07:26)
Sure, and let's do that. One of the things I think about when we go from paper to operation is that, many times in my world, the trusts or the wills are well drafted and stand up to lots of different things; however, the people administering them are the weakness on that front. When you're thinking about the guardrails and the legal structures, how are you advising these families as far as staffing?


Tim Plunkett (07:45)
Right.


Okay, so staffing, again, is about knowing your people. It's about knowing what you have, doing an inventory of what's inside your organization, who's good at what. And there are legal frameworks you put around those roles based on what people are good at and what they aren't. So when you're looking at staffing in particular, you basically want to build a structure where there's accountability.


There are expectations in the office for returns on investments and things like that. And then there are also expectations about how these places behave and how they're viewed publicly.


So you have to define the roles and responsibilities very clearly. You're gonna want an executive leadership team to begin with; that's a strategic oversight role. Then you're gonna have ethics officers or maybe an ethics committee, depending on the size and structure of your organization. You'll have technical teams, which would be your data scientists or your engineers or your developers.


You might have a risk management team that identifies very specific AI risks they want to control, or other market risks they want to account for. And then you have the people in the organization who are actually using things, whom I would call the end users, and you always want to be soliciting feedback from them.


But what you're really looking for, in addition to skills, are the people qualities, right? Because AI is a team sport.


And that's the one thing that is really essential. Teams win and lose together. Sometimes teams have role players, and sometimes teams have superstars, and they're not always gonna like that, but they have to have the same common mission. And so what you really wanna find is multidisciplinary people who can work across your organization.


In some family offices you have people who wear multiple hats. So as you start building out your framework, you want to look at the team that you have and say: Bobby does risk analysis really well, so maybe Bobby should be in touch with the compliance people. You know, Sally does marketing really well; maybe Sally should be talking to the vendors who are going to be doing the marketing. So it depends a little bit on personality, and on trust: trusting your people to make decisions and putting those teams together.


It's critical. I could talk about the roles and responsibilities of each of those positions if you'd like, but as an overview, that's what you want to put in place.


Best Practices

Frazer Rice (10:27)
So, on a general set of best practices: how do you think about things if a family office walks into your office and says, okay, I understand the need for constitutional frameworks and legal structures, and for the right people who understand the difference between an LLM and an MBA, which is probably a bad example, because those could be legal versus business designations.


Tim Plunkett (10:52)
Yeah.


Frazer Rice (10:56)
Making sure that the people are right. But what are the bullet points in your mind that are things that families should really be thinking about?


Tim Plunkett (11:06)
The highest-level thing to me is always the family risk. Like I said, under that bucket you have reputational risk. You don't want to be aligned with certain products.


If you're investing in AI, okay, let's take it from that context first. If you're investing in AI, you're generally investing, not building; in a lot of places you're not building, right? So you're buying into things: funds or whatever else like that.


If you're making direct investments into companies, that's again fundamentally about people. You have to align yourself, from a reputational perspective, with people you can trust and whose plans you believe. If you're going to be sharing data or giving anyone access to your systems, you don't want your wills, your itinerary, your discussions with a concierge somewhere, any of those things exposed.


So when you put yourself in a position to be co-investing or working side by side with somebody, you have to know what their security profile looks like. You have to understand how they're audited. You have to understand their history.


Have they been serial litigants? Who are you dealing with here? And on the data side, you have to understand where the data's coming from and how it's been tested. Has it been through several iterations, or is this a one-time thing? Is this the first time you're meeting this data? Has it been anonymized?


I mean, there are tons of different ways to look at risk from an investment perspective. But when I think of a family office, the first thing I think of is the family itself and the risk around them: protecting them first, and then building the business out from there.


Frazer Rice (12:56)
Let's talk about the concept of the audit for a second, whether you have a vendor, you're tracking an investment, or in many ways even tracking your family's whereabouts. How do you think about that audit piece, and who should be doing it? In many ways the family isn't doing it themselves; they'd be hiring people to do it. Where does the check and balance come from?


Tim Plunkett (13:21)
Yeah, so again, if you're looking at it from an operational perspective inside the family office, at the executive team level you always want somebody accountable that you can point to, so the family can say, you know, that person is responsible for my AI strategy.


That person answers for it. You know, a deputy can take over the role if something goes wrong, but that person is fundamentally in charge. So you always want strong executive leadership in the organization that you can point to from the family's perspective.


Part of that function will be the audit function of looking at the technology. I mean, when you're talking about vendor contracts and things like that, which are under the control of executive leadership, you're going to have to have clauses in there for data protection for the family.


You're going to want to know about breach notifications and the kinds of audits you can do. You're also going to be, if you're very proactive, and you probably should be, running audits and simulations within your organization.


Tabletop exercises, penetration testing, trying to find where your weaknesses are. All of that will be accountable through one person, ideally. And the other part of it is you want to have risk mitigation tools in there. One could be cyber insurance, one could be other forms of insurance, but education and testing are also underrated.


Family members can become disassociated from their money when they trust other people to do things with it. They should have an idea of what they're actually gambling on and what's happening, even if the risks being taken are very prudent. If they're not well informed about the topics, and AI is a hard one because it's changing so fast and the world is unfolding so quickly, then training and education programs inside the family can be very, very helpful.


Frazer Rice (15:19)
The tools are already pervasive. I see it in my practice; we use AI-driven document tools, and we use AI for other types of things. The tools are being used by the staff of the family office and by the family members themselves, both in the audit process and maybe even in an evaluation process around the security protocols. But then ultimately,


it comes down to getting people to understand the limits of the technology, and to me, that's an important part of the education. Like you just described, who does that fall under? Is that the chief technology person? How do they wrangle everybody together to make sure that everyone has a good baseline to work from?


Relationships

Tim Plunkett (16:07)
So yeah, again, I think the people who have the best relationships interacting with the family are the people who should designate who runs the team: the people the family is more familiar with. I mean, I'd rather get bad news from certain people than others.


And so again, that goes back to: who do you trust? Once that person is in place, their job is to build out the team. You do that with an executive-level team that starts the discussion, and then it flows down to the AI-specific use cases you're after.


You build teams around the use cases, and then below them you have sort of a shadow layer of professionals with different skill sets. So you have your legal, your HR, your communications, your compliance people, and the different end users who are going to give feedback. Then you have the people who iterate the models and improve them over time, the ongoing monitoring that happens.


And you have this pyramid that goes from top to bottom. Again, for accountability at the top, you want that one person to be accountable. There are all sorts of tools those people can use. You always want to do sort of a pre-adoption exercise so that you can explain things. Explainability is a big thing everyone talks about in AI, obviously, and you can do explainability assessments; that's part of it.
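
As an editorial aside, here is a minimal sketch of what one small piece of a pre-adoption explainability assessment might look like in code. The model, features, and labels are hypothetical, and permutation importance is just one simple, model-agnostic way to document which inputs drive a model's decisions, not necessarily the method Tim has in mind.

```python
# A hedged sketch of an explainability check for a hypothetical
# expense-flagging model, using scikit-learn's permutation importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Hypothetical features: [amount, vendor_risk_score, days_since_last_txn]
X = rng.random((300, 3))
y = (X[:, 0] + 0.3 * X[:, 1] > 0.9).astype(int)  # stand-in "flag" label

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffling one feature at a time and measuring the accuracy drop shows
# how much the model relies on it; the printout is the kind of artifact
# a governance file could archive before adopting a tool.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["amount", "vendor_risk", "recency"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```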


Frazer Rice (17:42)
Maybe take us through a challenge where you were brought in to help a family make sense of a situation: they were getting involved in technology, maybe AI specifically, went in without the requisite understanding, and needed to be brought to a better place.


Tim Plunkett (18:01)
Sure. I had an example of somebody who came to me; this was in the educational AI context. Some family offices like to do things differently than others and have different mandates, and I had two with mandates to spend money and try to develop AI tools specifically for education. I had an existing relationship with one of them, and then someone else came to me and introduced me to the other.


Putting them together sometimes is a good idea; it's not always a great idea. Parts of those marriages last and parts of them don't. So I had a fairly sophisticated existing client, and these are very sophisticated companies making investments globally.


I don't think they thought they were on an equal playing field, so there was a personality issue at the beginning. But then we started talking, and we got into systems and philosophy. You know, there's personality, there's philosophy, and then there's operations.


And we started getting into the practical implications of doing things certain ways. We found there wasn't a lot of alignment between the teams, so you basically had a situation where you can't force an integration.


But I haven't had a lot of situations involving security breaches at family offices. I've had them in a lot of other contexts, and in those situations, you know, there are obviously breach notification requirements and other statutorily mandated requirements out there, but putting those in place, turning them on, getting them operational, bringing in forensics teams and things like that,


that's hard. I've had other situations where, and this is not technical, this is not AI, but when you bring teams together under the auspices of investing in AI, you sometimes bring in their vendors too, right? And we had an accountant who was essentially crooked, and we were able to establish that there was some fraud happening.


That's a reputational risk to the family who had that person in their backyard, right? They're bringing that person into another transaction; that doesn't look good, and it makes one side question the other. So I've had those situations. The best thing you can do is rely on the law and on your reputation as an organization, because that will buy you credibility even if you have a hard time with another party.


If you can look around and say that you’ve never had these problems before and you can look someone in the eye and tell them that this is not familiar to you and you’re being honest, then you can buy a little leeway there.


Frazer Rice (20:58)
On the investing side, to take it in more of a positive direction where we're not dealing with chaos, loss, and things being crippled: how do you think about advising clients on where AI fits in their investment portfolio? Less on the investment side of it, and more on how to evaluate the appropriateness of the investment, maybe receiving input from a board that has outside expertise and integrating that into a family's allocation of capital.


Tim Plunkett (21:35)
I think the concept of an outside board and outside advisors is tremendous. The range of what AI covers is vast. Like I said, I've dealt with a bunch of family offices that are educationally focused, and some others, but think about that level of expertise: you say education, and under that there are so many subparts. Or if you're a family that's interested in drug development,


that's a massive, massive area. So bringing in that outside expertise is critical in the world of AI. You have legal expertise, substantive expertise on the actual investment itself, and security expertise. There are so many different levels, and you can't have all of that in your own house.


It's not possible to do. And with the amount of change happening in AI, as fast as it's changing, you can't keep pace internally, I don't think. Even the most sophisticated entities in the world, the largest banks with huge resources, can't keep up with what's happening. So I'm a huge fan of having a board or an advisory board, a sounding board really, for the family to talk to, to develop whatever it is they're looking to invest in.


Frazer Rice (23:00)
Well, and at the board level too, there's how it integrates with things like HR or risk or insurance. Oftentimes in a board situation I've seen people sit in those roles who come from a different avenue. So, to reiterate your point, you attack it not only from a strictly AI technology perspective, but from a domain expertise perspective that you don't necessarily have to keep on full-time salary in-house.


Tim Plunkett (23:28)
Yeah. Everything has to, at some point, go to somebody who has to make a decision for the family. And that's at a board level. You know, there's the outside board we're talking about, but there can also be an inside board.


Like I was saying, the executive leadership board. And then there has to be board-level review of even some of the most basic things, like vendor contracts. The board has to look at the dashboard reviews; they have to look at a lot of different things.


They help you think strategically. And again, like I said earlier, AI is a team sport. If you can bring in better players, you bring them in; if your family can afford to do that, that's what you do. You want to make things as transparent between the family and the project lead as possible, so the board has the best information it can have. That gives the family the best opportunity to make good choices.


Frazer Rice (24:28)
As we wind down here, one of the things that's exciting to me about the AI theme, the technology bent, the advanced use of data, et cetera, is that it's a way to get the next generations excited about either the investment process or the overall wealth creation of the family entity. And it's a way for older generations to speak to younger generations on their terms, from maybe the financial or business side of things to the technology side of things for the younger set.


How do you think about this as a next-generation discussion, and as an overall buy-in and operating feature of a governance structure for a family long term?


Tim Plunkett (25:11)
Yeah, I think you have to facilitate intergenerational engagement in that sense, and it has to be something that's built in. Again, everything goes back to the foundational document, your constitutional document, your AI governance policy. You have to have that in there. Also, as we touched on before with the training and education part of this, having that component gives everybody the same information.


There can then be discussion between the generations: how do they think about this versus how do you think about it? If you ask somebody right now about TikTok, every kid will tell you it's awesome and everyone uses it. Everybody over the age of 50 will tell you it's potentially a national security risk, or something else. So there's a real breakdown in the perception of technology and how it can be leveraged. And data is treated more casually by younger generations; there's a lot of information about that available out there.


So you have to have some mechanism, built into your documents, that requires at least semi-annual, if not quarterly, discussions about the technologies and how they're evolving. And at the end of the day, some of this isn't legal or technical; it's just basic, which is: listen. Listen to how kids these days operate and talk about technology.


If I listen to my son tell me how he drafts prompts for ChatGPT, it's entirely different from how I've ever done it or thought of doing it. It's intuitive to him, and it's not to me. I think just listening to that influence in your family is really important. But there is a way to go beyond the warm and cuddly notion of listening.


There is a legal and mechanical way to make it happen and that’s through your AI governance documents.


Frazer Rice (27:02)
Really good stuff. Tim, how do people find you and your firm?


Tim Plunkett (27:05)
I'm Tim at Plunkett PLLC; the website is PlunkettPLLC.com. On Twitter, I'm Tim the AI Lawyer. I'm on LinkedIn, where I put out a ton of content, really entry-level posts to help people understand AI from a very basic level. Anybody can call me anytime. We're here to help, and we're honored to be here. So thank you.


Frazer Rice (27:33)
Tim, great having you on. Thanks for coming.


Tim Plunkett (27:35)
Thank you.


Outline
Describe practice: "Bringing institutional rigor to an 'informal' space"

Will discuss the first steps in bringing a plan to fruition. 


Discuss the AI Governance Policy as the foundation for “institutional rigor.” Establishes the context for all discussions as this is the bedrock. 


Guardrails in operations
Guardrails in Governance – making FOs more institutional

Will expand on the Policy and then how to identify frameworks/guardrails: where you can select from, the kinds of models that fit among globally available versions, and the context of the FO platform. Focus on foundation building with core principles.


FAMILY OFFICE AI in practice – how do you define it?
Operational Use Cases

1. Document Intelligence & Automation


2. Expense Tracking & Anomaly Detection


3. Family Reporting Dashboards


4. Deal Sourcing & Screening


5. Portfolio Risk Management


6. Alternative Data for Public Markets


Governance & Strategic Planning Use Cases

7. Succession Planning Analysis


8. Philanthropic Impact Analysis


Back-Office Integration


AI-Enhanced RPA (Robotic Process Automation)


Is the genie already out of the bottle?


Data Privacy and information security


Security of assets and family


Unique Data Privacy Challenges for Family Office AI

1. Blurred Lines Between Personal and Institutional Data


2. High-Net-Worth Target Risk


3. Decentralized Technology Footprint


4. Third-Party & Vendor Risk


5. Global Footprint & Jurisdictional Complexity


6. Next-Gen Privacy Expectations


7. Lack of Formal Data Governance


Solutions

  • Data Mapping & Classification, Access Controls & Encryption, Zero Trust Architecture, Cyber Insurance, Vendor Contracts, Regular Audits & Simulations, Education & Training
  • Larger discussion of data security, legal issues, corporate issues
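
As a hedged illustration of the Data Mapping & Classification item above, here is a minimal Python sketch of tagging records with a sensitivity tier so that access controls and encryption policies can key off the label. The tiers and keyword rules are illustrative assumptions, not a legal or regulatory taxonomy.

```python
# A minimal data-classification sketch; record types and tiers are
# hypothetical examples, not a recommended taxonomy.
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4  # e.g., wills, itineraries, health data

def classify(record: dict) -> Sensitivity:
    """Classify a record by simple rules on its declared type."""
    restricted = {"will", "itinerary", "health", "trust_document"}
    confidential = {"holding", "valuation", "vendor_contract"}
    kind = record.get("type", "")
    if kind in restricted:
        return Sensitivity.RESTRICTED
    if kind in confidential:
        return Sensitivity.CONFIDENTIAL
    return Sensitivity.INTERNAL

# A tiny inventory pass: every record gets a label that downstream
# access-control and encryption policies can enforce.
inventory = [{"id": 1, "type": "itinerary"}, {"id": 2, "type": "holding"}]
for rec in inventory:
    print(rec["id"], classify(rec).name)
```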

Lack of Transparency in Tools (are they working correctly/behaving ethically?)

1. Conduct Pre-Adoption AI Risk & Explainability Assessments


2. Discuss a “Human in the Loop”


3. Implement AI Governance Policies


4. Favor Explainable or Transparent AI Models


5. Regular Review and Audit of AI Tools


6. Align AI Use with Family Values


How do you manage FAMILY OFFICE AI vendors?

Natural skepticism meets needs discussion; differentiators; examples of risk presented:


Data Security Risk


Operational Dependency & Continuity Risk (Vendor Lock-In)


Compliance & Regulatory Risk


Confidentiality & Reputational Risk


Misalignment of Interests


AI/Automation Risk


Onboarding/Offboarding & Transition Risk


If I were running a family office, here’s a clear breakdown of best practices and policies I’d adopt when managing vendors:

Establish a Formal Vendor Management Policy


Conduct Thorough Due Diligence


Customize Contracts and Include Key Clauses


Assess & Monitor Cybersecurity and Privacy Controls


Require Annual Vendor Reviews


Integrate AI Risk into Vendor Management


Build Relationships, Not Just Transactions


Optional: Tools & Templates to Use



  • Vendor Due Diligence Checklist
  • Data Protection Addendum (DPA)
  • Vendor Scorecard (to track cost, quality, trust, and responsiveness)
  • Preferred Vendor List with Tiering
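
To make the Vendor Scorecard item above concrete, here is a minimal sketch of one way it could be represented in code. The criteria mirror the list above (cost, quality, trust, and responsiveness); the weights are illustrative assumptions, not a recommended methodology.

```python
# A minimal vendor scorecard sketch; weights are hypothetical.
from dataclasses import dataclass

@dataclass
class VendorScore:
    name: str
    cost: float            # each criterion scored 1 (poor) to 5 (excellent)
    quality: float
    trust: float
    responsiveness: float

    def weighted_total(self) -> float:
        # Trust is weighted highest, echoing the interview's emphasis.
        return (0.20 * self.cost + 0.25 * self.quality
                + 0.35 * self.trust + 0.20 * self.responsiveness)

vendors = [
    VendorScore("Acme AI", cost=4, quality=4, trust=5, responsiveness=3),
    VendorScore("DataCo", cost=5, quality=3, trust=2, responsiveness=4),
]
# Rank vendors by weighted score, highest first, to feed the tiering list.
for v in sorted(vendors, key=VendorScore.weighted_total, reverse=True):
    print(f"{v.name}: {v.weighted_total():.2f}")
```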

AI use in financial decision-making




How it can help:


Guardrails I’d Set:



AI use in strategic and family “qualitative” decision-making




Potential Use Cases:


Ethical/Privacy Concerns:


Governance Actions:



FAMILY OFFICE AI as an investment thesis: how do you incorporate due diligence into the investment decision-making process?

What I’d Look For:


Due Diligence Process:



1. Exit Scenarios:

Portfolio Strategy: Non-financial advice


How do you staff the function?


Should there be a Chief Information and Technology Officer?


Should there be an outside board member to think strategically?


Executive Sponsors

Biz Leadership


Core AI & Data leadership (AI, Data, Core AI leaders)


Execution Team (Finance, Risk, Legal, Security, HR, Comms, Enterprise Portfolio Management)


Working group members on implementation. 


How Do You Staff the Family Office AI Function?

Responsibilities:


Should There Be an Outside Board Member to Think Strategically?


My View: Yes, especially for larger or institutional-style family offices.


Why It’s Valuable:


Ideal Outside Board Member:


Hybrid Option: Advisory Board or Innovation Council


How do you think about this with family and next generation discussions?


Center the Conversation on Legacy, Not Just Capital


“AI, governance, and innovation are tools—but the goal is family continuity, not just asset growth.”


How to Frame It:


Include the Next Generation as Strategic Stakeholders


Tactics I’d Use:


Build Governance That Evolves with Generational Input


Use AI & Digital Tools to Democratize Access and Engagement


Treat AI as a Cross-Generational Learning Opportunity


Sample Messaging to Bridge the Generations:


Where to Find Tim

PLUNKETT PLLC


Human Resources AI

https://frazerrice.com/ai-and-human-resources/

https://www.amazon.com/Wealth-Actually-Intelligent-Decision-Making-1-ebook/dp/B07FPQJJQT/