By Ed Rowley, March 2025
I’ve recently posted a new list of Artificial Intelligence (AI) solutions geared towards fund marketing, sales & IR functions (see the list here). This AI list joins the various lists of tech solutions on The Fund Marketer.
AI has been used by investment managers for some time in the investment function to process large amounts of information, detect patterns and make predictions. I don't believe AI usage is as prevalent in client-facing functions, but it's likely to grow.
On one hand AI can do some remarkable things. On the other, there are meaningful hurdles that limit use cases. In this guide I’ll review how AI could be used in asset raising and client service functions and provide a “common sense” framework for approaching these tools.
The new AI list contains platforms that show a total or partial focus on investment manager sales, marketing and IR functions. It’s not a long list at present.
The AI list doesn’t contain AI platforms like ChatGPT, Microsoft Copilot, Google Gemini, Claude and dozens of others that are industry agnostic. While managers may wish to consider these platforms, they are listed elsewhere and most are well known.
In our broader lists of fund marketing and IR tech (here), many of the solutions have some sort of AI functionality layered on top of existing software. The new AI list only contains “AI-native” solutions that start with AI as their primary function.
For example, a CRM platform that offers AI tools to summarize meetings or draft emails would be in the CRM list. In contrast, the AI list would contain an AI platform that can do various tasks and take actions on top of an investment manager’s CRM.
Almost all AI solutions being sold for marketing/IR use cases (and to the public generally) are Large Language Models (LLMs).
LLMs are fed (or “trained on”) large amounts of existing text. The system figures out patterns and relationships between words and encodes those into algorithms. As a user, we enter a question or instruction (a “prompt”) and the LLM uses its algorithms to generate a text response.
Answers generated by LLMs can come from a few types of sources. The LLM can rely on information embedded in the billions of words of text it has been trained on. The LLM can also be told to do a live search of the internet and base its answer on information it finds. Or, it can base its response on content provided by the user, such as a call transcript or a series of emails.
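The third source above — grounding an answer in content the user supplies — can be sketched in a few lines. This is an illustrative assumption, not any vendor's actual API: the `compose_prompt` helper and the sample transcript are hypothetical, and a real deployment would send the resulting prompt to an LLM service.

```python
# Sketch of grounding an LLM answer in user-provided content (e.g., a call
# transcript). compose_prompt and the transcript are illustrative only.

def compose_prompt(instruction: str, context: str) -> str:
    """Combine the user's instruction with supplied source material, so the
    model is asked to answer from that material rather than from whatever
    is embedded in its training data."""
    return (
        "Use ONLY the source material below to answer.\n\n"
        f"SOURCE MATERIAL:\n{context}\n\n"
        f"INSTRUCTION: {instruction}"
    )

transcript = "Client asked about Q3 performance and fee terms."
prompt = compose_prompt("Summarize the client's questions.", transcript)
print(prompt)
```

In practice this pattern (often called retrieval-augmented generation when the source material is fetched automatically) is how transcript summaries and email digests are typically produced.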
LLMs are part of a larger category of “Generative AI” systems, which generate something (text, images, music, video, etc.) in response to a prompt. The other type of AI I’ve seen for marketing functions is “Predictive AI.” These tools don’t generate new content; they analyze information and predict outcomes based on patterns they find.
AI use cases for client-facing functions at investment managers mirror tasks where you would ask a human to read or listen to something and deliver a written response. Below are several use cases for AI LLMs:
For Predictive AI, use cases include:
The main drawback of AI LLMs is the issue of “hallucinations.” LLMs can seem human in their responses, but they don’t “think” the way you and I do. They are predictive engines that guess the next word based on patterns and relationships embedded in algorithms.
This works surprisingly well, but it's not perfect. When the patterns an LLM is following take a wrong turn, it can produce clearly wrong answers. As humans, we'd immediately recognize the absurdity of the answer. The LLM doesn't, since it's just following patterns.
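The "guess the next word from patterns" idea can be illustrated with a toy model. This is a deliberate oversimplification, an assumption for illustration only: it counts which word followed which in a tiny sample of text and always picks the most common successor. Real LLMs use vastly richer statistics, but the principle of pattern-based guessing — and the fact that a wrong guess is emitted with the same confidence as a right one — is the same.

```python
# Toy "next word" predictor: count bigrams in sample text, then always
# return the most frequent successor. Illustrative only -- real LLMs are
# far more sophisticated, but they are still pattern-based predictors.

from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Count, for each word, which words followed it and how often."""
    words = text.split()
    successors = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        successors[a][b] += 1
    return successors

def predict_next(model: dict, word: str) -> str:
    """Return the most common successor, or a marker for unseen words."""
    if word not in model:
        return "<unknown>"
    return model[word].most_common(1)[0][0]

model = train_bigrams("the fund raised capital and the fund closed early")
print(predict_next(model, "the"))  # "fund" followed "the" twice in the sample
```

Note that when the model is asked about a word it has seen, it answers regardless of whether the prediction makes sense in context — a crude analogue of an LLM answering confidently where a human would hesitate.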
Further, as humans we have a good sense of what we do and don’t know. We decline to provide answers when we lack sufficient knowledge or expertise. LLMs often lack this sense. Instead of responding “I don’t know,” they may give a confident and detailed answer that is completely made up.
For an investment manager, this is a problem. Distributing false information to prospects and investors can create serious legal, financial and reputational risk. Humans can of course check an LLM’s work, but doing this may negate the time saved using the LLM in the first place.
In terms of whether AI will ever overcome hallucination and accuracy issues, there are a few schools of thought. Some say they never will, due to the inherent nature of how LLMs operate. Others predict that improved or fundamentally different models will eventually solve the issue.
Security, intellectual property and quality are other problems often discussed with LLMs.
When you feed an LLM something to review and summarize (such as a transcript, emails or a report), the LLM might “train” itself on that information. This means your proprietary data becomes part of the model’s knowledge base and is available to users outside your firm.
To prevent this, LLMs can be configured so that a customer’s information isn’t used to train the model. Still, guardrails are needed to ensure that sensitive information doesn’t travel within a company.
Another issue is Intellectual Property (IP). If an LLM writes a white paper or researches and summarizes a topic, it may copy language or replicate information from another paper in a way that exposes the manager to claims of plagiarism or copyright infringement.
And finally, LLM-produced content is generally looked down upon as being bland and generic (though it is improving). It may be tempting for a company to inexpensively churn out marketing content with AI, but if the quality is poor, it will fail to have an impact, or worse, may damage the company’s brand.
More recently, leading tech companies are turning their attention to developing and launching “agents.” With a regular LLM, the user enters a prompt, receives a text answer, and does something with that answer.
An agent allows AI to take action directly, removing the human user from the task. The agent may send an email, book a meeting or initiate a process on a firm’s systems. Promoters of agents predict that one day they’ll become electronic versions of employees.
We’re already seeing simple agents in action when, for example, an AI automatically joins a video call, sends everyone a summary and creates a task list on the company’s project management system.
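The core difference between a plain LLM and an agent — text out versus actions taken — can be sketched as a loop. Everything here is a hypothetical stub for illustration: the "model" returns canned decisions, and the action handler appends to a log where a real agent would call an email API or a project-management system.

```python
# Minimal sketch of an agent loop: the model's output is interpreted as an
# action to execute, not just text for a human to read. stub_model and
# execute are illustrative placeholders, not a real agent framework.

def stub_model(history: list) -> dict:
    """Stand-in for an LLM deciding the next action from what's been done."""
    if not any(a["action"] == "send_summary" for a in history):
        return {"action": "send_summary", "to": "attendees"}
    return {"action": "done"}

def execute(action: dict, log: list) -> None:
    """Carry out an action. A real agent would call external systems here."""
    if action["action"] == "send_summary":
        log.append(f"Emailed summary to {action['to']}")

def run_agent() -> list:
    history, log = [], []
    while True:
        action = stub_model(history)
        if action["action"] == "done":
            break
        execute(action, log)
        history.append(action)
    return log

print(run_agent())
```

The loop structure is the important part: the model proposes, the system executes, and the result feeds back into the model's context for the next decision — which is also why agent errors compound if left unchecked.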
For managers seeking to incorporate AI into their client-facing functions, I see three options: existing tech tools, generalist AI, or specialist AI.
Existing Technology: Fund managers are probably seeing AI functionality popping up as a new feature in their existing tech tools. Since AI providers sell their functionality to others, it’s relatively easy for tech platforms to add AI features.
General-Purpose AI: Managers can use AI tools that are industry agnostic. A generalist approach gives the manager flexibility to find best-in-class LLMs for each use case. These general-purpose AI platforms are also usually less expensive, but they require more work to customize for specific needs, train staff on usage and ensure security of manager data.
Special-Purpose AI: Specialist AI tools, like those in the AI list, are adapted for investment manager tasks. As such, they are purpose-built for client-facing functions and should be easier to implement. They also market themselves as addressing issues around ease of use, accuracy and security.
It’s hard to provide a general recommendation on which type of solution to pursue, since it depends on the use case, the manager’s budget, the resources available to customize and roll out a system, and the amount and quality of AI software embedded in tech systems the manager is already using.
In general, however, managers with limited budgets will focus on general-purpose AI platforms and AI offered for free as part of existing tech systems. Managers that want to customize AI (and have the resources to do so) will probably pursue general-purpose AI systems. Special-purpose systems appeal to managers that want to quickly roll out solutions that are already built for their needs.
When looking at the AI landscape and where things may be headed, a few things stand out. One is AI costs. LLMs are very expensive to develop and run, in terms of staff and computing power. These costs are not being fully passed on to users since the industry is prioritizing growth over profitability. Unless the cost of running models significantly declines, we can expect the price of AI to increase as the industry matures.
Currently I see many possible use cases of AI for asset raising and client service, but not much consensus on what it "must" be used for. As time goes on, there should be more clarity on use cases where AI is a competitive necessity, rather than an option or an area for exploration.
Logic says there will be limited adoption of AI for tasks where the cost of being wrong is high and it’s time intensive to confirm the accuracy of AI answers. But when the cost of being wrong is low or it’s easy to check accuracy, AI can deliver significant time savings or improvements in quality. I also see adoption of AI for specialized tasks on more limited or structured data sets, where AI accuracy should be high.
This guide is based on where AI stands at the present time, while recognizing that tech companies are pouring billions into AI to develop new models. The ultimate goal of some AI companies is Artificial General Intelligence (AGI), where AI matches a human's ability to think, learn and perform a wide range of tasks.
Time will tell whether we only see incremental improvements in current models over the coming years, or if tech companies achieve a breakthrough that fundamentally changes how AI works and what it can do.
Information on The Fund Marketer is for general informational purposes only and comes from the relevant organization(s). We don't verify information or provide recommendations. For more information, see About Us and Terms.