[Podcast] Private Market Talks: Generative AI, From Theory to Practice with AIx2’s Mohammad Rasouli

Proskauer - Private Market Talks Podcast

Generative AI is no longer a theoretical technology. It’s here and now, and moving fast. Private funds are moving quickly to figure out how best to use it. During this episode, we explore with Dr. Mohammad Rasouli practical considerations for private funds as they consider how to adopt and implement AI, including issues of risk management, security and confidentiality, and practical implications for human capital.

Dr. Rasouli is an AI researcher at Stanford University and founder/CEO of AIx2, a firm focused on helping funds tackle the transformative nature of AI. During our conversation, we examine how asset managers can responsibly integrate generative AI into their operations and ultimately their investment functions, the challenges and opportunities presented by this new technology, and how increasingly sophisticated AI models will reshape virtually every aspect of how private funds attract, invest and manage capital.

Peter Antoszyk: Hello, and welcome back to another episode of Private Market Talks, where we take a deep dive into the dynamic world of private markets with industry leaders. I’m your host, Peter Antoszyk. Today we’re exploring a field that’s transforming not just technology, but also the fundamental ways in which we invest and manage capital: artificial intelligence.

Joining us is Dr. Mohammad Rasouli, whose insights are shaping the future of AI for asset managers. We’ll explore how AI is integrating into private funds, the challenges and opportunities it presents and what the future might hold as these technologies become increasingly sophisticated.

Dr. Rasouli is founder and CEO of AIx2. He is a Stanford AI researcher, where he also co-taught “Empirics of Marketplaces.” He is a former Microsoft engineer and an ex-McKinsey consultant from the New York office, and in that role he worked with some of the largest PE funds to help with their AI transition. So not only is Dr. Rasouli an active researcher, but he also has the practical experience of working with funds to adopt and implement AI responsibly.

So tune in as we uncover the depths of AI’s role in revolutionizing the landscape of private capital investing. As with all our episodes, you will find a full transcript of this episode, along with other helpful information, at privatemarkettalks.com, and be sure to subscribe. And now, my conversation with Dr. Mohammad Rasouli. Dr. Rasouli, welcome to Private Market Talks.

Mohammad Rasouli: Thanks, Peter. It’s great to be with PMT.

Peter Antoszyk: Before we get into sort of the detail of uses and how to adopt and implement AI, perhaps you could take a minute and level set our listeners as to what we mean by AI and the differences between, you know, predictive and generative AI.

Mohammad Rasouli: That’s a great question, and one I often explain when I talk to fund managers or in my courses for executives. Basically, what we mean by AI is when the machine gains the capability to synthesize data and bring actions to us. In this definition, that’s different from what we knew over the last 10 or 20 years with digital and digital transformation, which was mostly a dashboard of data. Taking us to the next level, with the machine taking over the next step and delivering a complete result, that’s what AI means. Now, there is this distinction of generative versus predictive AI, and especially with the growth of LLMs, large language models, and ChatGPT and others, the generative one has become more commercialized these days.

The main difference is that Generative AI is around the ideas of natural language processing, which is basically generating a sentence, generating something similar to what we had before. Now, LLMs are about generating language, but we have Generative AIs for images, for videos, for other things. The way these generative technologies work is that they take a lot of existing samples, like dictionaries and Wikipedias for language, or many, many photos of cats and dogs, and then they make a new essay, a new piece of writing, or a new image of a cat or a dog that never existed before. Right? In this sense, they are generative. But there are always these other categories of AI which are less commercialized, because right now OpenAI has found a way to commercialize Generative AI, especially for natural language processing. In the future, predictive AI will potentially be commercialized too, and predictive AI is about predicting something in the future. For our audience, predicting the stock market price tomorrow or a week from now is one good example, right? You can also predict the chance of success of investing in an asset. You can predict the chance of an M&A succeeding. You can predict an exit strategy, like what the price is in this exit strategy. Everything that is about the future and that you want to predict, that’s part of the predictive AI algorithms.

Peter Antoszyk: And so how do you distinguish sort of the creative from the predictive?

Mohammad Rasouli: So, when you go to ChatGPT, and I assume most of us have had this experience by now, and you say, “Write an email for me,” the way ChatGPT works, if you follow it, is kind of bursty. It gives you three words, stops, then does three words, then stops. What’s happening in the background is that it’s a creative machine in the sense that it says, “For this question, for this sentence that someone asked, if I was talking to somebody who said, ‘Write an email for me,’ let’s first create the next few words. What is the answer? Then create the next few words, then the next, embed them and keep creating.” Right? In the same way, if someone asks it to “create a new image,” it’s creating something similar to all the cats and dogs the machine had access to, but it’s not any of them. Something with similar features that we would assign the tag of a cat or a dog to, that’s creating.
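The word-by-word loop Dr. Rasouli describes can be illustrated with a deliberately simplified sketch. Real LLMs predict the next token with a neural network trained on huge corpora; here a tiny hand-written lookup table stands in for that model, and all the words in it are made up for illustration:

```python
import random

# Toy "language model": for each word, the possible next words.
# A real LLM learns a probability distribution over tokens; this table is a stand-in.
BIGRAMS = {
    "write": ["an"],
    "an": ["email"],
    "email": ["for"],
    "for": ["me", "you"],
    "me": ["<end>"],
    "you": ["<end>"],
}

def generate(prompt_word, max_words=10, seed=0):
    """Autoregressively extend a sentence one word at a time."""
    rng = random.Random(seed)
    words = [prompt_word]
    for _ in range(max_words):
        candidates = BIGRAMS.get(words[-1])
        if not candidates:
            break
        nxt = rng.choice(candidates)
        if nxt == "<end>":
            break
        words.append(nxt)  # the output so far becomes the context for the next step
    return " ".join(words)

print(generate("write"))
```

The key point the sketch shows is the feedback loop: each generated word is appended to the context and fed back in, which is why the output arrives in bursts rather than all at once.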

Predicting is basically saying, “If this was the trend, like these prices changed in this way in the history, what would the trend be in the next day? How would things change in the future?” A lot of us know about regressions, for example. That’s a simple prediction. Regression is one line through many dots of information, which predicts what the trend is in the future. That’s the prediction kind of use case that we talk about.
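The “one line through many dots” picture can be made concrete with a minimal least-squares fit. The prices below are invented illustrative numbers, and real predictive-AI systems use far richer features than a single trend line:

```python
def fit_line(xs, ys):
    """Ordinary least squares for a single feature: y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Many dots of information": day number vs. observed price (made-up data).
days = [1, 2, 3, 4, 5]
prices = [100.0, 102.0, 101.0, 104.0, 105.0]

slope, intercept = fit_line(days, prices)
# The fitted line extrapolates the trend to the next day (day 6).
print(round(slope * 6 + intercept, 2))  # → 106.0
```

The fitted line is the entire “model” here; prediction is just evaluating it one step past the observed data.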

Peter Antoszyk: I know that there’s been a tremendous pace of change with AI. The development has been going on at an incredible pace. Can you put it in some kind of context for us? Because I think this is going to be relevant to how quickly funds have to adopt AI and how they have to adjust to it. So, give us some sense of how fast this is changing.

Mohammad Rasouli: Amazing question. So, we run this Stanford AIx2 survey with fund managers, for which we interview fund managers, CEOs, CIOs and COOs, and we ask them a bunch of questions. Those who participate get the existing results within a week. In that survey, we have this question: “By which year do you predict that the majority of the funds in your business will have AI solutions in house for their day-to-day job, like writing their due diligence, finding deals, drafting investment memos and so on? What is your prediction?”

It’s interesting how we see these answers evolving over time. I asked this same question when I was giving a keynote in private equity real estate, and there were 300 or 400 fund managers on the floor, and I asked them the exact same question: “What year do you think AI is going to take over this industry?” The answers are shockingly near-term. The average answer for us has moved from the end of 2026 to 2025, and it just keeps getting more short term. The reason behind that is, number one, the community of investors is naturally exposed to AI as an opportunity to invest. So, they hear about it and they learn about it. That’s one way they learn about this technology. The second thing is that the whole community has realized that there are some immediate gains that can happen with AI, especially starting with generative AI, which is more commercialized and accessible for the funds, and that cultural shift, that awareness and education, is basically what is now driving this faster adoption across the funds. There are many funds, small and large, that have already started getting some AI experience. And again, among the questions in the survey we ask fund managers: “What AI solutions have you experimented with?” “What is the adoption rate across your employees?” “What’s the success?” “What’s the shortcoming?” You see more and more success stories these days. More and more awareness. I think we have definitely passed the first phase of the S-curve in terms of adoption. It’s definitely at the point where everyone has learned about AI, they have heard success stories, and they are looking for solutions, like where to start, and a road map to have successful AI. That’s what most of the funds are trying to do these days.

Peter Antoszyk: How many of those questions on that survey did you use AI to generate?

Mohammad Rasouli: To be honest, there were 40 questions, and we did actually use ChatGPT to edit them and to ask for help, but no, we did a lot of manual work. I did these kinds of surveys back in my PhD days. The rest of the team are PhDs.

Peter Antoszyk: Speaking of the use, you touched upon two different things. One is how funds are using it today and how you can envision it in the future. Where is it going? I’d like to start with how is it being used today in funds and if they’re not using it today, how should they be using it today?

Mohammad Rasouli: Yes, that’s a very good question. Let me tell you a little bit about the history of AI, especially in alternative investment. That’s one thing we always put in our education package for fund managers; in the first chapter, we actually cover a lot of the history of AI in this business. So, AI coming to hedge funds has been around for 15 or 20 years in terms of quantitative trading and the like, and the idea there was predictive AI: can you predict the stock market price for certain stocks in the future? A lot of my PhD friends who graduated from electrical and control engineering are now working in hedge funds in New York and making great progress. Then, some five years ago, I would say the idea of AI came to private equities, and the big private equities, the larger ones, started having internal AI teams to develop algorithms for predicting the success of investment in an alternative asset, like your success investing in a company, or in secondaries, or in debt or credit, or any type of strategy they have.

Peter Antoszyk: What you’re saying is they were using the algorithms to, if they bought a company, they would use that algorithm to determine the likelihood of success of that investment in that company they bought.

Mohammad Rasouli: Exactly. Right.

Peter Antoszyk: Got it.

Mohammad Rasouli: So in that sense, the private equities were trying to solve a harder question than what the hedge funds were trying to solve. It was harder because, first of all, it’s a private market investment as opposed to a public investment, and there are more matching features involved: is investing in this asset, this startup, this company going to be successful for me as an investor with the features that I have? Not just predicting whether the stock price is going up, in which case it’s a good investment for everyone. Over these last five years, we have definitely seen some success in this case. I have worked with top funds that have done this. I have had dinner with the CEO of General Partners, William. I have worked with partners at EQT and other places. Basically, all of the big funds have these teams right now, and they’re doubling the size of the teams, so all the names you can imagine. There is some success in doing this. I heard from William over that dinner that he has an investment committee of six people who should vote on every deal, and now he has assigned AI as the seventh seat. So, AI should vote yes or no on this, this and this. It shows a lot of trust in these algorithms and progress. Now, importantly, what to know is that having those algorithms in house is expensive, these data scientists are expensive, and developing software is just not the core business of funds.

We have seen a U-turn kind of movement in the funds in the last 18 months, with the progress of ChatGPT and Generative AI, all the large language model technology. The U-turn we observed is this. If you had asked me two years ago, “Does it make sense to start from natural language processing, from large language models and Generative AI?”, the answer would have been no, because they are complicated algorithms and you want to stay with easier predictive AI. Why would you want to do it that way? Do you have the capability to train on the entire terabytes of Wikipedia in your system? You don’t, right? Only someone like OpenAI can do that. So, the answer was no. But now, these algorithms have been commercialized by OpenAI. So now, everyone has access to them.

Now the answer is quite the opposite. These days they say start from Generative AI, because it is more accessible and cheaper than predictive AI. That’s the place you want to start. This U-turn has even had an impact on the big funds, in the sense that some of them have rethought their strategies, to focus on Generative AI for a period of time and get back to predictive AI later, or to just continue with predictive AI, and they are basically trying to figure out what works for them with the momentum they have with predictive AI. I encourage people to look at the Forbes article on that topic, which basically delves into these different use cases. But now, when a fund talks to me, and a lot of fund managers ask me where to start, I always give them this framework for thinking about the use cases. The framework is that there are two general use cases for AI: operational efficiency and alpha generation. What I mean by this is that operational efficiency is doing the things we already do faster and at higher quality. For example, faster due diligence, faster investment memo preparation, faster LP reporting, faster analysis of the inbound emails we get, CIMs and so on. This is operational efficiency.

Peter Antoszyk: Does it include monitoring and diligence?

Mohammad Rasouli: Exactly. Monitoring and diligence, KPI monitoring of portfolio companies, anything that is done routinely. On the other side, alpha generation, the other camp, is finding ‘X.’ ‘X’ could be a good deal, a good LP, a good GP, even a good individual to hire for your portfolio companies or internally.

With these two categories of use cases, operational efficiency and alpha generation, you can see the difference. I put it this way because the first category has Generative AI as the underlying technology: writing documents, writing reports, finding sources of information. The second category, alpha generation and finding X in the market, is predictive AI. It’s a recommendation system, a recommendation algorithm. With this framework in mind, operational efficiency and alpha generation and how they can work, I encourage every fund to think about what works best for them. The pattern emerging out of the many experiments funds have done is that starting with Generative AI, with those immediate low-hanging fruits, is the more appropriate way for funds to achieve that success with AI and build that culture of AI adoption. It’s important, because from my time working with funds at McKinsey, maybe 30% of an AI transformation is the solution.

70% is making sure people adopt that solution and cross that barrier. That’s why at AIx2, whenever we give funds the software, we also give them education, and we also give them a strategy for adopting that solution. That strategy will even be embedded with their engineering, their data assessment and everything, because that’s 70% of the work you want to do to make sure AI is successful. If you do it correctly, the momentum will keep going and you can embed other use cases in the future. But if you fail in the first step, especially the first step, then the culture is harder to fix and there’s resistance in the organization to adopting AI.

Peter Antoszyk: What are some of the challenges to, you know, adopting the AI? You’ve mentioned one, which is training. What are some of the other challenges that funds might face in adopting AI, even if it’s just for operational uses as opposed to, as you said, predictive alpha generating uses?

Mohammad Rasouli: Yeah. So, the main challenge we hear from fund managers, and again that’s one of the questions in our survey, is exactly what you said: complexity of use, culture and training. That’s number one. The example we can all relate to is the use of technology during COVID. We didn’t use Zoom and online meetings this much before, but the technology was there, and it’s not super hard to use Zoom or online meetings, right?

But there was a cultural barrier and a mindset shift that, because of the pandemic, we had to go through. We all had to learn how to use Zoom, and now we are happily using it more efficiently, having gotten used to that culture at work. Same thing with AI. The technology is there, and it’s a matter of a cultural shift, a mindset shift and a little training to cross this barrier and start using it on a daily basis.

Now, with that said, there are other concerns or barriers that funds face. One of them that I see a lot, and that’s actually why I exited McKinsey, went back to Stanford and then started this company, is the lack of a good solution. When people think about, let’s say, Generative AI, ChatGPT or these large language models, those are the operating system of Generative AI. As an example, consider a dictionary. A dictionary is an operating system for translation. Translating with a dictionary is a time-consuming process, because you have to go back to the dictionary word by word and translate. A product is what uses the dictionary and makes a full translator. That is now a product that is easy to use.

In the same way, Microsoft Windows or macOS, these are the operating systems we have had in history, and there are products built for use on top of these operating systems. ChatGPT and large language models are the OS. Using them directly for these use cases is just hard and time-consuming. So, turning them into products is a shortcoming in the market that AIx2 has tried to address, and we have developed our solution for it.

There are other barriers beyond culture and the solution gap. One of them, for example, is regulatory and compliance issues, which are absolutely important. A lot of times, funds come to us and want to make sure they have something conservative, something that helps them do this in a secure way. So, someone who understands them, works with them and delivers secure results for them is important. The other thing we hear less and less these days, to be honest, through our ongoing survey, which keeps a pulse on the market, is job replacement and concerns around that. We have seen that wind down, because people are realizing that AI is an augmentation. Humans are going to be in the loop, and what it does is just what other technologies did for us historically, what the steam engine did, what electricity did, which is helping us move to things that are more value-added uses of our time, right? Now we can move from repetitively writing documents, working with Excel workbooks and editing Word files to the next step: the machine does that, and we can work on real investment, real thinking, real relationship building and those kinds of real added values. That’s the other thing we have heard across the funds.

Peter Antoszyk: You’ve mentioned a number of barriers, from training to risk management, which is, you know, the confidentiality and the security of it. Are there technological barriers for the funds beyond just adopting the AI? What I’m thinking is, do they just use ChatGPT? Do they develop their own LLMs?

Mohammad Rasouli: So, there are two general approaches to AI adoption, or I guess any technology adoption, which are building in house and buying from other sources. Now, which one is correct?

In the case of AI transformation, it depends on multiple factors that are becoming clearer and clearer in the market. When I was at McKinsey working on AI transformation for funds, a lot of funds were asking exactly this question. Is it worth it to start investing in an internal AI team and keep hiring those high-salary AI scientists for state-of-the-art R&D? We are not in the business of software development, so we don’t know how to manage software development, and it’s very different from investment. Even a simple problem they faced was misalignment of incentives. They hired data scientists who naturally wanted to become investors in those firms, because that was considered the prime seat, and so there was a misalignment of incentives.

So, building a whole successful internal AI team has some challenges. Now, the main things I want to mention are: first, how should you decide between in house and third party, and what are the parameters? And whether you go in house or third party, what should you do? On the first question of in house versus third party, I mentioned the challenges of having software developers in house. The upside is that if you keep up with it, you can have a very tailored result, a very tailored product for your own needs, and hopefully your AI teams are agile enough, if they are good quality and you have started well, to keep making new tailored versions for you. Now, the rule-of-thumb answer for in house versus third party depends on the size of the fund. Investing in AI development is costly because the field is rapidly changing; you need to hire PhDs, multiple AI scientists, the infrastructure and everything. So, I would say if a private equity fund is beyond $50 billion, that’s probably where it’s clear they may want to have AI teams internally. Even then, despite a lot of progress, they sometimes find it hard.

For pension funds and fund of funds, the story is different. We have seen them going with third parties, because they are just less willing to have internal IT teams. So, it’s really a function of the type of investment you have and the size of the total assets under management you have.

But nevertheless, what profile should you look for? The people who provide these solutions should have the capability to do advanced AI research and advanced AI engineering. That is because AI is a rapidly changing field, and working on AI without understanding the future and having a vision is like investing in cryptocurrencies; it’s pure speculation. So, you want a provider with a clear vision of what AI is and where it’s going, because it keeps surprising everyone.

There were a couple of surprises in the last year here in the Bay Area that were unexpected. One of them, for example, was exactly what you said. There were a lot of people who easily invested $50 million to build an LLM in house: we want to have our own large language model that is not garbage in, garbage out, that’s fully ours and built for our business. That has proven to be a waste of investment, because it turned out that ChatGPT and the large language models, with more and more data fed into them and better engineering, were able to cross the barrier of hallucination and low quality. So, that was the wrong vision that wasted $50 million, if not more, at some funds and firms investing in AI.

Now, the idea of training an LLM from scratch in house is over; it’s like an OS that others are providing. Just like for our computers we have Microsoft Windows, macOS and Linux, the game’s over, no more OSes needed, right? But what is now becoming clearer is using those operating systems and turning them into tailored versions for yourself.

Peter Antoszyk: One of the challenges of AI previously was the quality of the information you were getting out of it. You posed a question, and sometimes you didn’t get accurate answers back, or got downright wrong answers. My understanding is that has changed dramatically. Can you give some context to that, and how accurate it is today compared to what it was even six months ago?

Mohammad Rasouli: Yeah. So, that’s exactly the trend.

There are these embedding metrics, engineering ways to see how close the results are to what they should be. The way they work is that you give a standard result, which is the correct answer. Then, you check the answer of the machine and see how similar they are: how similar the embedding vectors of these words are, by turning them into numbers and then comparing the numbers. That’s at the engineering level, and it has seen tremendous improvement. It’s like 90% and above, and it’s getting better and better as the technology progresses.
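The comparison described here is typically done with cosine similarity between embedding vectors. A minimal sketch follows; the four-dimensional vectors are made-up stand-ins, whereas production systems use learned embeddings with hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    """How closely two embedding vectors point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" of a reference answer and two model answers.
reference = [0.9, 0.1, 0.3, 0.0]
close_answer = [0.8, 0.2, 0.3, 0.1]  # paraphrase of the reference
far_answer = [0.0, 0.9, 0.1, 0.8]    # off-topic answer

print(round(cosine_similarity(reference, close_answer), 3))
print(round(cosine_similarity(reference, far_answer), 3))
```

A score near 1.0 means the machine’s answer lands close to the reference in embedding space; the off-topic answer scores much lower, which is how accuracy can be graded automatically.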

There are more technologies, like multi-agent systems, that try to bound the sources of insight within a set of documents and not draw on any information outside those documents. That’s important for controlling the quality of inputs. There have been massive developments here, and we expect more to come. So, I don’t think hallucination or bias is a major factor anymore. That’s why I didn’t mention it when you asked me about the barriers to entry. I would have, say, six months ago. But nevertheless, AI still needs a human in the loop. A human should still check, and the solution should have an easy way to cross-check the sources and references. Back to your earlier question, buy versus build and what the profile is: definitely look for a team that has AI research capability and can stay on top of the field, that has engineering capability, that understands this domain, and that also has the capability to work as a partner. Because whoever comes to your fund to build software with you, software development has its own ways and cycles of interaction with fund managers and others, and that can be a little noisy and distracting, because it’s different from the investment interactions we have in our core business as fund managers. So, get people who know how to work with a partner on development, maybe like those consulting firms or others who have had this practice.

One main thing when it comes to technology, and it’s back to what you said about hallucination and bias, is the use of closed-source versus open-source large language models. What that means is that ChatGPT, Anthropic and these names that are state-of-the-art LLMs are closed source. They don’t let you download the entire model and use it on your own server, laptop or computer. You always have to plug into ChatGPT via APIs and work with them. There are alternatives, which we call open source. They are like Llama, Falcon and others, which you can literally download, put on your computer and run entirely offline, without access to the internet, very controlled and bounded.

Well, when we get to the question of which one to use, open source versus closed source, the deciding factors are quality and security. In terms of quality, it used to be that the closed-source ones like ChatGPT were 10 times better than the open-source ones, of which Llama is the best. That gap is closing in terms of performance, because all of them are now trained on larger and larger amounts of data. So, that’s one change, since you asked about the quality of results: both of them are doing well now. In terms of security, on the other hand, people used to be very worried about using ChatGPT and passing their data to it, in the same way that 10 years ago people were worried about using AWS to store their information on the cloud instead of in their own databases. That concern has also wound down. In the same way that the quality of open-source LLMs like Llama has improved, the security of ChatGPT has also improved. They have tried to address it: they clearly have terms in their enterprise API contracts saying they don’t use your data to train their public model and they don’t keep your data, and so on. In the end, it’s a decision by the fund and the fund managers as to what works best for them.

Peter Antoszyk: So, you’ve talked about two uses broadly speaking of AI, operational and then alpha generation. I’m curious as to, maybe you could expand on the alpha generation avenue of using AI.

Mohammad Rasouli: Definitely. That’s a part I could probably talk about even longer. That was part of my PhD, designing recommendation algorithms. It’s a beautiful idea, finding and matching two sides in a matching market with a recommendation algorithm. It’s an AI algorithm that has seen a lot of development in the last five to seven years.

It basically tries to match. In the consumer context, Amazon has a recommendation system that recommends what to buy when you join. Amazon recommends different consumer goods to every person, depending on your profile, your history, your price points, what you have done so far, and sometimes it recommends things you have never thought about and you feel like, “Wow. That’s exactly what I needed right now. How did Amazon know that?” Or in the context of media, for example, Netflix has a great recommendation system that recommends the next movie you want to watch, and it explains it beautifully: you have to watch this movie because this actor plays in it, or this is the director, or this is the genre, or some reason that feels right, and it’s getting better and better. So, the technology of these recommendation algorithms has gone through massive development, and the question is, “Can these recommendation algorithms recommend an investment to an investor? Can they match an asset? If you are a real estate investor, can they find a good property for you? If you are a VC, can they find a good startup for you? If you are late stage, can they find the right late-stage company for you?” That idea is promising, and there’s a lot of work on it. Just in the last two years, some 200 academic papers, technical papers, have been published on these topics of predicting and matching investments to investors. Now, what we observe is that from some five years ago until now, bigger funds like EQT, KKR, Bain Capital and TPG have all started using these kinds of recommendation systems in a more active way. But these algorithms are still in the hands of those big funds and less adopted by the smaller funds, because they are expensive, and the big funds are not willing to share their knowledge. Why should they?

Peter Antoszyk: Right. Sure.

Mohammad Rasouli: Closing the gap between those 200 academic papers and what funds want is something I have personally been working on since my PhD, and it’s part of our plans at AIx2, part of our solutions and product. It’s a very beautiful question for an AI researcher to answer, a really interesting academic question.

There’s a lot that should come. I’m excited to see what comes next. I believe that a lot of progress can be made in this space.

Peter Antoszyk: In terms of alpha generating, is it that the AI can sift through more data more quickly and find not only more investments, but investments that the fund or the analysts might not otherwise think of as, I don’t know, adjacent to what they might do?

Mohammad Rasouli: The promise of AI in this space is something like this: assume a machine that can monitor the entire investment ecosystem, including all the players, for example fund of funds, LPs, GPs, the assets that exist, even the main individual players, and all the connections. What is connected to what? Who is doing what? Assume such a knowledgeable agent exists that can monitor everything. That agent can also look at a fund and fully understand its investment thesis: what counts as a good investment for them? It can look at their portfolio and say, “Okay, this portfolio, with this diversification, needs this kind of new investment.” The third thing that genie, that knowledgeable agent, can do is match the ecosystem it observes to the preferences of the fund. So, for this fund, given its situation right now, this is a good asset to go after.
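The matching step described here can be illustrated with a toy content-based recommender: represent a fund’s preferences and each candidate asset as feature vectors, then rank assets by cosine similarity. The feature axes, weights and asset names below are invented for illustration and are not from AIx2 or any real system.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical feature axes: (growth, risk, real-estate exposure, tech exposure).
fund_profile = (0.9, 0.3, 0.1, 0.8)  # what this fund's thesis and portfolio imply it wants

assets = {
    "late-stage SaaS co": (0.8, 0.4, 0.0, 0.9),
    "core real estate":   (0.2, 0.1, 0.9, 0.0),
    "growth health-tech": (0.9, 0.5, 0.0, 0.7),
}

def recommend(profile, candidates, top_k=2):
    # Rank candidate assets by similarity to the fund's preference vector.
    ranked = sorted(candidates.items(), key=lambda kv: cosine(profile, kv[1]), reverse=True)
    return [name for name, _ in ranked[:top_k]]

print(recommend(fund_profile, assets))
```

Production systems at large funds would learn these features from deal and portfolio data rather than hand-assigning them, but the matching logic is the same in spirit.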

Now, such a machine, such a genie, is absolutely an interesting thing to build, and a lot of AI researchers are on board with making it possible. I can give you quotes from the directors of AI teams at big funds and from faculty at Stanford and MIT working on this. But the main thing here is the data source. Where is the data coming from? The data is a mix of alternative data and structured data. Structured data is Preqin, Capital IQ, TechCrunch and databases like those. The alternative data is news, the massive amount of data that exists in an unstructured way in social media, in Google reviews and elsewhere. Beautifully, if you had asked a year ago, “Can we use the alternative data?”, the answer would have been that it’s absolutely hard. How do you organize this unstructured, massive data online about news and everything else? Because of the progress of ChatGPT, there is now a way.

One beautiful thing about ChatGPT, which people often don’t see, is its capability to take unstructured data and turn it into a spreadsheet, into structured data, and that is massive progress for this problem. For these recommendation algorithms to get full visibility across the industry, you want to pass all of that information to them, and they should be able to pull the needle out of the haystack and structure it. That’s where ChatGPT can now help. After that, the capability of the machine is basically to sweep the whole space. No human can track all of that, right? And even if they could, they wouldn’t have complete visibility into everything. Even if they focus on a single asset, they cannot read all the information about it online. The main gains are more deals processed, faster, more complete processing of each deal, and being able to reason about it. That’s the part that’s often missed: how do you reason about a deal? If someone gives me a deal, that’s not enough for me. I want to hear why you chose it. What is the upside and what is the risk? What is the reason you are putting this in front of me? Explain it; talk about it. I want to be able to have a conversation with that person: why did you choose this? That, again, is something that with ChatGPT is becoming more and more available. Now we can have recommendation systems we can talk to: “Hey, tell me more. Why did you choose this? Okay, now this part, tell me more.” That is the state-of-the-art technology, and that’s what we are building at AIx2.
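The unstructured-to-structured step can be sketched in a few lines. The snippet below stands in for an LLM extraction call with a simple regex, just to show the shape of the pipeline: free-form news text in, tabular deal rows out. The field names, pattern and example snippets are invented for illustration; a real pipeline would prompt a model to emit this structure instead.

```python
import re

# Toy stand-in for an LLM extraction step: pull (company, round, amount) rows
# out of free-form news snippets. The regex only illustrates the idea; a real
# system would ask a model like GPT to return this structured record.
PATTERN = re.compile(
    r"(?P<company>[A-Z]\w+) raised \$(?P<amount>[\d.]+)(?P<unit>[MB]) "
    r"in a (?P<round>Series [A-Z]) round"
)

def structure(snippets):
    rows = []
    for text in snippets:
        m = PATTERN.search(text)
        if m:  # keep only snippets the extractor can parse into a row
            rows.append({
                "company": m.group("company"),
                "round": m.group("round"),
                "amount_usd_m": float(m.group("amount")) * (1000 if m.group("unit") == "B" else 1),
            })
    return rows

news = [
    "Acme raised $12M in a Series A round led by an unnamed investor.",
    "Unrelated market commentary with no deal terms.",
]
print(structure(news))
```

The point is the interface, not the parser: once unstructured text is reduced to rows like these, it can feed the same recommendation algorithms that already consume structured databases.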

Peter Antoszyk: So, what impact is this going to have on the human capital side of these funds?

Mohammad Rasouli: The general answer is that AI, at this stage of the technology, is going to be augmentation, which means humans stay in the loop. Like the example I gave you about the general partner: they have six people and the machine is number seven, right? The machine is added to the team; it is not replacing the team.

But the machine is basically one more source of insight, making sure nothing goes wrong: everything is checked, no deal is missed, everything is processed on time. And on the question of whether it replaces part of our jobs, there are generally two camps in academia. One says machines are going to take over and we are all going to be without jobs. The second camp says machines are going to provide more efficiency; that’s the augmentation idea. Just like the previous technologies in the examples I gave, they will free humans from some of the routine work and help them do more creative work. That’s what we’re already observing in some of the funds that adopt this. So, there is some reskilling required, like learning how to use AI, and some job descriptions need updating, but I think there’s enough to do for humans in the new space to keep all of us busy.

Peter Antoszyk: Have you seen or been able to actually measure the impact of AI adoption on funds in terms of either alpha or whatever measure you want to use?

Mohammad Rasouli: That’s a very good question. One question we asked in our Stanford AIx2 survey is, “What KPI do you use for measuring the impact of AI?” We hear a lot of different answers, and we take the time to turn them into hard KPIs that can really be measured with a ruler.

Definitely time saving is something almost everyone mentions: time savings of up to 80% on due diligence and other tasks, time saved writing documents, time saved finding sources of information; the time savings are amazing. Once you really get into it, once you pass that one- or two-hour training where you learn how to use the software, it’s beautiful how much time you save, and you don’t want to go back to the old life where you did everything manually. I make this joke: once high schoolers learn that their homework can be done with ChatGPT, they don’t want to go back and do it themselves anymore, because they can always go play PlayStation or whatever. That’s number one, time saving. We do also hear about long-term return on investment. The challenge with that KPI is that for many funds it takes multiple years to get there. Nevertheless, some mega funds who adopted five years ago have already claimed to have captured exactly this KPI. I believe EQT has said that they have ten deals that were originated with the machine and exited successfully. That’s an amazing KPI to hit, right? So, that’s a longer-term KPI to capture.

But there are some more immediate ones beyond time saving and cost saving, which covers the cost of the hours people spend and other expenses. The work is also more complete: fewer errors in the writing of emails, drafts, memos and so on. They say, “My analysts used to give me a version of these memos and there were always around 20 errors I had to go and tell them to fix. Now it’s only a couple of errors, or none.” So, that’s another KPI they use to measure the impact of AI.

Peter Antoszyk: So, you have founded and are CEO of AIx2. I’m curious: what is it, and what do you do there?

Mohammad Rasouli: So, AIx2 comes from the idea of AI for AI, meaning AI for alternative investments; basically, we are AI squared, AI times AI. So, we are AIx2. It came out of my time at McKinsey, where I was serving as a manager of AI transformation for funds. We used to give funds all this advice on what they should do with AI internally, or even with portfolio companies. They would ask us how, and naturally McKinsey, as a firm, was not in the business of building a software or engineering solution.

That was the gap in the market, and I decided with the team and my friends at Stanford to go ahead and build that solution for this business in the right way. That’s the story, and it’s based on the PhD research I have been doing on AI for marketplaces; the same goes for the other members of the team.

So far, we have tried to close this gap between academia, the state of the art, and what fund managers need to know, by educating people; that has shown itself through our conference speakerships, keynotes and publications in CIO, Forbes and Middle Market Growth, and podcasts like the one I’m doing with you right now. We have definitely been thought leaders in this industry, but our core success is building solutions, building the software that lets funds use AI internally on a platform basis. That’s the thesis of our white paper: AI adoption starts from immediate use cases, and winning those opens up further use cases. At AIx2, we started with generative AI use cases, for the reasons I mentioned earlier from our survey of the market. The product is complete and being used by customers, and we have gotten a lot of good traction and feedback around it. We have been working on our second product, predictive AI, built around the recommendation algorithms I mentioned a bit of here. That product is coming to market soon as well. So, it’s a journey we are taking.

We strongly believe that AI needs to be done with state-of-the-art research. That’s why we are very close to academia; I myself am a researcher at the Stanford Graduate School of Business, as are some of the team members. So we try to stay literally at the state of the art in AI, close to the core AI teams in the Bay Area and Silicon Valley, and close this gap by also staying close to the industry and to fund managers. So far, we have the generative AI solutions, the document analysis, which help funds use these functions of insight extraction, finding sources of information and writing reports for faster due diligence and faster investment memos, bringing that alpha to the market.

Peter Antoszyk: Got it. This has been fascinating. This is obviously a very hot topic among fund managers and capital allocators generally. One final question for you, and I appreciate you taking the time to speak with us. This is going to have an impact for years to come, and I’m thinking of the college students and young professionals entering the workforce and looking for a career in private markets. What’s your best advice for young professionals entering the asset management world in the face of this AI revolution?

Mohammad Rasouli: Let me give you a broader answer first, for the next generation in general, and then focus on alternative investments. The main skill the next generation should have is knowing how to learn new skills: learning new ways of doing things and staying current with new technologies.

Peter Antoszyk: But that was always something they should know, by the way.

Mohammad Rasouli: Yeah, exactly. Like the era of having one job for 30 years is over.

Peter Antoszyk: Yeah.

Mohammad Rasouli: So, we should just realize that and embrace it. The average tenure at big corporations like Google, Microsoft and others in the area is now 18 months or even less. People switch jobs because of the reality of changing demands in the market. That’s number one. Then, when it comes to alternative investments, I would say that for juniors entering this field, building intuition about the market, that high-level human reasoning, complex reasoning, is absolutely important. That’s something that is going to stay. Then understand the processes, understand how things are done at the core, and try doing them a couple of times, but realize that these are things that are soon going to be done by the machine; you won’t need to do them yourself anymore. Realize that, and start building some technology awareness. Understand AI; read about it. I don’t expect everyone to be an AI scientist, but understanding AI at the core, the terminology, what deep learning is, what a neural network is, what the vision of the technology is and where it is taking us, helps you maneuver better through your career and job cycles.

Peter Antoszyk: Thank you, Doctor, for joining us on Private Market Talks. This has been a fascinating conversation.

Mohammad Rasouli: Amazing. Thanks for having me here, Peter.

Peter Antoszyk: And thank you everyone for listening. If you enjoyed this episode, be sure to subscribe so you don’t miss any episodes of Private Market Talks.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Proskauer - Private Market Talks Podcast
