Today we welcome Monica Khurana, CTO at Dodge & Cox, to our “CXO Journey to the AI Future” podcast. She joined Dodge & Cox in December 2017. A visionary, she has led the industry in creating leading-edge products and solutions, some of them the first of their kind in their respective spaces. Monica is a seasoned executive with over twenty-one years of experience across management, operations, product, technology, financial planning, digital marketing, and cybersecurity.
This is my seventh year. After a stint in technology, I moved into finance back in early 2000. I had to do a lot of data analytics, both in terms of looking at market data and at client retention and client data. At that time, I built my first performance attribution system to evaluate how investment funds were doing and what was driving their performance. It was a huge success, and I was asked to manage hundreds of billions of dollars.
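For readers unfamiliar with performance attribution, here is a minimal sketch of the classic Brinson-style decomposition such systems are often built around: splitting a fund’s active return versus its benchmark into allocation, selection, and interaction effects. The sectors, weights, and returns below are hypothetical, and this is purely illustrative, not the system Monica describes.

```python
# Minimal Brinson-style attribution sketch: decompose active return into
# allocation, selection, and interaction effects per sector.
# All weights and returns are hypothetical.

sectors = ["Tech", "Financials", "Health Care"]
port_w  = [0.40, 0.35, 0.25]   # portfolio sector weights
bench_w = [0.30, 0.40, 0.30]   # benchmark sector weights
port_r  = [0.08, 0.02, 0.05]   # portfolio sector returns
bench_r = [0.06, 0.03, 0.04]   # benchmark sector returns

for s, wp, wb, rp, rb in zip(sectors, port_w, bench_w, port_r, bench_r):
    allocation  = (wp - wb) * rb          # effect of over/underweighting the sector
    selection   = wb * (rp - rb)          # effect of security picks within the sector
    interaction = (wp - wb) * (rp - rb)   # cross term
    print(f"{s}: allocation={allocation:+.4f}, "
          f"selection={selection:+.4f}, interaction={interaction:+.4f}")

# The three effects sum to the total active return.
active = (sum(w * r for w, r in zip(port_w, port_r))
          - sum(w * r for w, r in zip(bench_w, bench_r)))
print(f"Total active return: {active:+.4f}")
```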
I was hired here to do something similar, which was to bring new capabilities from a technology perspective, including cloud capabilities, generative AI, and many different data and analytics solutions. The goal has always been to continue improving the performance of our funds.
I think we’re looking at an inflection point from a technology perspective. You and I have been around for a long time; we saw the internet, we saw mobile, we saw crypto. I think this is another inflection point.
However, generative AI isn’t new. It’s been around for a while, but ChatGPT made it so accessible that most of us, by experimenting with little projects to see what capabilities we have, saw that the promise of the journey is very real.
You no longer need to be a technical person to do a lot more than you could earlier. So we’ve been spending time considering: What value do we want to add? Where do we want to apply it? And what projects are we working on today that could use it? Do you apply it to investment data? To alpha generation? To benchmark data and other market data providers?
The second bucket would be clients. Are we retaining our clients? Are we supporting them? Are we answering their questions quickly enough? We definitely need tools around that.
The third bucket is internal productivity. This could be pre-market commentary, code testing, or reviewing disclosures. There are many different ways we can get more productive.
Right now we’re mostly just dealing with where to start and how to prioritize all of these different applications.
Prioritizing all this is very complex. How do you invest and where do you deploy your resources? There are a lot of parameters you can look into, but a good one is savings over time. So, how do you quantify those savings?
Another good metric to consider is time. Where will you redeploy time that gets saved? Is it going towards strategy? Towards driving revenue? Towards product development? So it comes down to who’s going to get the value out of it.
The second piece is around the return on investment. What will it take to build a model? What will it take to maintain it, from the computing, research, and storage perspectives? And then, when do you expect a return on investment? One year, two years, three years?
There are certainly some low-hanging fruit like chatbots and the helpdesk. In those use cases, we expect a faster return. Things like product development will take longer.
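As a rough illustration of the payback framing above, here is a back-of-the-envelope calculation for a single use case. All cost and savings figures are hypothetical placeholders, not Dodge & Cox numbers.

```python
# Back-of-the-envelope payback estimate for one AI use case.
# All figures are hypothetical placeholders.

build_cost   = 250_000   # one-time: model selection, integration, testing
annual_run   = 80_000    # recurring: compute, storage, vendor/research fees
annual_saved = 200_000   # e.g., hours saved x loaded hourly cost

net_annual = annual_saved - annual_run
payback_years = build_cost / net_annual if net_annual > 0 else float("inf")

print(f"Net annual benefit: ${net_annual:,}")        # $120,000
print(f"Payback period: {payback_years:.1f} years")  # ~2.1 years
```

Under these assumed numbers, a chatbot-style project pays back in about two years; a product-development effort with a larger build cost and slower savings ramp would naturally land further out.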
The third piece is going to be improving the employee experience. Ideally we want employees to be able to spend less time doing mundane tasks and more time doing creative, innovative, strategic things. The hope would be to see improvement in employee satisfaction and stakeholder satisfaction over time.
So it’s a complex mix, and we have to look at all of these together to figure out how to maximize the value.
I think this is a complex problem. All our vendors and key strategic partners are investing in the space. And we’re trying to see where they’re going and what they’ll be able to offer.
The second component is around data. They’re all building these capabilities on the data they own and the data they have access to. So we’re seeing market data providers and order management systems, all of them building in different capacities.
So our goal here is to see what our vendors are going to provide. And then the question turns to our own data: What do we do with that? Should we provide it to one of our strategic partners and have them build on it? Should we be applying these third-party LLMs and doing some analysis on our own?
Additionally, there’s this whole issue around hallucination: false outputs. We’ll also need to evaluate the output coming from our vendors and strategic partners, not just from our own AI resources. At the end of the day, we’re still accountable to our shareholders and clients.
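One lightweight way to sanity-check such outputs is a grounding test: compare an answer against the source documents it was supposed to be drawn from. The sketch below uses a crude word-overlap heuristic with an assumed threshold; it is an illustration of the idea, not any vendor’s actual method.

```python
# Crude grounding check: flag answers whose words barely overlap the
# source documents they should be based on. The 0.5 threshold and the
# word-overlap heuristic are assumptions for illustration only.

def grounded(answer: str, sources: list[str], min_overlap: float = 0.5) -> bool:
    """Return True if enough of the answer's words appear in the sources."""
    src_vocab = set(" ".join(sources).lower().split())
    ans_words = answer.lower().split()
    if not ans_words:
        return False
    overlap = sum(w in src_vocab for w in ans_words) / len(ans_words)
    return overlap >= min_overlap

docs = ["The fund returned 4.2% in Q3, ahead of its benchmark."]
print(grounded("The fund returned 4.2% in Q3.", docs))            # True
print(grounded("The fund won an industry award in 2019.", docs))  # False
```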
We’re already seeing a renewed focus on data. We’ve had data governance and data quality in the works for a while now. I think the importance of data has become more real on account of generative AI. Understanding your data, and having it structured in the right way so that these new AI tools can leverage it, is important.
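A small example of what “structured in the right way” can mean in practice: breaking documents into chunks with provenance metadata so a retrieval layer can hand an LLM well-labeled, auditable context. The field names and chunking scheme below are illustrative assumptions, not a prescribed design.

```python
# Sketch: split a document into overlapping chunks tagged with metadata,
# so retrieval can feed an LLM governed, auditable context.
# Field names and parameters are illustrative assumptions.

def chunk_document(doc_id: str, text: str, source: str, max_words: int = 120):
    """Split text into overlapping word windows, each tagged with metadata."""
    words = text.split()
    step = max(1, max_words // 2)   # 50% overlap so ideas aren't cut mid-thought
    chunks = []
    for start in range(0, len(words), step):
        chunks.append({
            "doc_id": doc_id,
            "source": source,   # provenance, for governance and audit
            "offset": start,    # position in the original document
            "text": " ".join(words[start:start + max_words]),
        })
        if start + max_words >= len(words):
            break
    return chunks

chunks = chunk_document("q4-commentary", "Equity markets rallied late in the quarter ...", "internal")
print(len(chunks), chunks[0]["source"])
```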
The second issue is around security. Risk of data loss is real. So how do you ensure that your contracts with vendors cover and protect your data?
And then there’s a risk of hallucination as I mentioned earlier. How can you really ensure accuracy around all this? Just like cybersecurity insurance, are we going to see more insurance providers providing protection around hallucinations? There are definitely some tea leaves swirling around that.
You made a good point about the human side of things. We’re going to see more AI operationalization meetings at the management and leadership layers. Everybody needs to understand this. So what are the shifts in skills that are needed, be it in terms of education or in terms of bringing in some of these tools?
Getting started is also a bit tough. You want to start small, see some success, and then expand. So how do you bring that growth mindset into all this? There’s still a lot of figuring out to be done, and the challenges related to that.
And the last piece is regulation: the EU AI Act, the Biden executive order, as well as the India Digital Act and Canada’s proposed AI act. We’re still trying to figure out where all of that is going to land.
I think that we all have to focus on it significantly, both from the perspective of biases and of the data these models are trained on. How much can you rely on the quality of your outputs when trying to apply it? Recruiting is a good example of this: you have to be so careful that no bias is introduced.
The second piece is around IP infringement. The verdict is still a little unclear on where some of this will go. I mean, is training on publicly available data a good thing? Not a good thing? And how do we make sure we’re not negatively impacted?
I think in 2024 we’re going to see a lot more clarity starting to emerge.
Bonus: What do you think about this market opportunity? You said it’s a high priority. But let’s say you’re standing in front of a large audience, your board, your leadership team. How would you tell them to think about the AI market right now?
At the end of the day, we’re at a big inflection point. AI has democratized IT across the workforce, and that is powerful by itself. The question now comes down to how we harness this power.
I think you and I talked in the past about what we would see in the infrastructure layer, the cloud layer, and the data layer. We should expect to see gen AI embedded into every aspect of everything we do, from the consumer side, to the investment side, to the data side. I can’t imagine a facet where we would not expect to see this.
The question is more around timing. How much to expect by 2024? And how much to expect by 2027? Where regulations are heading will be important too. AI is already important today, and I don’t think there’s any going back. It’s more about how far this will take us.
Monica Khurana has diverse work experience spanning several industries and roles. She is the Chief Technology Officer at Dodge & Cox and serves on the board of T200, a non-profit promoting women in technology.
Monica also held leadership positions at Guardian Life Insurance-RS Investments/Victory Capital Management, Cornerstone Research, MUFG, and MNM Partners Inc. She has been in Chief Information Officer and Chief Technology Officer roles since 2007, and has been responsible for various strategic initiatives, such as integrating acquired firms, transforming technology platforms, and aligning technology with business goals. Earlier in her career, she worked at Barclays Global Investors, CareCore/Varian Medical Systems, HP, and the University of Missouri-Columbia Hospital and R&D department, where she led projects in areas such as asset management, healthcare technology, and patient care systems.
Monica holds a master’s degree in computer science from the University of Missouri-Columbia (1996–1999) and a master’s degree in industrial engineering from the same institution (1996–1998). Prior to her postgraduate studies, she completed her Bachelor of Engineering in Industrial Engineering (gold medalist) at the National Institute of Technology (REC Jalandhar) between 1990 and 1994.