This week’s podcast features edited highlights of Matthew chatting on stage with leading figures from AIG, Cytora, WhenFresh and Polar Capital at our live event in The Steelyard.
Everyone’s talking about AI, but what’s really going on? Questions we asked included: how far has AI really got in insurance? What are the regulatory issues? How do you convince an insurer that the results are validated? How do you judge the quality of a company’s AI credentials? And where do insurers stand relative to the rest of the world? Our guests talk about the ways they are using AI and algorithms in their own businesses and how they distinguish reality from hype, share tips for assessing the quality of a company that claims to use AI, and offer thoughts on how to encourage more data scientists to join the industry. What is the Moneyball effect, and is your algorithm smart enough and humble enough to admit when it doesn't know?
The usual buzzing crowd and experts on stage made for another great evening.
Reza Khorshidi, Chief Scientist, Global AIG (1:20)
Hamzah Chaudhary, Director of Deployment, Cytora (10:45)
Mark Cunningham, CEO and Founder, WhenFresh (21:10)
Nick Martin, Fund Manager, Polar Capital (29:00)
Both Reza and Nick recommend the book: Prediction Machines: The Simple Economics of Artificial Intelligence by Ajay Agrawal, Joshua Gans and Avi Goldfarb
Transcript for this podcast
00:00 Reza Khorshidi: The problem starts when many insurers associate themselves with the kind of AI that is grabbing headlines today. I think that's where there is a lot of hype and noise.
00:12 Matthew Grant: Hello and welcome to the InsTech London Podcast, episode number 31. This is Matthew Grant, one of the partners at InsTech London, and what follows are the edited highlights from the first half of our recent event on artificial intelligence and algorithms. In this episode, I'm talking to four of the leading individuals and companies that are either building products that successfully use AI, or investing in these kinds of companies. We cover topics such as whether an algorithm is humble enough to admit when it's wrong, what's happening in AI outside of underwriting and claims, how companies should attract more data scientists into insurance and, finally, what the "Moneyball" effect is. On the stage with me are Reza Khorshidi, Chief Scientist at AIG; Hamzah Chaudhary, Director of Deployment at Cytora; Mark Cunningham, co-founder and CEO of WhenFresh; and finally Nick Martin, fund manager at Polar Capital.
01:18 MG: Reza you're one of the early InsTech London attendees. I think there was probably about a handful of people in the room the last time you came here, it has grown a bit since then.
01:27 RK: Yes, it's a wonderful event. In the early days it was really good to have you guys starting this community, and since then it has continued to bring a lot of the practitioners in the field together. I think it was definitely one of the contributing factors to London's success in building this community early, for sure.
01:46 MG: It almost seems that every company you come across in this space today has got AI in there somewhere. You've got a background in data science, and you're the Chief Scientist for AIG. In your mind, has the market got a bit ahead of itself in terms of companies using AI? Is it real, or is it hype without much behind it? Or are you seeing some real examples of effective use of AI and algorithms?
02:11 RK: I think in general the amount of noise is certainly there. There was a recent article in the Financial Times that looked at the thousands of AI startups across Europe, and almost half of them were using terms such as AI but were not actually doing anything related to AI. So it's a term that gets mentioned a lot in executive calls, it's a term that gets mentioned on a lot of startups' websites, and it probably gets them bigger rounds of funding. But the reality is that not everybody is actually doing it. Warren Buffett has a famous quote, that only when the tide goes out do you discover who has been swimming naked. I think with AI, you don't need to wait until the tide goes out.
03:03 RK: AI as a scientific discipline has got a lot of communities, and if you really want to see whether a company is doing AI, you can look at the number of publications they have and the number of real scientists they have. It doesn't happen magically through somebody dreaming about AI and waking up tomorrow morning to code in Python and write NLP algorithms; it's not like that. So one way to separate hype from real signal could be to look at the community. Of course, that's not the only thing; there could be a lot of others. In general, I think insurance, in terms of its potential for using AI, has definitely got one of the biggest potentials as an industry.
03:39 RK: And not only can AI help insurance as we define it today to improve and become better at servicing its clients, it also has the potential to broaden insurance companies' mandates and extend the 'social good' and services they can provide. So generally speaking, AI is already happening in the industry and has got high potential, but it still has a long way to go, for sure.
04:01 MG: Yes, and I just want to make sure we pick up on that point, which is that it's not just about the marketing material. If anybody wants to understand a company, whether an insurance company or a technology company, look at whether they've got real data scientists with genuine backgrounds, look at the papers they've produced and the citations they've got, and that will tell you if they're real or if they're just putting the label on it. The other thing is, I think a lot of us think about AI primarily in the areas of underwriting and claims, but you're seeing other areas where AI is also starting to be used in insurance companies, beyond perhaps the more common understanding.
04:38 RK: I think there's a little bit of an issue with the term AI. It's become almost a catch-all phrase that everybody uses for everything. If you really break it down, when you turn on the TV or read the newspaper or go online, you usually hear the term AI when people are talking about cutting-edge machine learning, in applications such as natural language processing, computer vision, or strategy learning and reinforcement learning. There has been a huge amount of development in the world of machine learning over less than a decade, due to the explosion in data and cheap computing power, and that's the kind of stuff that appears in the headlines today.
05:17 RK: So if by AI we mean that, then of course there is a huge amount of hype, and scientific papers could be a good metric. But when we talk about a simpler, more traditional way of doing machine learning, more like statistical models, or what you could call the early days of statistical machine learning, insurance companies are pioneers of that domain. Most insurance companies have done a lot of the foundational work in statistics: life tables, mortality models, actuarial models and many other things. But I think the problem starts when many insurers associate themselves with the kind of AI that is grabbing headlines today. That's where there's a lot of hype and noise.
06:02 RK: Still, as an industry, I think we can be proud of being quantitative. We have done a lot of great work, but we should not think we've done everything there is to be done. Now, back to your second question: is it all about underwriting? Of course, underwriting is one of the most important aspects of the insurance value chain. But if you really look at it and ask, "An insurance company made $1 of premium; how does that dollar work through the value chain?", there was actually a recent report by Swiss Re with a nice way of summarising that across a range of geographies and products. In commercial insurance, for example, typically 60-70 cents of that dollar goes to the loss ratio, 20-30 cents goes to the acquisition ratio, 10-15 cents or so goes to internal operational expense, and whatever is left, if anything, is your underwriting profit.
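[Editor's note: the split Reza describes can be checked with a few lines of arithmetic. The figures below are the midpoints of the illustrative ranges he quotes, not actual Swiss Re numbers:]

```python
# Illustrative split of $1 of commercial insurance premium, using
# midpoints of the ranges quoted in the conversation above.
premium = 1.00
loss_ratio = 0.65         # 60-70 cents of claims per premium dollar
acquisition_ratio = 0.25  # 20-30 cents of distribution/commission cost
expense_ratio = 0.125     # 10-15 cents of internal operating expense

combined_ratio = loss_ratio + acquisition_ratio + expense_ratio
underwriting_profit = premium - combined_ratio * premium

print(f"Combined ratio: {combined_ratio:.1%}")  # 102.5% at these midpoints
print(f"Underwriting profit per $1 of premium: ${underwriting_profit:+.3f}")
```

At these midpoints the combined ratio exceeds 100%, i.e. a small underwriting loss, which is exactly the "if it's still anything" caveat: insurers often rely on investment income (discussed next) to make up the difference.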
07:01 RK: The other aspect of income for insurance companies is investments: they invest the money, and that returns additional income. If you look at a big insurance company, for example one of the top five or ten brands we know in the industry, they end up managing hundreds of billions of dollars of assets. So if you think about how much impact they can gain by being superior in investment strategies, there's huge room for them to stay competitive and compensate for other aspects of the value chain.
07:34 RK: The second thing is the biggest part of that $1, the 60-70 cents that goes towards the loss ratio: how do you take care of that? A lot of the time insurers think about underwriting and pricing as the only lever they have, but actually there's a huge amount of, for example, risk mitigation that could drive that part of the cost down. For instance, we recently partnered with a company called Darktrace in the cyber space. Not only do we write insurance in the cyber world, but we also try to use machine learning to understand the patterns we see in the network and associate them with risky situations or not, hence hoping for a lower prevalence of claims through that risk mitigation strategy.
08:21 RK: And the story could go even further in health, life and other lines of business where customers sometimes stay with you for 10-plus years. They give you so much data on a daily basis: a watch alone can generate up to gigabytes of data; on top of that, you've got hundreds of thousands of genetic markers from a person; on top of that, you can have many social and lifestyle activities recorded. In that sort of world, I think AI could definitely have an impact, even in underwriting. Risk mitigation is definitely one of the other areas. Of course the usual suspects, underwriting and claims, stay relevant, but that should not come at the cost of forgetting other parts of the value chain, which might actually be more ready, and more relevant to the advanced topics of AI.
09:05 MG: Reza, you're also doing some work to encourage people with the right backgrounds to come into the insurance industry. Can you just say a few words about that, and how people can find out more?
09:16 RK: We can phrase the story in many different ways, but if you really think about what the planet and the people on it care about, insurance as an industry touches that more than almost any other. And yet, when you look at where the talent goes, whether it's scientists, engineers or designers, insurance is definitely not their number one choice, partly because they haven't heard about its many different aspects. Most of them think of insurance as being purely about their home, car or travel insurance, which a lot of the time they don't think about unless they make a claim, and often that might not be a pleasant experience.
10:04 RK: So in a way, we're being unfair to ourselves as an industry by not making the effort to portray the real impact on society that we are truly having. I think it's about time for us to go out there and tell different, better messages to inspire young talent in AI and various other tech disciplines to consider insurance as a top destination. Of course, I'm more than happy to talk more with anybody who's interested in this.
10:35 MG: Right. Well, I hope you come back, Reza, and talk to us a bit more about that, and we'd certainly be very happy to support you. Well, thank you very much.
10:41 RK: Sure. Thank you.
10:49 MG: Okay. Next up we have Hamzah Chaudhary from Cytora. Most people, I believe, will know about Cytora, but perhaps you could give us, in a nutshell, what the company does, a little about AI and what you do at Cytora, and also what you do yourself in your role there.
11:06 Hamzah Chaudhary: Yes, absolutely. Thank you all for having me here tonight. For those of you who don't know about Cytora, we are focused on the commercial underwriting space. And in a nutshell, what we try and do is help insurers or help to enable them to automate parts of the underwriting workflow. And I use those words very carefully, "help to enable". We don't do the automation of the workflow, and we can get into that in a little bit more detail. All the way through from submission to bind, we're really trying to make underwriters' lives easier by taking away the low-level tasks that are extremely manual, that are good machine learning artificial intelligence problems. Me specifically, I head up our deployment team, which essentially focuses on product delivery. What that means is I work with our customers to try and figure out what is the killer use case for our product stack that can help them derive some value in the business. Ultimately, if it's not deriving value, it's not a real use case, so that's a big part for us.
12:02 MG: So yes, your role is that critical point between the customer and the people that are building the technology, is that right?
12:09 HC: Yes, absolutely. My background is in software engineering, and before that I was a management consultant, so it's a little bit between both.
12:15 MG: And what is it about Cytora that makes you different from other people that presumably have also got access to the same kind of data if it's there on the internet?
12:24 HC: Yes, it's a really good question. We've been in this game of data collection and building predictive models since before we were in insurance, essentially using machine learning algorithms to try to predict all kinds of events and all kinds of different things that happen. We've been doing this for a long time, but what really makes us different is not necessarily the artificial intelligence. Building on what Reza mentioned in the previous chat, the actual algorithms being used have been around for 20 or 30 years; nothing is new about that. What's really changed is access to data, and the price of computing, which has come down massively.
13:00 HC: But what we add on top of that is that we build these things into actual programmatic products that can be used by humans. An artificial intelligence model by itself is not particularly useful; it doesn't really do anything. It's when you build it into a wider product that can actually be ingested by a company and used to create what we call a positive outcome. Does this product actually let you make the next decision, whether that's a decision about what you want to do with that risk or with the rest of your workflow?
13:26 MG: So I guess the key part there, which for me is certainly a way of distinguishing the noise from the reality, is that you're providing information to underwriters that they can take an active decision on. So what happens when the tools you've built, or the algorithm, don't know the answer? Is it smart enough and humble enough to say, "I don't know," or does it just give a best guess and hope that's good enough?
13:48 HC: Yes, it's a really good question. There's a lot of study about how machine learning algorithms should act when there isn't enough information. What this really boils down to is: have you got a good prediction problem? For those of you who haven't read Prediction Machines by Ajay Agrawal, I highly recommend it. Most really good machine learning applications are meant to be used with a human there as well. So the way we work with those kinds of problems is to say, "If there isn't enough information, don't guess." You have to remember that, at its core, machine learning is about making predictions; it's never about full certainty in the ground truth. We have confidence thresholds that we want to meet, and if we can't meet them then, as you put it, the algorithm is humble enough to say, "This one is not for me; you need to apply your best human judgment." But even then, you have to enable the underwriter to make those decisions by giving them more information. Somebody previously called it a bionic underwriter, and that's a pretty good way of putting it.
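[Editor's note: the confidence-threshold idea Hamzah describes is often implemented as an "abstaining" classifier. The sketch below is purely illustrative; the `triage` function, labels and 0.8 threshold are invented for this example and are not Cytora's actual logic:]

```python
# Minimal sketch of an abstaining classifier: when the model's top-class
# probability falls below a confidence threshold, it declines to answer
# and refers the case to a human underwriter.

def triage(class_probs, threshold=0.8):
    """Return (decision, confidence); decision is None when the model abstains."""
    label, confidence = max(class_probs.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        # "This one is not for me" -> route to human judgment
        return None, confidence
    return label, confidence

# Confident case: the model answers.
decision, conf = triage({"accept": 0.95, "refer": 0.03, "decline": 0.02})

# Uncertain case: 0.55 < 0.8, so the model abstains (decision is None)
# and the submission goes to an underwriter with the context attached.
decision, conf = triage({"accept": 0.55, "refer": 0.30, "decline": 0.15})
```

The design choice worth noting is that abstention is an explicit output, not an error: the workflow downstream can route `None` decisions to a human queue together with the supporting context, which is the "bionic underwriter" pattern described above.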
14:47 MG: Yes, I notice the way you've positioned the offering has shifted a little from "it replaces the underwriter"; the way Richard Hartley now describes it is that "it enables the underwriter to do more in their day". So again, it links back to the point that you still need human intervention, and I guess it's difficult to sell a product to an underwriter if part of the pitch is that they're not going to have a job anymore. The other part, of course, is that people are rightly sceptical, or at least want to understand how you can prove the data is right. You've been going for a number of years now and can start to benefit from loss data. How do you see your ability to demonstrate to companies that what you're designing with the algorithms, however humble they are, is actually validated by real life, or that you can provide really compelling evidence that this is the right information to trigger a decision?
15:40 HC: Yes, we've been live with a few customers for a little while now. Prior to going live, we run all kinds of validation tests, but those are essentially a best guess; the proof is really in the pudding once you go live. We have a number of customers that are starting to see those results come through, and depending on the application there are really two things you want to look at. The first is the expense ratio: has what you've implemented actually made any difference to it? We have had customers who focused more on the expense ratio side. I'm thinking of one customer who recently posted the first results from their implementation with us: in one particular sector, their average time to underwrite a risk went down from six hours to about 40 minutes.
16:25 HC: And that was really through what we were doing: going out and collecting all the data they required for their underwriting, connecting it together, extracting what was relevant for them and putting it in front of the underwriter. It's really about helping people do more with their time, taking away the boring work so underwriters can focus on what's actually exciting: underwriting the risk and maintaining good relationships with distribution channels. So yes, the proof is in the pudding, I think. We have a white paper coming out in the next couple of weeks with a few case studies that will outline some of the results as well.
16:56 MG: Great. Well, it certainly sounds like a bit of a marketing pitch: to make underwriting exciting, we're going to do away with the underwriter! And what about the tough challenges? What are you finding really difficult to design an algorithm for, or to use AI for, when it comes to making underwriting decisions?
17:12 HC: One of the biggest challenges we've had is working with underwriting teams to create something that is really explainable. What I mean by this is that when people hear "artificial intelligence" or "machine learning", they think of scary computer robots, automation and, quite often, things that are nonsensical. The reality is that's a flaw in the way you design your algorithms. You have to design explainability into your applications and programs from day one; it can't be an afterthought. We're building tools for humans, so they have to be explainable. Some of the ways we've learned to do that, and it's definitely been an iteration cycle over the last couple of years, are, number one, working with the underwriters from day one to understand what information they already use today, and number two, providing things such as key drivers. If we make a prediction or give a score for a particular risk, we always make sure we give enough information and context to back it up. You can't expect someone to make a decision without context. So it's not purely science, there's a little bit of art in there as well, but for us that's definitely been a big challenge.
18:16 MG: Fantastic. Okay, so I just want to leave these couple of minutes for questions. So do we have any questions for Hamzah about what he's up to or what Cytora is up to?
18:26 Audience question: If the modelling you're doing is based on data, what is the effect of new regulation coming in that lets people obfuscate their data or remove it from your systems? That diminishes the gene pool, as it were, from which you can make your assumptions.
18:47 HC: First off, I'll say that we are GDPR compliant as a company, so please don't report us to anyone. Data privacy has basically been an issue in artificial intelligence since machine learning algorithms began. Our focus on commercial lines doesn't totally exempt us from that, but a lot of the information we use is publicly accessible, so it's quite rare that we will actually try to get people's personal information to use in the models. When you start getting into those levels, you enter a whole new phase of ethical modelling: what information is okay to use when you're making a decision about somebody? That almost takes you a step back to insurance more generally, which is there to protect people when things go wrong. So at what level is it not okay to start adversely selecting against people because of information you may or may not know about them?
19:39 Audience question: I think I heard you say that your software brings data together in a nice usable format and helps people see how important the different data points they wanted are. Are there any second-order insights that underwriters are asking about, now that they have seen this?
20:00 HC: Most of our products, and the new products that are coming out, have come entirely from our customers' demands; our roadmap is totally influenced by what they want. For example, one of our first products provided risk scores to underwriters at the point of underwriting. Very quickly after that, customers started asking us whether we could provide that information earlier in the value chain, or earlier in the workflow: "Hey, when the submission comes in, can you tell me something about the risk even before I've had to spend 10 minutes looking at it?"
20:29 HC: So then we started focusing on that part of the workflow, saying, "Okay, we can extract information from the submission, do some sort of information parsing, maybe even attach some useful info to it depending on what that business's goals were, and provide that back to the underwriter at that stage." There are many, many second-order benefits. Similar to what Reza mentioned previously, I'd say we're still very much at the infancy stage of product development within this underwriting workflow and what it's going to look like. But yes, our roadmap is 100% influenced by what our customers want to do next. We definitely see them building things on top of our platform, and we love it when they do that.
21:04 MG: Hamzah, thank you very much for taking time off from what's obviously a very busy job to join us here.
21:12 MG: Okay, next up we have Mark Cunningham, one of the co-founders of WhenFresh. WhenFresh is an example of a type of company we are seeing more and more of coming into insurance: one that has built a strong track record outside of insurance and is now discovering the opportunity to bring what it has already built, and its relationships, to insurance companies. So perhaps, in a few words, tell us what WhenFresh does and what you're looking to do in the insurance space.
21:40 Mark Cunningham: Sure. WhenFresh is a data supermarket. We line up samples of data side by side so the insurer can decide which piece of information is useful for them. It's multiple sets of the same thing from different sources, where we define the provenance and tell you when it was last checked, and then you can access it and play with it to your heart's content. Everybody here this evening, if you want to, it's my.api.whenfresh.com, and you all have 50 lookups, so you can go and play on the API yourselves and see whether the data looks useful to you.
22:16 MG: So you're sourcing data. I'd like to come back in a minute to talk about some of the clever ways you source data. Can you give an example of how your insurance clients are using that data in their own business?
22:27 MC: Yes, I'll start with our first and biggest customer, the Bank of England. They consume a shedload of data from us every day. A shedload is a technical term; it means lots and lots and lots of data. What they're really looking for is what has happened since yesterday in terms of price shifts on properties and property valuations, what's been built since yesterday, what's changed in the housing stock, what's mortgaged and what's not. They're essentially taking a currency risk using the data, but the key is: can we access everything now, so you're not hostage to delay? You need to be able to get online and get all of the data you need, and you need to have it available to your underwriters, your algorithmic calculations or your data scientists, but it has to be now, not later. That's kind of what we do. For the insurance companies, in a really, really simple format, what you're asking is, "What do you need to know in order to give this person a price without actually having to ask them?" You avoid the moral hazard of them being incorrect, or the likelihood that they don't know the answer. So what we do is take that uncertainty away.
23:35 MG: You're basically providing pre-filled data for insurers...
23:38 MC: In a really basic format, yes, but the reality of it is… we were invested in by an insurer. Sorry, maybe you don't know this, but the history of the company is that we were data processors for Zoopla Property Group, and the business just grew and grew. Then we built a PAF file for Royal Mail, and we do all that kind of good stuff around addressing. Then an insurance underwriter came to us and said, "We really like your software. How much for the company?" We said, "We're not ready to sell yet, but we'll sell you some of it." They do a lot of reinsurance broking, and what they wanted to know was, "If I'm looking at a risk book, can you tell me instantaneously how risky this thing is at an individual address level, so that I can value the portfolio and take an arbitrage position?" So that's where it came from. As I said, it's data now, to figure out risk now.
24:32 MG: You've been quite clever, and I don't know how much of this you're going to give away to this audience, but they're all very discreet. What has impressed me is the way you acquire data, because one of the challenges insurance companies have is that you can go out and get data for free on the web, but there are all sorts of challenges in that in terms of validation and processing. You, very cleverly, became part of the companies you work with; you mentioned Zoopla, and I assume you feel the business is going to be a success. Your cost of acquiring the data is less than what you can sell it for, so you can make some money on it. Can you talk a little about how you do that?
25:04 MC: Yes, so how do we acquire the data? We approach a company that might have interesting data and ask, "Can we process it in such a way as to make it consumable by a particular industry?" For example, the big banks gave us the results of their surveys; they sent people in to survey houses before lending on them, and we got copies of those surveys. As long as you anonymise who did the survey and for whom it was done, you can say, "For that address, this was the outcome." Then you look at billing data and do the same with that; you look at construction records and do the same with those. We went around partner by partner saying, "Put your data in here." We did two things for them. First, we made it profitable for them to do this out of their waste product, their sort of digital exhaust. But we also made it faster for them to get their own data back from us than they could from their own systems, and that was a massive win. In fact, some insurers are now contributing data because it's quicker for them to get the data from us than to go to their own IT departments. So we built the API; go and have a look at it and you'll see how fast you can get hold of your own data.
26:06 MG: That's why I love sitting up here, because someone like you says, "Well, we just went and got every bit of data from Zoopla, then we went and got every bit of construction information," and clearly it's really, really hard to get that right. So congratulations on being able to pull that together, make some money from it, and also share your stories with us. How do you then validate it for the insurance companies? Again, it's an important question.
26:31 MC: Yes, that's a great question. The advantage is that every time you get a new set of data that gives you some information about a place where you already have somebody else's view, you're able to line them up. So insurer X thinks it's a four-bed, 1921-build building of such-and-such a size, but the bank thinks it's something else, the surveyor thinks it's something else, and the valuer thought it was something else again. As long as you line them all up and say, "Right, that's the truth as far as each of those parties is concerned," the risk teams can download the data and ask, "Okay, which one is most predictive?" When I was talking about the supermarket element, it's equivalent to going to Sainsbury's: I don't care which ketchup you buy, I'll line up all the ketchups and you decide which one you think is tasty.
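[Editor's note: the "lining up" Mark describes can be pictured as pivoting several sources' records for the same property so one attribute can be compared across them. The sources, field names and values below are invented for illustration and are not WhenFresh's actual schema:]

```python
# Hypothetical "data supermarket" comparison: several sources' views of
# the same property, lined up per attribute so the buyer can judge which
# source proves most predictive for their risk models.

property_views = {
    "insurer":  {"bedrooms": 4, "year_built": 1921},
    "bank":     {"bedrooms": 3, "year_built": 1921},
    "surveyor": {"bedrooms": 4, "year_built": 1925},
}

def line_up(views, attribute):
    """Each source's answer for one attribute, side by side."""
    return {source: record.get(attribute) for source, record in views.items()}

print(line_up(property_views, "bedrooms"))
# {'insurer': 4, 'bank': 3, 'surveyor': 4}
```

The point of the supermarket analogy is that no source is declared "the truth"; each party's view is preserved as-is, and the buyer's risk team decides which column correlates best with their own loss experience.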
27:13 MG: Aren't you getting caught up in this POC problem, with people wanting to spend forever validating it?
27:20 MC: We're releasing a piece of software to enable the buy-side, that will be you guys, to throw your data on and get a report that only you can see. We can't see it, but it will show you the bits that are meaningful for you. So you could say, "Here are the places where I suffered a loss, and this is the loss that I had," and it calculates, "Okay, the data that would have been predictive for you is this, this and this." But we don't get to see the recipe; that's your IP.
27:44 MG: Okay, well before we hand over to questions, I do have one question that I think many people in this room are wondering about. So you started off in the music industry, did some pretty cool stuff there and then you decided to go and work in insurance.
27:58 MG: Does that mean maybe we've all made the cool choice and we're better off being in insurance?
28:01 MC: You've made a phenomenal choice. Yes, I feel like that story about John Major, who ran away from the circus to become an accountant. I feel the same pain sometimes. No, the challenge is that the music industry is full of flakes and I'm a data guy. So it was extremely difficult to make it work.
28:16 MG: Fantastic. Unfortunately we've run out of time for questions, but I'm sure you'll be around at the break for anybody. Or if they want to find out more about WhenFresh, what's the best way to track you down?
28:26 MC: Whenfresh.com. Find me on LinkedIn, find David on LinkedIn and go play with the API. It's my, M-Y my.api.whenfresh.com. Go have a play.
28:33 MG: Fantastic.
28:33 MC: Thank you very, very much for listening, it's been a pleasure.
28:37 MG: Thanks Mark. Okay, well finally this half is an old friend of InsTech London, Nick Martin. Nick, I think, was in a bar somewhere with Paulo and Robin when they reported that the three of you were just starting up InsTech London. So Nick, thanks for joining us again. I'll let you introduce yourself, and you're going to talk to us a little about your experience in AI. But to kick it off, just tell us a little bit about who you are and your day job.
29:04 Nick Martin: Yes, thank you very much, Matthew. Good evening, everybody. So I'm Nick Martin. I run the global insurance strategy at Polar Capital, which has been going now for just over 20 years; I've done 17 of those. We've invested a reasonable amount of money, largely into the incumbent community, so I get a view from the executive boardroom when it comes to all the good stuff like Insurtech, innovation, disruption and the like. So it's always good to get an alternative point of view.
29:36 MG: Good. Well, those of us who spend most of our lives doing this find it hard enough to keep up with all those companies out there. How do you manage to track what's happening in AI in addition to what you're doing, running a successful investment fund?
29:46 NM: Yes, well, with some difficulty. It's events like this that really help maintain some of that knowledge. And I think Reza touched on the point earlier: sometimes AI is used as a sort of catch-all kind of word. A couple of years ago I actually spent a bit of time looking at what is AI, what is machine learning, what is deep learning and all that stuff. So there is actually an InsTech London podcast on AI, one of the first ones. I'm sure most of you are very familiar with all the buzzword terms, but if you're not, I suggest checking that out. I think one thing I did realize very early on is that AI is all about prediction.
30:24 NM: And of course, insurance is all about prediction as well. And I think over time the insurance product is going to move from a repair-and-replace kind of business model to one of predict and prevent, and AI obviously has a huge relevance on that prediction side of things. We heard earlier from Hamzah of Cytora. He mentioned the "Prediction Machines" book, and I certainly would recommend that. I was very fortunate to spend about an hour with Ajay, the author of that book, at the University of Toronto when I visited earlier in April. And that's one thing I would certainly recommend: look beyond what can be the goldfish bowl of insurance and try to take some learnings from other industries and books like that. "AI Superpowers" is another good one I'd recommend for trying to keep up with what is obviously a very fast-moving industry.
31:21 MG: And Nick, just on that point about the podcast. A bit like Mark and his colleagues with their back catalogue of music artists, you are kind of like the Elton John of InsTech London, because your podcast still gets downloaded about five times a week. So you are still the expert on AI, even though it was about three years ago. So thank you for that. Now, I know you can't name names, but as you look out across the insurance incumbents, what sort of range do you see between those that are actually effectively using their own data, where of course they've got a big competitive advantage, and those that have got lots of data but just can't figure out what to do with it?
31:54 NM: Yes, it's something I think about a lot, and I think it's often forgotten that, arguably, the insurers are the original data companies. They've got a lot of good stuff there. The question is, can they really use that legacy data set for some advantage or not? And I think a lot of it will come down to how fundamental data analytics has been to your business model over time, and whether that data is really accessible. There are a lot of companies now that offer services to get at that unstructured data for those legacy companies.
32:30 NM: So maybe there's a bit of catch-up that can happen there. But many of the leading companies that I talk to would argue, "Actually, you've already got as much data as you could possibly ever want, and the bigger challenge is how to put it to some good use." One question that I always like to ask of management teams is, "Which of these two options would you rather have? Option number one is an abundant data set worked on by a couple of newly qualified data science graduates; option number two is a much narrower data set worked on by a couple of AI professors from a world-leading university." And actually the answer today is really number one. So, you know, I think as was said earlier, the war for AI talent is probably not as important as it maybe used to be, and actually having that data in the first place is going to be a significant advantage.
33:23 MG: We've heard a lot of support for underwriters here tonight, and that the role of underwriters is not going to disappear. But the industry has got a cost problem, and ultimately a lot of that is about people. So where are you on that spectrum, from fully robotic underwriting to we still need human intervention?
33:40 NM: Yes, I mean, it's a big question everyone is talking about: are the days of the underwriter numbered? I would say no, not at all. And I would look at other industries that are affected by technology. Jobs change, and underwriters are going to be no different in that. AI is a very powerful tool to be added to the toolkit of any underwriter. There's talk, as was said earlier, of bionic underwriters or augmented underwriters, whatever term you want to use. But I think there is a very real possibility that underwriters move slightly up the complexity spectrum. AI will undoubtedly free up time to spend more with clients and develop new products, that kind of thing. And there are going to be parts of the underwriting market which will be fundamentally changed. SME is probably a good example of that, where, if you go back a few years, underwriters would underwrite lots of individual risks, and today it is a little bit more about portfolio management.
34:41 MG: Good, well that's a good answer, given that I think 50% of our audience tonight are underwriters. Hopefully we'll see them back again. You've talked a bit about "Moneyball" and its relevance to underwriting and insurance. What's that all about?
34:55 NM: Yes, this is a concept that I've been thinking a little bit about. So, "Moneyball" - I'm sure many of you are familiar with the book and the subsequent film. Just in case you are not, this was a US baseball general manager who took a whole group of what looked, on paper, to be very flawed players and put them together into a championship-winning team. Now the key to all of that was changing the unit of reference from the individual player to the results of the team overall. And I think you can draw a very strong parallel there. We've just touched on the SME side. Maybe you do get a lot of straight-through processing for particular risks, and underwriters may feel slightly uncomfortable with that, letting the machine do all the hard work. But actually the greater added value is going to be in managing those portfolios and considering those returns on an aggregate basis rather than individually underwriting every single risk.
35:56 MG: So I guess it comes full circle back to where we started. Part of your day job then is to really understand how well companies are adopting technology, not just with the individual star underwriter, but how they deploy it across the whole team. Good. And then finally, to wrap up: a theme tonight has been talent, and how attracting it is a struggle for insurance. It's good to hear Mark say that insurance is more exciting than the music industry. What do you think the insurance industry needs to do to keep bringing in people that are going to move the industry forward?
36:28 NM: I think the industry has done a reasonably good job of communicating the protection gap, the extent of underinsurance in the world, particularly around natural catastrophes. But I would argue there is also a communications gap out there. The industry has probably, over time, not presented itself to broader society in the way it should. It is actually an extremely important industry, particularly for people in times of trouble, whether that be caused by a natural catastrophe or whatever. One of my key challenges in talking to my own investors is trying to convince them that insurance is actually hugely important in what it does, and I think any initiatives out there need to really communicate those big wider goals. Insurtech is not a zero-sum game. It can be used to expand the overall industry pie and should be actively encouraged, and hopefully we'll hear more about Reza's initiative at another time. And fortunately, I think the industry itself is starting to realise this to some extent.
37:37 NM: A couple of years ago we saw the establishment of the Insurance Development Forum by a number of the leading companies. They had some great news not so long ago that the Secretariat is based here in London, which I think is very positive for the London insurance market. There is also a new London centre being developed around climate change and analytics. And I think some of those big industry initiatives could probably better communicate some of the problems that are out there, to attract top talent into the insurance industry. You're not going to get the top AI data scientists if all you can say to them is, "Please come in and cleanse my data." It needs to be bigger problems than that for them to address.
38:22 MG: Well Nick, we've just run out of time, but that was really helpful. Thank you very much. And for those of you that don't know or haven't come across Polar Capital, it's great to hear from somebody that actually puts their money where their mouth is, investing in real companies and doing it successfully. So Nick, thank you very much for joining us.
38:40 NM: Always a pleasure, thank you.
38:46 MG: Well, if you enjoyed that, then look out for our next episode, which is the second half of the evening, where Robin Merttens is joined on stage by five companies also talking about what they are doing with AI and algorithms. More details about this event, our past events and our future events, as well as our corporate membership program, are on our website at www.instech.london. And if you want to be sure of not missing out on registration when it opens for our future events, we do recommend you sign up for our newsletter on the website.