Syril Smith and Adam Salvatori
BLACKROCK STORY

THE FOURTH INDUSTRIAL REVOLUTION

Never Done podcast
Episode description:

Syril Smith, Head of Product for Alpha, Research and AI platforms, and Adam Salvatori, Head of AI, Alpha, and Digital Assets Engineering, share more about the combination of humans and machines, the augmentation experience through AI, upskilling and reskilling, as well as the accessibility of new technologies.


READ THE FULL TRANSCRIPT

[SYRIL SMITH]

What do you think is a fad? What would you not bet your money on in this fourth industrial revolution?

[ADAM SALVATORI]

Yes, that is a spicy question.

[SYRIL SMITH]

What is the fourth industrial revolution? And why is it so different?

[ADAM SALVATORI]

There's not really one piece of technology you can point to and say, this is what's causing the fourth industrial revolution. That's important because as new technologies go through what's called the Gartner hype cycle, everyone's usually talking about whatever sits at the peak of the hype cycle at any one point in time. But I think it's important to note that this is being driven by what I would call, in previous research, the innovation drivers, or these things that are transcending every industry. One of them, of course, is AI. But there are others; you can imagine none of this would be possible without hyperscale cloud technology. Things like blockchain, for example, would be another driver.

And how do we know it's not a fad? To answer your question, one thing I distinctly remember from computer science class to this day is being taught what the Turing test was. Alan Turing wrote a famous paper back in 1950 about how you tell the difference between machine-generated content and human-generated content.

And the fact that we're having that conversation now-- there's still debate on whether current generative AI technology passes that test-- but the fact we're having that debate 73 years later, when that was the moonshot, is, to me, an indication that there's something real going on here, and it's not just all hype.

[SYRIL SMITH]

Yeah. And if we look at the last just year, how this has changed-- you and I speak all the time about how-- let's think about November last year to where we are now, August 2023. The number of people that have really started to use AI in their day-to-day, that have accessed AI in some sort of way or capacity, is wildly different than where it was.

So, we're thinking about this not as a one-time fad, but really this trend and this change in this fourth industrial revolution. Is this the iPhone moment? What is an iPhone moment? And what does that really mean?

It wasn't really the iPhone itself that was the transformation. It took one or two years, and then we got the App Store. Now we have all of these other apps and capabilities that sit within our hand or in our pocket. How do you think about AI within that context, and about the moment that we're in, now with this big explosion, when it's the number one topic everyone wants to talk about? Is this an iPhone moment to you?

[ADAM SALVATORI]

Short answer, yes, I would say. And there's a Mark Twain saying that history never repeats itself, but it often rhymes, which I think is an indicator of the moment we're in, because we've seen this a little bit before, with the internet and the dotcom boom. The late '90s and early 2000s really changed things-- it was a huge technology paradigm shift. We saw it with the iPhone, to your earlier point, when that came out.

I think this is that moment where it's that big of a technology shift, meaning when the iPhone came out, all the user experience basically moved to touchscreen. And that became table stakes. Now I think the moment we're in, it's hard to predict what's coming as much as everyone is trying to right now, because it wasn't until one or two years later, I believe, after the iPhone, that the App Store came out. And then the App Store was the huge unlock, really, of the iPhone, in some ways. So, I think we're still in those early stages of the iPhone just came out, but we don't know what's the App Store going to be here.

[SYRIL SMITH]

Yeah. I think about it too as like electricity. Everything got electricity. You have your toothbrush. You have your tea kettle. You have everything that got electricity. Some of those things worked and stayed. Some didn't. Now you and I were talking, Alexa is everywhere. We have Alexa in my oven, in my refrigerator, in my car. We have to find the right moments for that technology as well. I think of AI as that. We're in that moment of AI where we're going to get all of these applications. Some of them are really cool. Some of them maybe make sense. Some don't. I think I'm really excited to see where that's going to go and what ultimately ends up sticking in a year or two from now, that becomes that real technology.

So, we just spoke about this iPhone moment. We talked about this impact that AI is going to have in transforming our society. Why now?

[ADAM SALVATORI]

It's interesting because AI's been around almost since the beginning of computer science. And I think the big unlock has been-- it's worth rewinding back to 2017. Google published this now-famous paper called “Attention is All You Need”. That was basically the origination of the GPT model, or generative pre-trained transformer model. So, that was the origin of the type of model we're seeing. When everyone says generative AI today, for the most part, that's the kind of model people are interacting with.

And what happened is, there were milestones from 2017 until today, but they were happening about one every two years. I think we hit this moment where the models had enough parameters, now billions of parameters or potentially even more; hyperscale compute, meaning we have some of the world's largest cloud supercomputers now, with tens of thousands of GPUs, or graphical processing units, powering this AI; and the data explosion.

All those things came together simultaneously. And these models really crossed the point where they became-- we talked about the Turing test a bit. There was almost this moment where they got large enough, trained on enough data, that they hit a tipping point, which people have felt since, really, November and December of 2022, when ChatGPT came out and became, I believe, one of the fastest-growing consumer applications ever.

[SYRIL SMITH]

I think from a product perspective, what I find so interesting about that is not just was the product ready, not just was the model ready, but they had to make a decision of when is it good enough to then release it to everyone. And then once they've released it to everyone, it was so impressive that it really caught on like wildfire.

I think had they done that at an earlier stage, this maybe wouldn't have been the case. But it was a matter of the model being ready and then making it accessible. And I think the combination of those two things ultimately drove this massive adoption. They have all of these later versions now, that are so much better. But we didn't need that to feel impressed when we opened it up back in November.

[ADAM SALVATORI]

Right. And I think the other insight I would give here is that in some ways you could describe these models, at least a large language model, slightly differently: as a next-word prediction engine. And it's just really, really good at it, based upon the preceding words. But all these foundational elements-- we talked a lot about the basics of computer science, of linear algebra and probabilities-- all these things still hold true. These models are just much better at understanding attention, or context, which is why they called the paper “Attention is All You Need”. They understand the context around sentences, paragraphs, and large amounts of text, more than previous models. And they're able to parallelize the compute and process the text faster than previous models that existed. But I would just say it's still all underpinned by foundational computer science at the end of the day.
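The "next-word prediction engine" idea can be made concrete with a toy sketch. This is not how a transformer works (a transformer conditions on the whole preceding context via attention, not just the last word), but a minimal bigram predictor, with illustrative names and data, shows the basic "predict the next word from the preceding words" loop:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count word -> next-word frequencies from a list of sentences."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent next word after `word`, or None if unseen."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = [
    "the model predicts the next word",
    "the model understands the context",
]
counts = train_bigram(corpus)
print(predict_next(counts, "the"))  # "model" follows "the" most often here
```

A large language model does the same kind of thing at vastly greater scale: instead of a frequency table keyed on one preceding word, it learns billions of parameters that score the next token given the entire context.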

[SYRIL SMITH]

So you and I've talked a lot about this idea of AI is coming in. It's now going to augment our lives. It's now going to be involved in everything we've done, AI now becoming part of your day-to-day process, now becoming part of your day-to-day work, helping you create PowerPoints, for example. How would you think about this augmentation era that we're in and this fear that now AI is going to be part of everything I do? And what does that mean?

[ADAM SALVATORI]

I think it's interesting because, from my perspective, we've been in this era for a while now. If everyone thinks about how many times they're already using a search engine or Googling something, that's already a form of augmenting our intelligence. So, I would say this trend has been playing out. And it's more about accelerating it.

And so, I think-- we discussed earlier how copilots have been in finance for many years. I think copilots have been in our daily life for many years. And it's really an acceleration on that. And as humans, we think very linearly. And I think we're in this moment of really exponential change, where at least for my career, it feels like five years of change are happening in about one year.

So, I don't know about you, but that's the moment we're all in. We're trying to grapple with, hey, the internet came out, the iPhone came out-- what's going to happen? And the reality is no one knows exactly what's going to happen in the next couple of years. We can have some future-state mental model. But I think the thing we all have to be prepared for is that there's certainly a seismic change happening at the moment.

[SYRIL SMITH]

Yeah. And I think that seismic change is something to be embraced. You're completely right. Having my phone on me at all times, being able to ask a quick question, or even pull up Google maps-- where am I going to go? Having that augmentation or view on top of my physical world is something that we're used to. Now it's a matter of understanding how AI can upskill and reskill. If I was going to use AI to reskill and upskill, where would you say I should spend my time as an engineer?

[ADAM SALVATORI]

So it's interesting. There's this notion of an AI engineer, or AI engineering, that's really becoming commonplace in the industry. And I would describe it as-- a few years back, it was about building the model from scratch. So, you were doing what's called fine tuning, actually creating the base model weights, or you were doing traditional quantitative modeling, building the Black-Scholes model from scratch with those typical quantitative modeling techniques.

What's now shifted is that you can get third-party models off the shelf. Those models are called pre-trained models, or foundational models. And that changes the game a little bit, from let me spend months or years potentially building a new model, to taking a pre-trained model, which by definition has, in some cases, millions or hundreds of millions of dollars of compute baked into it, because it has billions of parameters, or model weights, already predefined, trained on lots of-- in some cases, internet content or other things.

Now you do have to make them domain aware, to be clear. There are ways to do what's called prompt engineering or prompt tuning, where you're actually giving custom prompts and responses on your own data, and what's called retrieval-augmented generation, where you augment the models with your own data. But I think that changes the game a little bit. And that's why we call it AI engineering, because it's much more of an engineering challenge to get something to market, meaning how do you get these models into your cloud infrastructure? How do you feed them your custom data? How do you connect them to your search engine?

So, the other trend that we're observing is search. And these GPT models, generative pre-trained transformer models, are becoming one and the same thing in terms of user interaction. So, what I mean by that, if you look at Bing as a search engine, for example, they've baked in the ChatGPT-like functionality into the search engine because that's how you give it the live data.

So, without the live data, you can't ask the model, for example, what happened yesterday, because they're trained point in time. So, it's taking these pre-trained models, augmenting them with your own information, and then doing the engineering, user experience, and product design. It's a classic it-takes-a-village sort of moment.
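The retrieval-augmented pattern described above can be sketched in a few lines. This is a deliberately naive illustration, not any product's implementation: `retrieve` ranks documents by keyword overlap (a real system would use a search engine or vector index), and `call_llm` is a hypothetical stand-in for an actual model call:

```python
def retrieve(query, documents, k=1):
    """Rank documents by naive keyword overlap with the query; return top k."""
    q = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Augment the model's prompt with retrieved, up-to-date context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\n"
    )

docs = [
    "Markets closed higher yesterday after the earnings report.",
    "The recipe calls for two eggs.",
]
prompt = build_prompt("What happened in markets yesterday?", docs)
print(prompt)
# The prompt now carries the live data the pre-trained model lacks; it would
# then be sent to the model, e.g. answer = call_llm(prompt)  (hypothetical)
```

The point is the division of labor: the pre-trained model stays frozen at its training cutoff, and retrieval supplies the current, domain-specific information at query time.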

[SYRIL SMITH]

That's exactly what I was going to say: it gives us the opportunity to spend time on the next-order problem. How do we really solve problems for our users? I think that's a really important mindset going into building products that have AI in them, focusing more on the outcomes that we're driving. That's always been a true principle of product development: what are we able to achieve? But using AI with that same lens as well, our team, our AI engineers, are now focusing on those problems too-- how do you apply it, if you will.

[ADAM SALVATORI]

Yeah, and I would emphasize that it takes a village to do any kind of AI engineering today. It takes network engineers, security engineers. It takes legal compliance officers. It takes cloud engineers. It takes many, many specialties, machine learning engineers, of course AI researchers, product managers, user experience--

I think some of these white papers that have come out on these large models, if you scroll to the back of them, it's almost like a movie credits, where you have all the teams involved. And I think that people would be surprised to see how many actual engineering teams are involved in this technology.

[SYRIL SMITH]

I think that's been the fun part about what we're doing.

[ADAM SALVATORI]

Engineering and product design, not just engineering.

[SYRIL SMITH]

Yeah. For me, that's what's been fun about the work that we're doing here, within Aladdin: having all of these people driving towards the same mission and working together, really being able to partner across the entire end-to-end ecosystem. You spoke about horizontals before. I think that's really where this comes to life, and nothing forces that more than AI.

[ADAM SALVATORI]

I agree.

[SYRIL SMITH]

We've talked a lot about this idea of being able to look at the end-to-end workflow, being able to understand how all the pieces fit together, almost the ability to really understand how someone is in their day to day. That means that all of these technologies also have to be connected. We can't have jumps between each of them. We need to bring all of these things together into one ecosystem. How do you start to think about that?

[ADAM SALVATORI]

We discussed a lot of these emerging technologies. I think we're spending a lot of time on how they interconnect with one another. One example, still an emerging field in very-early-stage R&D, is called quantum AI: how do you use quantum computers or quantum techniques for classical machine learning or AI algorithms? So, it's in some ways a classic computer-science challenge being explored there. But I think it's another example of not thinking about these technologies in isolation, and asking how they interconnect.

[SYRIL SMITH]

Yeah, it's your Lego-block idea, of there are some Lego blocks we'll build, some we're going to buy. But you're right that they each keep building on each other. I think you teed this up before, which is the third industrial revolution. The fourth is just building on that, right? So, in that same way, I think we'll continue to see that build and build and build within this fourth--

[ADAM SALVATORI]

Yeah, for example, you could have some parts that are running in the cloud. Or in the future, it's not unforeseeable that that would happen. So, exactly. I think it's the interconnectedness, almost a network effect going on as well. And that's really one of the key tenets of what defines the fourth industrial revolution. It's not any one technology. It's many of them, simultaneously landing all together.

[SYRIL SMITH]

So, Adam, we look at a year from right now. You and I are going to sit down and have this conversation again. What are the things that you expect-- or maybe two years from now. What are the things that you expect to change or be different?

[ADAM SALVATORI]

Yeah, so I think we discussed a little bit about how we're in this era of exponential change. As humans, we think very linearly. So, in some ways, it's very hard for us to get that future-state mental model right now. I think that's why everyone's so excited about-- everyone feels it every day. There's lots of change going on. And it's almost hard to, in some ways, look too far out.

But I would say the couple trends we're watching, beyond the ones we mentioned around cloud and AI and digital assets, is also quantum is another area where lots of change is happening. There are some milestones that have been hit, that people thought were going to take many more years than they have.

Although quantum's always been this thing, I think, that's been on the horizon, I think it's becoming much more front and center. I think there's the one prominent-use case, which is what are the implications to security and cryptography algorithms themselves, in terms of how you quantum-proof those? I think that's probably the most top of mind.

But then there's, as it relates to finance, how these models could be used for classical financial modeling, things like calculating Black-Scholes or Monte Carlo-based simulation models. And I think that's an area still being researched. But again, it's hard to put your finger on where that's going to head, because everything's changing so fast. So, whether that's a few years or one year or five years, it's almost anyone's guess in some ways.
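For context on the classical side of what's being discussed, here is a minimal sketch of Monte Carlo pricing of a European call under Black-Scholes dynamics; the parameter values are illustrative. Quantum-finance research (for example, amplitude estimation) aims to speed up exactly this kind of sampling:

```python
import math
import random

def monte_carlo_call_price(spot, strike, rate, vol, maturity,
                           n_paths=100_000, seed=42):
    """Classical Monte Carlo price of a European call option.

    Simulates terminal prices S_T = S0 * exp((r - vol^2/2)*T + vol*sqrt(T)*Z)
    with Z ~ N(0, 1), then discounts the average payoff max(S_T - K, 0)."""
    rng = random.Random(seed)
    drift = (rate - 0.5 * vol ** 2) * maturity
    diffusion = vol * math.sqrt(maturity)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        s_t = spot * math.exp(drift + diffusion * z)
        total += max(s_t - strike, 0.0)
    return math.exp(-rate * maturity) * total / n_paths

# At-the-money call: spot 100, strike 100, 5% rate, 20% vol, 1 year.
price = monte_carlo_call_price(spot=100, strike=100, rate=0.05,
                               vol=0.2, maturity=1.0)
print(f"{price:.2f}")
```

The standard error shrinks only as one over the square root of the number of paths, which is why simulation-heavy risk models are compute-hungry and why faster sampling is an attractive research target.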

[SYRIL SMITH]

Ok, so then maybe the inverse of that, a little bit of a spicy question for you. What do you think is a fad? What would you not bet your money on in this fourth industrial revolution?

[ADAM SALVATORI]

Yes, that is a spicy question. I would almost look at it as not having an interconnected strategy would be-- so that's not picking a technology, which I know you might be getting at.

[SYRIL SMITH]

No.

[ADAM SALVATORI]

But I almost think the thing that would be more concerning, if that's the right word, would be not having a strategy that takes into account the fact that we're headed into the cross-domain knowledge era, as well as the fact that this is many technologies converging all at the same time. And so not having a horizontal strategy, I think, would probably be the biggest miss, in my view.

[SYRIL SMITH]

Yeah, the one thing that came to mind for me, as I think about that question, is-- maybe two, actually-- one, the applicability of AI. It's going to infuse everything very quickly. It'll become saturated. And I don't think all of those use cases are going to be relevant or pertinent or are going to make sense for how we live our lives.

And so I think there'll be a period of time where we're going to feel inundated and overwhelmed with all the different use cases. But eventually, I think those are going to start to shake out. So, it's not one specific technology either. It's really that there are going to be all of these different applications, dotcom-boom-esque, where you're going to get a bunch of different tools that are not going to make sense.

The other thing that I think is really interesting, and you and I've talked about quite a bit, is this idea of the open-source models and the accessibility of models. In a year from now or two years from now, do these models all start to look similar? How do you start to differentiate these models? They're now so big that thinking about finding new models, new use cases, is something that's interesting to me as well.

[ADAM SALVATORI]

Right. I think that's a super-interesting lens that you applied there. It gave me a thought: there's a bit, also, of an iOS-versus-Android moment, which we're seeing in the market, as there's lots of open-source large language models, or just large models in general. And it's to be determined how that will shake out, in terms of how much this future world will be built around closed-source models versus open-source models. So, I think that's something everyone should certainly pay attention to. But again, we talked before about studying history; at least, that's the way I usually try to learn from the past, since we've seen these things play out a little bit before. So, how do we learn lessons from analogous periods?

[SYRIL SMITH]

Yeah, and those analogous periods are now truncated, because of that growth in the rate of change you talked about. Going from having that period be 5 or 10 years to now being 1 or 2 years means really paying attention, because it's going to happen. We have to make decisions quickly. And I think that's the other thing I find really interesting here: you have to figure it out. Okay, do we do open source or do we do closed source? Do we build our own model? Who are the right partners? What is the right ecosystem? We talked about systems a lot here. What is the right system to surround yourself with? We don't have the luxury of time to wait and see. You have to make educated decisions on what you think makes the most sense.

[ADAM SALVATORI]

Right. I think what we're saying is that the opportunity cost has never been higher than at any point in history. And maybe that's a point to get your view on: this buy-versus-build strategy. We're in this world of lots of change. You can't be doing everything yourself. So how do we think about it? How do we use all the open source? How do we use partners? How do we partner with the ecosystem so we can focus on our real differentiators, and use the utility functions in a cloud or elsewhere?

[SYRIL SMITH]

Yeah. I'm going to bring it back to this idea of augmentation, because I think that's exactly what it is, in my mind. We're finding partners that are able to augment us or build functionality, partners that are fully focused on a very specific capability or piece of technology that we aren't the experts in, and probably shouldn't be owning or investing in. But they are. So, bringing them into our ecosystem to augment the talent that we have here enables our talent to focus on those higher-order problems that are specific to the technology that we own, that we build, that we lead, especially within the finance space. I think that's been our model. We don't need to solve the same problem that somebody next door to us has already solved, and has probably solved better than we could, because it's 100% of their focus.

Our focus is, how do we enable our investment teams? How do we enable our clients to generate more and more financial well-being at the end of the day?