The King of Quants: A Conversation with Emanuel Derman
In his book My Life as a Quant: Reflections on Physics and Finance,
Emanuel Derman describes his journey from South African physicist to Wall Street financial engineer. An expert in two fields – theoretical physics and quantitative financial analysis – Derman fills the book with stories, anecdotes and insights into two worlds that are largely incomprehensible to the average person.
Derman is responsible for some of the pioneering work on modeling financial behavior: he is co-creator of the Black-Derman-Toy model, one of the first interest rate models calibrated to the market yield curve, as well as of the Derman-Kani local volatility model, one of the first effective models of the volatility smile.
Earning his doctorate in theoretical particle physics at Columbia University, Derman began his career on Wall Street in 1985 at Goldman Sachs, where he worked in the fixed income division. From 1990 to 2000, he headed the Quantitative Strategies group within Goldman's Equities division. In 2002, Derman retired from Wall Street and took a position at Columbia, where he is currently director of the financial engineering program.
TradingMarkets CEO and co-founder Larry Connors spoke with Emanuel Derman this spring about his work at Goldman Sachs, his early days modeling interest rates and volatility, the battle between the objectivity of mathematical models and "the world as it really works," and his thoughts on where the financial engineers of tomorrow are coming from (hint: most will need visas to work on Wall Street).
What follows is Part One of Three, all to be published as part of our Big Saturday Interview series.
Enjoy!
Connors: Let's talk about your primary responsibilities early on at Goldman. What were you hired to do?
Derman: I was hired into what was then called "Financial Strategies," which was like a fixed income research group.
And when I got there, I started working on building a model. The exotic product in those days was simpler than it is now: options on bonds, which were a bit like the stock options that the Black-Scholes model had been developed for.
Connors: Alike, but different?
Derman: Yes. They were more complicated because bonds pay coupons and they have yields. The stock price can do anything it likes for the next 50 years. But a bond is going to mature in 30 years and be worth $100 no matter what.
And so it was a harder problem. In a sense it should have been easier because you knew much more about the future behavior of bonds than you did about stocks. But in a way, it was harder mathematically because you had to incorporate all that knowledge into a consistent model.
Anyway, it was a trading desk. We were trading options on bonds and making markets in it and there was a lot of demand for those options because interest rates were coming down.
A lot of fixed income investors were getting less and less interest. What they wanted to do was to sell options on their bonds to collect premium. So they would sell us calls to get premium; and we would hedge those calls by buying listed options on the Chicago Exchange.
Connors: Yes.
Derman: But they didn’t match perfectly so you had to hedge the difference. So they needed a good model for doing that and that’s what I worked on.
Connors: That was basically the beginning of your financial engineering.
Derman: Yeah, it was. I did that alone at first, and then it got more important. Fischer Black was upstairs in equities, not in fixed income, but he was working on it too, and eventually we built a much more sophisticated model – he and I and a guy called Bill Toy – and everybody calls it "BDT," which stands for "Black-Derman-Toy." It was one of the early models to value bond options consistent with the current yield curve. It was, in a sense, an extension of Black-Scholes.
Connors: How did you know early on that you were correct? When was that “ah-ha!” moment where it’s like you really saw that you had something that no one else had?
Derman: That’s a good question. What everybody was doing before that was using Black-Scholes in a slightly hokey way to value … how much do you know about options?
Connors: I’ve got some pretty good knowledge of options.
Derman: Okay.
Well, so every time you value a call on a different stock, you just put in the stock price and the volatility. And so people did the same thing for bonds. They put in the bond price and the volatility. But you can’t really do that.
The trouble is that you want to model them out to maturity, but two years from today the five-year bond is going to be a three-year bond.
So there are all sorts of inconsistencies that developed from that fact. You know, if you value an option on IBM, it’s not gonna be Sun Microsystems tomorrow, you know? But if you value an option on a five-year bond, it’s gonna be a three-year bond tomorrow.
Connors: Of course.
Derman: So, in order not to produce nonsense and arbitrage violations, the model has to know what five-year rates are, what four-year rates are, what three-year rates are, and all their volatilities, so that it doesn’t produce conflicts between two different values. The model we built was able to value all bonds in a consistent way.
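The idea Derman describes – a model that takes today's whole yield curve as an input and fits it exactly – can be sketched as a toy calibration. The code below is not the actual Black-Derman-Toy implementation; it is a minimal BDT-style lognormal short-rate tree in Python, with all function names, the bisection solver, and the flat test curve my own illustrative choices. At each time step, the lowest short rate is solved so the tree reprices the market's zero-coupon bonds via Arrow-Debreu state prices.

```python
import math

def calibrate_bdt_tree(zero_prices, sigma, dt=1.0):
    """Fit a lognormal (BDT-style) short-rate tree to market zero-coupon prices.

    zero_prices[t] : price today of $1 paid at time (t + 1) * dt
    sigma          : volatility of the log short rate (held constant here)
    Node j at step t carries the rate r_low[t] * exp(2 * sigma * sqrt(dt) * j).
    Returns the fitted lowest rate r_low for each time step.
    """
    q = [1.0]        # Arrow-Debreu prices: value today of $1 paid in each node
    r_lows = []
    for target in zero_prices:
        # Bisect for the step's lowest rate so the tree reprices this zero.
        lo, hi = 1e-9, 1.0
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            rates = [mid * math.exp(2 * sigma * math.sqrt(dt) * j)
                     for j in range(len(q))]
            price = sum(qj * math.exp(-r * dt) for qj, r in zip(q, rates))
            if price > target:
                lo = mid      # model price too high -> rates must rise
            else:
                hi = mid
        r_low = 0.5 * (lo + hi)
        r_lows.append(r_low)
        # Forward induction: push the Arrow-Debreu prices one step ahead,
        # splitting each node's value equally between up and down moves.
        rates = [r_low * math.exp(2 * sigma * math.sqrt(dt) * j)
                 for j in range(len(q))]
        q_next = [0.0] * (len(q) + 1)
        for j, (qj, r) in enumerate(zip(q, rates)):
            disc = math.exp(-r * dt)
            q_next[j] += 0.5 * qj * disc       # down move
            q_next[j + 1] += 0.5 * qj * disc   # up move
        q = q_next
    return r_lows
```

With sigma set to zero and a flat, continuously compounded 5% curve, every fitted rate comes back as 5% – the tree reproduces the input curve rather than predicting it, which is exactly the "consistency, not prediction" point Derman is making.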
Connors: And you were the first to come up with that.
Derman: Yeah, there were other people — there were two people — Ho and Lee — that did something similar around the same time. I think ours was a little more realistic in the sense that their approach allowed interest rates to go negative and ours didn’t.
But yeah, it was partly a point of view, in the sense that there had been very clever yield curve models before, by a guy called Vasicek, who’s pretty famous – he was part of KMV, which was bought by Moody’s.
He had done some very clever stuff before, but people looked at it through the wrong lens. Those guys thought they were modeling the future stochastic, or random, behavior of the yield curve. They weren’t interested in bond options; they were interested in trying to explain the future yield curve.
Connors: And that was not your project.
Derman: We were sort of working backwards. We were not interested in predicting the yield curve. We were interested in just being consistent with the yield curve and then valuing options. Am I sorta making sense?
Connors: It makes a lot of sense.
Derman: See, if you value a call on a stock, you want to make sure that the stock price is right. You don’t want to quibble about whether you’ve got the wrong or right stock price; that’s just a given. And in the same way, if you value a call on a bond, you want your model to start off by having the prices of all bonds right.
And that’s what we kind of did.
So it’s an inverse problem. Those guys were going forward, which is to start from some assumptions and calculate the yield curve. And we were going backwards, with the idea of: given the yield curve, what assumptions do you need to match it?
Connors: Interesting. What would you say goes into creating a great model like what you did back in the ’80s? What do you look for to create a “great model?”
Derman: I’ve changed my mind a lot over the years. In physics, if you want to explain something sort of superficial that you see in the world, it makes sense to drop down several levels and discover something very fancy, like quantum mechanics or Newton’s laws or general relativity – not discover, exactly; sort of have an act of inspiration, you know, about what the laws are – and then come back and apply them to it. You understand what I mean?
Connors: Yes.
Derman: I think in finance that doesn’t work as well. People do that, but the financial world is sort of much more changeable and nonpermanent and — what’s the word I want — ephemeral. So I think dropping down to a very low level to write something really fundamental, and then coming up again, doesn’t work very well, because you don’t know how things work at that low fundamental level.
And as for acts of inspiration about what the fundamental laws of finance are – well, you don’t get them right most of the time because the world is changing the whole time. Maybe there aren’t any fundamental laws.
So I think one thing I look for is models that don’t get you too far away from the things you can already see, that use as inputs things you can see.
Connors: Most of our readers are going to be equity readers, so what would you — what would those things be in the equities markets?
Derman: Okay. Implied volatilities of equity options?
Connors: Yes.
Derman: Equity index prices, interest rates, obviously – sort of those things. People build stochastic volatility models where volatility itself is an unknown, random quantity, which is correct. But when you drop down too low, you can write very pretty models without really knowing that much about how volatility varies; that’s the trouble with dropping down a level.
It makes sense to do it, but you can’t really get it right. I mean, you want two things: 1) You want consistency, which means no arbitrage violations. You don’t want to write models where two things that have the same cash flow sell for different prices.
Connors: Explain it. Explain further for the readers —
Derman: What I like to say is that in physics there are a whole bunch of laws, not many, but they explain almost everything. In finance, there’s only one law and it’s kind of a trivial one.
People say it very fancily by calling it the “law of one price.” But what it really says is: if you want to know the fair price of something illiquid — something whose value the market doesn’t tell you — your best guess is to find a portfolio of other securities that behaves as closely as possible like the thing you’re interested in.
They should be listed and have liquid prices.
And so what all the successful models do is say: here is a bunch of stocks and bonds and maybe a couple of other things that when put together, have exactly the same behavior as this weird exotic option that you’re trying to price. And if it has the same behavior under all future circumstances then it’d better have the same price today. All that modeling consists of is: 1) specifying what you mean by all future circumstances, and then, 2) showing that the portfolio you’ve created out of liquid things whose prices you know, behaves pretty much in the future like the illiquid thing whose specification you know.
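The replication argument Derman lays out can be made concrete in the simplest possible setting: a one-step binomial world. Find the mix of stock and riskless bond that matches the claim's payoff in every future state, and the claim must cost what that portfolio costs today. This is an illustrative sketch, not anyone's production model; the numbers, the two-state setup, and the function name are all my own assumptions.

```python
def replicating_price(s0, up, down, r, payoff):
    """Price a claim by replication in a one-step binomial model.

    s0       : stock price today
    up, down : gross returns in the two future states (e.g. 1.2 and 0.8)
    r        : one-period riskless interest rate
    payoff   : function mapping terminal stock price to the claim's payoff
    Returns (shares held, bond position today, fair price today).
    """
    s_up, s_down = s0 * up, s0 * down
    v_up, v_down = payoff(s_up), payoff(s_down)
    # Choose holdings so the portfolio matches the claim in BOTH future states.
    shares = (v_up - v_down) / (s_up - s_down)
    bond = (v_up - shares * s_up) / (1 + r)   # riskless position, valued today
    # Same payoff in all future circumstances -> same price today.
    return shares, bond, shares * s0 + bond

# A call struck at 100 on a 100 stock: the replicating portfolio is long
# half a share and short a bond, and its cost is the call's only
# arbitrage-free price.
shares, bond, price = replicating_price(
    100, 1.2, 0.8, 0.05, lambda s: max(s - 100.0, 0.0))
```

The design point is Derman's: nothing here predicts where the stock goes; the model only specifies the set of future scenarios (two, in this toy case) and demands the liquid portfolio match the illiquid claim across all of them.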
Am I making sense?
Connors: Yes. It sounds like volatility encompasses a great deal of that assumptive behavior.
Derman: Yes, that’s exactly right because what you have to do is specify all the scenarios which could happen in the future, which is related to the volatility.
Connors: Why implied volatility instead of historical volatility?
Derman: That’s a good question too. None of these models are perfect; they’re all missing things. When you build Black-Scholes, you assume stocks are infinitely liquid, that you can short stocks, that there are no transaction costs, that stock prices never jump. None of those things are strictly true. All of these prices are subject to supply and demand in ways that go a little beyond the model.
So in the end, people take the model as a sort of platonic thing and then force it to fit the data and the number that comes out is implied volatility. So it’s sort of implied in the sense it’s saying, “If the market price is right, then this is what the future volatility would have to be.”
Connors: Right, right.
Derman: Then they use the model for hedging or for valuing other things.
It’s a sort of weakness of all financial models: they don’t predict the world, they are forced to fit the world by adjusting some of the parameters.
Connors: So what happens? Now we’re at the point of creating the model and so we have a model versus what is the real world, meaning that we can’t accurately or fully predict …
Derman: Yeah. So what do you do?
Connors: So what do you do — yes.
Derman: Well, I think what you do if you’re sensible is you pick out the salient features that are important to the thing you’re trying to calculate. So for example, if it’s an equity option the most important thing is volatility.
Connors: Yes.
Derman: And you assume interest rates aren’t going to change and dividend yields aren’t going to change, or a bunch of things like that, and you keep to the biggest thing — the thing that’s most important — and try to model that. And then, if you’ve got the energy and the ability, and it becomes important in that market spreads become tighter, then you start trying to add small corrections for the other facts of life that you’ve ignored. So for example, if you live in Brazil and interest rates are very volatile, you might start to take account of that.
Connors: Yes. Yes.
Derman: Or if dividend yields are changing all the time, you might take account of that. Or if you’re in a period where stock prices have some possibility of really, you know, crashing, you might make corrections for that.
But it’s hard to build a perfect model, so I think what you always do is try to model the dominant feature and then add corrections for less important things.
Connors: So you’re on a go-forward basis. The model is being adjusted to reflect new knowledge that you gain between the time you created the model and the time you’re making those adjustments?
Derman: Yes, for sure. And let me give you an example: that Black-Derman-Toy model we built. It’s not bad for pricing options on bonds, but it has a constant volatility for interest rates and only a single random factor driving them. Sometimes, though, people are interested in options on the slope of the yield curve.
It’s a bad model for that because – it’s not a bad model, but it’s a limited model for that because there isn’t a second random variable that represents the slope of the yield curve, and so if you’re writing options or making a market in options on the ten-year yield minus the two-year yield, that wouldn’t be a good model to use because it’s missing that. That’s not so important for bond options, but it’s important if the underlying thing you’re looking at is actually the ten-year minus the two-year.
So you’ve got to be a little bit pragmatic. You’ve got to take your model really seriously and build a consistent model that captures the biggest effect, and maybe some of the smaller effects, that affect the value of the thing you’re pricing.
You want to be very mathematically consistent, but then at the end you want to step back and say, “Is this the way the world really works?”
And it’s not, you know? So you’ve got to allow for the fact that you’re not discovering general relativity. You’re just trying to engineer some solution that’s approximate, that’s reasonably okay in some sort of range.
Be sure to catch Part II of “The King of Quants: A Conversation with Emanuel Derman” next weekend, available Friday evening, May 16.
Correction: An earlier version of this interview incorrectly suggested that Mr. Derman had received his Ph.D. from the University of South Africa. We regret the error.