Tim: Yes, happy to be here.
Mary: You’re a Ph.D. economist who loves data science. What attracted you to EA?
Tim: EA created data science for commercial real estate and has been perfecting it for decades with the strongest CRE dataset available. CBRE’s vision of bringing data science to commercial real estate is what attracts me. Having been a major player for decades, we have troves of proprietary data. As I’ve said repeatedly, the algorithms are now open source and the computation is essentially free, so the data is the most important component in driving superior outcomes for clients.
Mary: When you consider CBRE’s proprietary data, what are you most optimistic about?
Tim: CBRE’s unparalleled granularity across the lifecycle over time. From brokerage and valuation to capital markets and workplace, we observe the market across the full spectrum. Our data, then, is wide, not long. By that I mean that in CRE the real gems don’t come from scouring billions of data points; they come from the opportunity to observe almost every relevant attribute over time. From that standpoint, I have great optimism about how our ETL process and algorithms deliver value today and into the future.
Mary: What does it take to get there?
Tim: A fully formed data strategy and system of governance. Under Bob and Chandra’s leadership, with EA’s seasoned expertise, CBRE is well on its way.
Mary: I read the paper you co-authored with Huy T. Vo on the emergence of Deep Learning in Finance. Your study found that deep learning doesn’t perform better than traditional models. Should we all be disappointed?
Tim: Well, let’s step back a second. Deep learning is the current term for the application of nonlinear statistical models. It isn’t new, but open-source tools and so-called clustered computing have made it much easier to apply, and it has been enormously successful in machine image recognition. That ease of application was the thrust of our paper, because there was a time when the traditional approaches were also difficult to apply. Deep learning didn’t work better in the two use cases we present. Personally, I’m neither surprised nor disappointed; this is the third wave of attempts to apply it in finance that I know of. Now, it may perform better for more sophisticated applications, but it comes with enormous computational costs, and it’s up to the finance community to make that trade-off. The magnitude of managed assets at this point is so large that even the smallest performance improvement is worth a lot.
Mary: What’s next on your research agenda?
Tim: With regard to this paper with Professor Vo? We already know what the response from the deep learning community is going to be: you didn’t tune your parameters properly. So we will have to address that before it moves much further. The data and tools are open source, though most people will be challenged to replicate the computing architecture that Professor Vo has.
Mary: And more generally at EA?
Tim: We’re working on new forecasting tools that put clients in the driver’s seat, and we’re developing an app that reveals data layers never accessible before...
Mary: What’s your advice for us technophiles at CBRE?
Tim: At the core, machine learning is a tool, like a hammer and nails. And like any tool, it needs to be properly applied. Our goal is to use it to empower CBRE professionals and our clients. This isn’t Brave New World or the Terminator. It’s just making us better at our jobs.