José Martínez

Breaking the rules to reimagine computer architecture

According to José Martínez, all successful research must cycle through two phases: There’s a disruption phase, where researchers suddenly crack a problem open by thinking differently about it. Then there’s an incremental improvement phase, where researchers progressively squeeze more and more juice from that disruptive discovery. “Eventually, however, the cycle must repeat itself, or you end up with a dry lemon,” says Martínez, an associate professor in the School of Electrical and Computer Engineering at Cornell University.

“I think it’s critical to realize that disruption and incremental improvement are both absolutely necessary for progress, and that as a researcher one should be willing to embrace both. Building upon prior art allows you to make incremental progress, which is a very necessary part of the process, but at the same time, you may be missing the ability to make a bigger difference by fundamentally challenging your assumptions,” says Martínez. “So while the majority of a researcher’s effort is probably on the incremental phase, one has to keep looking for an opportunity for disruption and then seize it.”

With his focus on computer architecture and, in particular, microprocessor architecture, Martínez designs the internals of microprocessor chips to execute user programs as fast and as energy-efficiently as possible. “Computer architecture is a lot about deciding which instructions to process and how to process those instructions at each point in time,” he says. “Traditionally, computer architecture research is very intuition- and experience-driven—really a bit of a craft, and I find that very exciting. However, for the past several years, I’ve felt that this approach hasn’t made the best use of more formal, methodological approaches that other engineering disciplines have embraced once their problems reached a certain level of complexity.”

Over the past 30 years, microprocessor chips have grown from tens of thousands of transistors to billions. The sheer number of components alone makes the design problems immensely large. “What we see,” says Martínez, “is that over the past three decades, computer architects have essentially used the same intuition- and experience-based approach to trying new things, tweaking this or that and then evaluating the idea using detailed simulations.”

Realizing that one of the main problems in computer architecture is now figuring out a way to make effective use of so many transistors, he questioned convention: Instead of allocating all silicon to the task at hand, why not devote some of the transistors to helping the rest of the chip get the job done in the best way possible? Martínez turned intuition on its head and looked elsewhere for more methodological answers.

“We approach computer architecture problems by inserting, right into the chip, technology that has a sound theoretical foundation and a proven track record elsewhere, such as machine learning,” says Martínez. “We have to be willing to put in the time and effort to look to other disciplines, to try to see our problem from their perspective, and to produce a solution that is workable on silicon. In the case of efficiently allocating on-chip hardware resources to not just a handful but potentially hundreds of tasks, for example, we decided to look to the way financial markets work.”

The concept of market-based mechanisms clearly works well in real life, and Martínez noticed that computer scientists at Cornell and elsewhere had successfully applied this idea to solving certain challenges in data centers. He concluded that a market mechanism could enable the many tasks running on a many-core microprocessor to grab the hardware resources they need both competitively and efficiently. This would be similar to customers buying into a market—fundamentally a supply-and-demand optimization problem.

“In the market of chips and transistors, tasks are bidding for multiple finite resources at the same time, be it for cache, for on-chip power, or something else,” says Martínez. “The idea is that, if you have a finite budget, by participating in the market you would eventually figure out what your best bet is based on the available supply and what other players want. And it turned out that, in the context of our problem, our experiments tell us that this really works quite well—not only by doing significantly better than other proposals out there, but by doing so for a much larger number of players, with very modest overhead.”
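The mechanics are easier to see in miniature. The sketch below is a hypothetical Python illustration of a proportional-share market, not the mechanism from Martínez’s own work: two made-up tasks with fixed budgets bid on two assumed resources (cache capacity and a power budget), and each resource is divided in proportion to the money bid on it.

```python
# Hypothetical sketch of market-based on-chip resource allocation.
# All task names, budgets, weights, and the proportional-share clearing
# rule are illustrative assumptions, not Martínez's published design.

RESOURCES = {"cache_mb": 16.0, "power_w": 40.0}  # assumed total supply


class Task:
    def __init__(self, name, budget, weights):
        self.name = name
        self.budget = budget    # finite "money" the task can spend
        self.weights = weights  # how strongly it values each resource

    def place_bids(self):
        # Split the budget across resources in proportion to preference.
        total = sum(self.weights.values())
        return {r: self.budget * w / total for r, w in self.weights.items()}


def clear_market(tasks):
    """Give each task a share of every resource proportional to its bid."""
    bids = {t.name: t.place_bids() for t in tasks}
    allocation = {t.name: {} for t in tasks}
    for resource, supply in RESOURCES.items():
        total_bid = sum(b[resource] for b in bids.values())
        for t in tasks:
            share = bids[t.name][resource] / total_bid if total_bid else 0.0
            allocation[t.name][resource] = share * supply
    return allocation


if __name__ == "__main__":
    tasks = [
        Task("streaming", budget=10, weights={"cache_mb": 1, "power_w": 3}),
        Task("database", budget=10, weights={"cache_mb": 4, "power_w": 1}),
    ]
    for name, alloc in clear_market(tasks).items():
        print(name, {r: round(v, 2) for r, v in alloc.items()})
```

Run as written, the cache-hungry task ends up with most of the cache and the power-hungry task with most of the power budget; the point is simply that supply and demand, rather than a fixed policy, decide the split.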

So, what does he think about the future of computer architecture? “The shift toward mobile computing has changed the landscape dramatically since I started in the field,” says Martínez. “Mobile devices have ceased to be small computers playing catch-up with your desktop. Today, mobile phones have turned into two-headed beasts that are not only amazingly capable as standalone computers, but at the same time have become a portal to incredibly powerful data centers without regard to physical location,” he explains. “This newfound duality presents a great opportunity for disruption: How can we integrate these two functions seamlessly and synergistically into a wonderful experience? Perhaps it’s worthwhile asking how current mobile and data center architectures should be rethought around this new paradigm.”