AMD Lisa Su: On Competing with NVIDIA and Intel

Lisa Su, CEO of AMD, is unquestionably one of the most prominent figures in the chip industry, and with the advent of the AI era, her reputation has reached new heights.

Looking back at her path: she spent eight and a half years at MIT earning bachelor's, master's, and doctoral degrees in electrical engineering, with a Ph.D. focused on silicon-on-insulator technology, and began her career at Texas Instruments. She then spent 12 years at IBM, where she led the development of copper interconnects for semiconductors, headed the team that developed the Cell microprocessor used in the PlayStation 3, and served as technical assistant to CEO Lou Gerstner.

After a period as the Chief Technology Officer at Freescale Semiconductor, Su joined AMD in 2012 and was promoted to CEO in 2014.

In her ten years leading AMD, Lisa Su has achieved remarkable success. After decades of trailing Intel, AMD has built the world's best x86 chips and has steadily taken significant data center market share from Intel. Beyond its traditional PC and graphics businesses, AMD is also a major player in console gaming. And with AMD now competing against Nvidia in data center GPUs, its GPU business is increasingly in the spotlight.

Su was recently interviewed by Stratechery, where she talked about her career path and the lessons learned along the way, then discussed why AMD has been so successful during her tenure. She also shared her views on how ChatGPT is changing the industry and how AMD is responding.

The following is the full text of the interview:

Q: I know you don’t want to talk too much about yourself, but I need some fact-checking. We just talked about how you were born in Taiwan, immigrated to the United States at a young age, and eventually attended MIT. It’s said that you were torn between computer science and electrical engineering, ultimately choosing electrical engineering because it was more challenging. Is that true?

Lisa Su: It is true. I was always involved in math and science, and my parents always said, “You have to do these difficult things.” When I went to MIT, I was deciding between electrical engineering and computer science. In computer science, you only needed to write software programs, while in electrical engineering, you had to build things. I wanted to build things.

Q: Your Ph.D. focused on silicon-on-insulator technology, and then you went to IBM, where you pioneered the use of copper interconnects on chips. Regarding your experience at IBM and what you might have learned, I have three questions. First, on copper interconnects: you've mentioned in interviews that after completing that technology development, you wanted to move on to a new chapter, but your boss asked you to stay. You felt that the lessons you accumulated in that period, when you thought you were already done, were some of the most impactful. What were those lessons?

Lisa Su: I learned a lot during my time at IBM; it was early in my career. When you go to school and get a Ph.D., you think the most exciting things are the research you do and the papers you write—we all write papers and such.

When you actually join a company and get involved in a project, these projects typically take years to complete. But the sexy part of it is the early stages when you’re coming up with new ideas.

One of the first products I helped develop was a microprocessor using copper interconnects, and what I learned is that the last 5% of getting a product out the door is probably the hardest part; that's where most of the secret sauce is. If you learn how to do that, frankly, that's it.

Q: All the software engineers are saying, “Hey, it’s the same for us. Didn’t you know that?”

Lisa Su: (Laughs) That may be true; that may be true. But we all have our own view of the “secret sauce.” When issues arise, it’s about yield, reliability. When you’re trying to produce millions of units rather than just five, you learn a lot, and I certainly did.

Yes, as a young researcher, you think, “Hey, I’m ready to start my next research,” and you realize how rewarding it is to see your product actually ship and hit the shelves. You can walk into a Best Buy and buy it—that’s very gratifying. These are the things I learned.

Q: Even today, how do you balance the time and energy you spend on the goals you're building toward against the actual execution and delivery of your commitments?

Lisa Su: Certainly, today I personally spend a lot of time looking ahead at the future roadmap and technology.

Q: This question is purely out of curiosity. How deeply do you need to get involved in things like this—not specifically you, but AMD as a whole? Given that you’re now a fabless company, how much do you need to be involved in the actual final mile? What is the interaction with TSMC, your packaging partners, or any other company like, and how do you actually improve yields?

Lisa Su: Even as a fabless company, a design company, we do indeed do this. We are involved in end-to-end development, so you can imagine that from day one of a product concept, even before that, we are thinking about which technologies will be ready and what the next big bet should be. That runs all the way through; sometimes it takes five years or even longer for a technology to really come to fruition. And we are involved in the final stages, ensuring that products are delivered with high quality, the right yield, the right cost structure, and in high volume.

So, it is truly end-to-end; the difference is that it is not all done by one company, which is more typical in a traditional integrated manufacturing model, but rather through partnerships. We find that it is actually very effective because you have experts from different areas working together.

Q: The second question I'm curious about from your IBM years: you were involved in developing the Cell processor for the PlayStation 3. The chip was a technical marvel, but the PlayStation 3 is considered the least successful PlayStation, and it led to a real shift in Sony's strategy from hardware differentiation to exclusive content. This question has two parts: first, what did you learn from that experience? And second, how did it affect you later? I'm particularly curious whether the gap between all the work you put into the Cell processor and its actual market performance yielded any management insights.

Lisa Su: Yes, it’s interesting you mention that. I’ve been involved in PlayStation development for a long time, and if you think about it, PlayStation 3, 4, 5…

Q: It’s like a thread throughout your career.

Lisa Su: Yes, across multiple companies. To be frank, those decisions were more about architectural choices. From that perspective, whether it was the PlayStation consoles or other collaborative projects we did (we're AMD now, but at the time it was similar for IBM), it really came down to close collaboration around what the customer or partner was trying to achieve.

At the time, the Cell processor was ambitious, considering the type of parallelism it was trying to achieve. Again, from a business perspective, it was definitely successful. When you rank things, history might tell you there are different rankings.

Q: My point is, the console era has gone through phases. In the PlayStation 1 and PlayStation 2 era, Sony made smart hardware decisions that differentiated its approach from Nintendo's. But once you moved to HD, content creation costs skyrocketed, developers needed to support multiple platforms, and game engines emerged. Suddenly, no one wanted the burden of differentiating on the Cell; they just wanted their games to run.

Lisa Su: Maybe someone would say, if you look back, programmability was so important.

To achieve real commercial success, you must consider both hardware and software from day one. Looking over the past decade or so, one thing I am very proud of is the work AMD has done on PlayStation 4 and PlayStation 5, where we have made new leaps in hardware.

And they are backward compatible with previous generations, which helps a lot.

Q: The third question about IBM is, you served as Lou Gerstner’s technical assistant for a year. What did you learn from him?

Lisa Su: (Laughs) You’ve done your homework, haven’t you? The year with Lou was one of the most educational experiences of my career. IBM is a very strong company in terms of talent development, so they identify talent early in their careers and ask them, “Hey, what kind of experience do you want?”

In my case, they asked if I wanted to go down the technical route or more of a management route; the terminology is IBM Fellow or IBM Vice President. Honestly, I didn’t think I was smart enough to be an IBM Fellow like Bob Dennard.

There were a lot of great people there, so I thought, "Okay, let me try management and the business side of things." They gave me the opportunity to spend a year with Lou, and he was amazing. If you think about it, for someone who had been out of school for five years and had only done pure engineering, it was essentially the best MBA in the world.

What was most interesting to me was really understanding where he spent his time. He was always spending time learning, very focused on external things, understanding market dynamics, understanding customer dynamics. How does this change your strategy? How does this change the way you guide your leadership team?

Q: One thing I’ve always admired about Lou Gerstner, as you said, is that he didn’t just observe the market from the outside and understand what was happening. He truly understood IBM, its inner capabilities, and unique differentiations. Essentially, my point is, IBM was big, and what did that actually mean? What kind of impact could you bring in some way? The whole middleware revolution was about showing that we could solve this internet problem for companies older and bigger than us, which was a differentiator. But then, obviously, everything fell apart. IBM should have done cloud computing. Lou actually wrote in his book that he didn’t know this in hindsight. If you succeeded him, could you have taken IBM to greater heights?

Lisa Su: I don’t know if I would have gone down that path. I was a semiconductor person, and I am still a semiconductor person. If you think about it seriously, IBM was a great career for me, but if I wanted to continue being a semiconductor person, I had to go to a semiconductor company. So, I went to Freescale (a semiconductor company) and took on more business roles.

Q: Did you personally acknowledge, “Okay, now I’m a businessperson”? Or did you choose this path and just go in that direction?

Lisa Su: I’ve always straddled both technical and business roles. At Freescale, I started as the Chief Technology Officer. I joined as CTO, and a few years later, I ended up running the networking and multimedia business. That was definitely a choice, and that choice ultimately was that I wanted to drive results, and driving results requires, yes, great technology, but you also need to have the right business strategy.

Q: Is that what limits many technologists? Do they underestimate all the factors that drive results but aren't related to the technology?

Lisa Su: I think it’s something technologists have to learn. By the way, there are many outstanding CTOs who really understand this. My current CTO, Mark Papermaster, was my partner at IBM, we grew up together, and later we became partners at AMD. He truly understands that great technology is wonderful, but you also need to drive business results. This is why I love what I do because, yes, I can integrate great technology with an outstanding team, but there’s also the opportunity to drive very significant business outcomes.

Q: Let’s talk about AMD. I previously mentioned the console strategy, which was a major shift in focus after you joined. Was the idea at that time like, “Look, this is an easy win, high volume, and we can get back into the game”? What was the thinking back then?

Lisa Su: Well, I would never say anything is an easy win.

First of all, I want to mention that when I first joined AMD, over 90% of our business was in the PC market. By the way, I really love the PC market, and I'm sure we'll talk about it more. But let me remind everyone: the PC market is cyclical, and the cycles can be very intense.

They can be very dramatic. So, from a business strategy perspective, it was very important in those early days for AMD to diversify and build a strategy with high-performance computing as its fundamental principle. We are a computing company, good at building computing capability, so which markets could really use that capability? Gaming was one of them, and we were very fortunate that both leading console manufacturers, Sony and Microsoft, chose us.

Q: Who drove the shift to x86 in consoles? How much did Sony learn from Cell? Did you approach them saying, “Look, this is the way to go”? How did the general architecture evolve?

Lisa Su: Yes, I think it was a series of choices, including the choice between x86 and other architectures. If you think about software development and the developer ecosystem around x86, that's a very critical part, but I don't know that the architecture by itself is enough. The incredible graphics capability, especially if you want customized graphics, is something very few companies can deliver, and AMD is one of them.

Q: How integrated were the CPUs and GPUs you provided? AMD acquired ATI in 2006. So my question is, before you joined AMD, were there other companies that could truly offer what you did for consoles?

Lisa Su: I think we were able to achieve this for two reasons. First, we had the foundational IP, which is the combination of what we call the CPU or microprocessor core with the graphics IP capability, and we were willing to customize. Frankly, we had large teams dedicated to these projects, working on customization.

Q: Do you see this as a pattern: initially, everything revolves around cutting-edge technology to achieve the best performance, but as it (I don’t want to say slows down, but as functionality becomes commoditized), customization becomes more important? For example, you acquired Xilinx.

Lisa Su: I think the best approach is to have a few principles. First, the fact is that the world needs more semiconductors; semiconductors and chips are now the foundation of much of what we do. Many of the things we build are what we call standard products that fit broad use cases. But there are high-volume applications, like consoles, like some of the work now done in the cloud, and, I believe, some AI work, that will be customized. In those cases, because of the high volume, customization makes sense. This is something I've always believed in; it's part of our strategy and part of our deep-partnership strategy. If you have the right building blocks, you can work with a broad set of customers to really figure out what they need to achieve their vision.

Q: But isn’t there a situation where, as the process curve continues to drop, design costs become increasingly high, and there’s a baseline for customization that only AMD has the scale to achieve? Isn’t that somewhat contradictory?

Lisa Su: I think it’s important to look at which markets truly warrant large-scale customization, but it’s not everything. For example, you wouldn’t want to do that with your IoT devices because the return on investment isn’t there. But for large computing capabilities, I think it requires a combination of the right IP and the ability to work closely with partners. By the way, it doesn’t always have to be hardware customization; we can do a lot on the software side too, which I think is one of the important trends for the future.

Q: So I have to ask, you came to AMD, stayed there for a few years, and then took over as CEO. Was this another example of choosing the harder path?

Lisa Su: I think so. I can say that when I joined AMD, my real thought was, I have spent my whole life working on high-performance processors; that’s my background. In the US, there are very few companies where you can do this kind of work. I always had great respect for AMD, seeing it as an important company, but I felt I could make a difference. So when I joined, I realized, “Wow, I have a lot to learn.” In the first few years, I really learned a lot about the market dynamics of the world, but it was also a great opportunity to make changes.

Q: Where did you see breakthroughs? We can see the differences—just look at the stock chart, we can see how your chips performed. So in that sense, it might be hard to go back to your exact mindset 10 years ago, but what was your plan then? What did you say, “Look, I can do this, there is a way, here’s a path, I see it”? What path did you see?

Lisa Su: I very clearly saw that we had the foundation needed to build an incredible roadmap. We were very unique in those foundational aspects.

Q: What were those foundations? Was it intellectual property or customer relationships?

Lisa Su: High-performance CPUs and high-performance GPUs are our pillars, and if you think about it, these are really incredible cornerstones. What we were missing was a very clear strategy about what we wanted to become when we grew up and an execution machine capable of achieving that goal.

So, from a strategic standpoint, I think we had some choices. If you remember, it was 2014, and at that time, the most exciting thing was mobile, like application processors. So we discussed, “Should we enter the mobile market?” Our answer was, “No, we shouldn’t, because we are not a mobile company. Other companies are better at that. We are a high-performance computing company, so we need to create a roadmap that leverages our strengths, which means reforming our architecture, design, and manufacturing methods.” I knew how to do that, but it takes time. You can’t do that in 12 months; I felt it would take five years. It did take five years, but clearly, we had those elements. We just needed to build the execution engine very systematically.

Q: You just mentioned manufacturing. We know that before you took over, AMD had already spun off GlobalFoundries. I want to use a technical term here: how much of a hassle was the constantly renegotiated wafer agreement with GlobalFoundries? Was this something you had to continually manage while trying to execute your strategy?

Lisa Su: Yes, AMD and GlobalFoundries used to be one company.

That wafer supply agreement was also signed before my tenure, but if you think about some of the major strategic things we had to do, if you want to make high-performance processors, you need the best technology partner and the best manufacturing partner, and GlobalFoundries is a great company and was a great partner at that time. But you need scale to manufacture at the leading edge, and that scale wasn’t there.

When they realized that and said, "We're not going to develop 7nm technology," it was a very good decision for both GlobalFoundries and AMD, although from a financial standpoint, AMD did have to compensate GlobalFoundries.

There were business terms to work through between the two companies, but from a technology standpoint, it was absolutely the right choice. As I said, GlobalFoundries has been a fantastic partner. I have great respect for [GlobalFoundries CEO] Tom Caulfield as a partner, and I think focusing on what each of us does best benefits both companies.

Q: You were the first high-performance chip manufacturer to move to chiplets, and now everyone is moving in that direction, so you were certainly ahead of the curve here. Were you forced to do this because of the wafer agreement so that you could do some volume with GlobalFoundries and TSMC while still delivering chips?

Lisa Su: Not at all. Actually, I think this was clearly one of the best decisions we made. Of course, we couldn’t foresee everything at that time.

What we considered was where Moore’s Law was heading and how we could stand out. Frankly, we thought we needed to bring something different to the processor market, so making these giant chips with low yields and high costs wasn’t the answer.

I remember spending time with Mark and our architects trying to decide, “Is now the time to move to chiplets? Is now the time to bet the company on chiplets?” We said, “Yes, because we will get higher performance, more cores, and better cost points,” which gave us great flexibility, and we learned a lot in the process.

The first generation, Zen 1, was good, but we ran into some programming-model issues that needed to be addressed; those were improved in Zen 2 and really advanced in Zen 3.

Q: In 2014, when you took over the company and felt you could make a difference, several major shifts were underway. For example, the move to chiplets, and TSMC beginning its transition to EUV. To what extent did you see these long-term changes in the market and decide, "Look, I can do something here"?

Lisa Su: Yes, we did closely examine the technology roadmap and TSMC’s advancements at the time, as well as packaging technologies, and decided it was time to make a bet. I would say that in our world, we have to make bets that sometimes take three to five years to come to fruition.

Q: Yes, I don’t mind asking you about decisions made in 2014 because today’s important decisions were often made back then.

Lisa Su: Exactly, and there were risks involved, such as, “Can we really achieve the performance we expect by adopting chiplets?” But we learned a lot, and I think history will show that we made the right choice. At the time, some of our competitors called it glue, saying we were just gluing chips together. It was like, “We are not just gluing chips together.”

Q: Now they are doing the same thing. Over the past 10 years, AMD has truly achieved performance leadership in the x86 space. Between your design decisions and TSMC's leading-edge processes, how do you apportion the credit, and how has that paid off?

Lisa Su: I do believe they are deeply interconnected.

TSMC has been an outstanding partner in this area. When you take on a lot of design risk, you want to know that your technology is reliable so you know where to spend your time and effort.

Q: This is what TSMC and ASML have done together, first with 300mm wafers and then with EUV. That kind of cooperation has proven to work, and both parties can move forward together.

Lisa Su: Exactly, I think it’s been a highly synergistic partnership.

Q: Before your tenure, AMD's most important moment was, as we discussed earlier, extending x86 to 64-bit and pushing Intel into a corner. That was a hardware and software story. It was before your time, but one of the ongoing criticisms of AMD has been the need to improve on the software side: where is the software? You can't just be hardware cowboys. When you joined, was there a sense of, "Look, we have this opportunity, and we can build on it over time"? What was AMD's historical approach to software, and how have you worked to change it?

Lisa Su: Well, let me be clear, there’s been no holding back.

I think we have always believed in the importance of hardware and software integration, and the key with software is making it easy for customers to use all the incredible features we put into these chips, and that’s absolutely clear.

I think you’ll see that we have actually been on several technology development arcs. So, the CPU arc and everything we did to build the Zen portfolio. Now, we just previewed Zen 5 for data centers at Computex, and then it will be launched in client products. That specific arc is one arc.

Now we are on the next arc, which is AI and GPU.

Q: I wanted to ask you about something else. We've talked about the chiplet trend and the EUV trend; how important has the rise of the hyperscalers been to your success? Because what I see is that they buy in large volumes, doing LTV calculations to say, "Look, yes, these AMD processors are worth it in the long run." Second, if there are software gaps, they will work to fill them because they can see the long-term benefit. Did this influence your thinking about what you could actually win here? Was it a driving factor?

Lisa Su: Yes, that's a keen observation. When you consider the hyperscalers and how things have changed, the fact is that they are a very important part of the entire market, and we have spent a lot of time there. You are absolutely right: you'd like to believe that the best product always wins in every market, but that's not necessarily true. In the hyperscale market, the best product does win.

We were able to prove that. Frankly, the key to this market is that winning once isn’t enough, and a temporary win isn’t enough either. You have to win the roadmap, and that’s exactly what we did at that specific point in time.

As it turns out, there were indeed customers who would buy according to the roadmap.

By the way, they will ask you to prove it. With Zen 1, they said, “Okay, this is good,” Zen 2 was better, and Zen 3 was much better. The execution of the roadmap has put us in a position where we now have very deep partnerships with all the hyperscalers, for which we are very grateful. When you think about the AI journey again, you’ll see that it’s a similar journey.

Q: One more question about x86. How do you view the consumer space in relation to all this? You can imagine that a company like Intel has to keep its fabs running at full capacity, so it needs to maximize chip volume across every kind of demand. Intel's fab situation pushes it toward integration, whereas AMD is in a different position and can meet hyperscaler demand by excelling at making great chips. But do you think about volume simply to amortize design costs and IP investment? I'm curious how these calculations work in a world where it's not your fab and not your billions of dollars in capital expenditure. How do you see things differently from the integrated players?

Lisa Su: We think it’s about scale. In 2014-15, we were a $4 billion company, and in that scenario, you can invest a certain amount in R&D. Last year, we were a $22 billion company, so you can invest significantly more in R&D.

It’s the same calculation about how we leverage.

Q: But without your own fabs, if you over-invest, the risk of going under is lower.

Lisa Su: Well, I think the key is leveraging IP. It’s the engine we have, the compute engine. Our top priority is definitely to put these compute engines on a very aggressive roadmap and then build products based on that.

Q: What was your reaction when ChatGPT appeared in November 2022?

Lisa Su: Well, it really crystallized the essence of AI.

Q: Clearly, you’ve been in the graphics gaming industry for a long time, always thinking about high-performance computing, so the idea of GPU importance is not new to you. But did it change the perception of those around you, and what happened afterward, were you surprised?

Lisa Su: We take high-performance computing and the development of GPUs for AI very seriously. In fact, this is perhaps a very important arc we started, one that traces back to around 2017. We had always been working on GPUs, but the real focus was—

Q: What happened in 2017 that made you realize, “Wait, we have these, we thought we bought ATI to play games, but suddenly there’s a completely different application”?

Lisa Su: It was the next big opportunity, and we knew it was the next big opportunity. This is something Mark and I discussed, that by placing CPUs and GPUs in the system and designing them together, we would get better answers. The first near-term application was supercomputing. We focused a lot on these large machines that would reside in national labs and deep research facilities, knowing we could build these large-scale parallel GPU machines to achieve this. On the AI side, we also always believed it was clearly a combination of HPC and AI.

Q: You’ve said before that AI is the killer app for HPC.

Lisa Su: Yes.

Q: But when you talk to people in the high-performance computing field, they say, “Well, it’s a bit different,” to what extent is this the same category versus adjacent categories?

Lisa Su: They are adjacent but highly related categories, and it all depends on the precision you want in computing, whether you’re using full precision or some other data format. But I think the real key, and where we had a real vision, was that because of our chiplet strategy, we can build a highly modular system that can be called an integrated CPU and GPU, or it could just be the incredible GPU capabilities that people need.

So, to me, the advent of ChatGPT made it clearer—now everyone knows the utility of AI. Previously, only scientists and engineers would think about AI, but now everyone can use it. These models aren't perfect, but they are very good. So to me it's very clear: how do we get more AI compute into people's hands as quickly as possible? Because of the way we design our systems, we can actually offer two versions. We have the HPC-focused version, which is what we call the MI300A, and we have the AI-focused version, which is the MI300X.

Q: Is this an uncomfortable shift? For example, “Actually, no, we want lower precision because scalability is so important.”

Lisa Su: It’s not uncomfortable. It’s moving very quickly.

Q: Things are happening so fast. AMD has been performing very well, hitting all-time highs a few months ago. But overall, Nvidia clearly dominates, with lots of momentum and room for growth. From your perspective, during that period, what did AMD need to catch up on, and what advantages did Nvidia have?

Lisa Su: I think the way to think about it is in terms of focus areas. Look, I have great respect for [Nvidia CEO] Jensen [Huang] and Nvidia. They have been investing in this area for a long time, since well before things became completely clear. We have been investing too, although I'd say we have had a few arcs: we have the CPU arc, and then we have the GPU arc.

Q: Hey, you were busy crushing Intel, so I get it.

Lisa Su: I’d put it this way: we are in the early stages of AI. What I find strange is that people always think about technology in short time frames. Technology is not a short-term sprint; we are in a 10-year arc and maybe 18 months into it. From that perspective, I think we are very clear on where we need to go and what the roadmap should look like. You mentioned software earlier, and we are very clear on how to make it very easy for developers to transition. One great advantage of acquiring Xilinx is that we gained an extraordinary team of 5,000 people, including a lot of software talent, who are currently working on making AMD AI as user-friendly as possible.

Q: One thing that really impressed me is that one of Nvidia's truly smart moves was acquiring Mellanox and its networking portfolio and integrating all those chips together, especially for training. In your Computex keynote, you talked about the new Ultra Accelerator Link and Ultra Ethernet standards and the idea of bringing many companies together, which reminded me of the Open Compute Project in the data center space. This makes a lot of sense, especially given Nvidia's proprietary solutions, which carry the high margins they are known and loved for, like their other products.

But I guess my long-term question is: do you think, from a Clayton Christensen perspective, that because we are in the early stages of AI, it's not surprising that the more proprietary, integrated solutions are the focus in many ways? In the long run, open and modular makes sense, but it might not be good enough for a while.

Lisa Su: I would put it this way: when you look five years ahead at the market, I see a world with multiple solutions. I don’t believe in a one-size-fits-all approach. From that perspective, the beauty of open and modular is that you can… I don’t want to use the word “customize” here, because they might not all be customized, but you can tailor.

Tailor is the right word—you can tailor solutions for different workloads. I believe no single company can provide all the possible solutions for all the possible workloads. So, I think we will achieve this in different ways.

By the way, I firmly believe that these large GPUs we are building will continue to be the center of the universe for some time. Yes, you will need the entire networking system and reference system to come together. What we are focusing on is that all these parts will become the reference architecture of the future, so I think that will be very important architecturally.

The only thing I’d say is that there is no one-size-fits-all solution, so modularity and openness will allow the ecosystem to innovate where they want to innovate. The solution you want for hyperscaler 1 might be different from the solution you want for hyperscaler 2 or 3.

Q: So, where do you see the balance point between a standard approach and “this is the Microsoft way” or “this is the Meta way”? There are some commonalities, but they all get fairly customized to their use cases and needs. Again, this is not about next year but from a long-term perspective.

Lisa Su: I think in the next three, four, or five years, you will see more customization for different workloads, and the algorithms will—right now, we are in a period where algorithms are changing very rapidly. At some point, it will feel like, "Hey, it's more stable, more clear," and at the scale we are talking about, you can get significant benefits, not just from a cost standpoint but also from a power standpoint. People talk about chip efficiency, and system efficiency is now equally important, if not more important, than performance. For all these reasons, I think you will see multiple solutions.

Q: Is this an underappreciated tailwind for your x86 business? In your keynote, you mentioned that most CPUs in the cloud are over five years old, and you said something like, "One of our CPUs can replace five or six old CPUs." Do you think that's really happening? Because I think both your company and Intel are currently worried that all spending is going toward AI and no one is buying CPUs anymore. Is there a power wall here? If you can pull a bunch of old CPUs out of the data center, does that free up power for everything else?

Lisa Su: I think both points are correct. I believe data center modernization absolutely has to happen, and it will happen, even if it isn't all happening right at this moment.

I think we are seeing investments coming back to modernization, but another really important thing is, while we love GPUs and they are a huge growth driver for us, not all workloads will use GPUs. You will have traditional workloads, you will have mixed workloads, and I think that’s the key point of the story. In large enterprises, you have to do a lot of things, and our goal is to ensure we have the right solutions for all these capabilities.

Q: How much inferencing do you think can actually go back to CPUs?

Lisa Su: I think a lot of inferencing will be done on CPUs. As you would imagine, the very large models we are talking about obviously need to be on GPUs, but how many companies can really afford the largest models? So, you already see now that for smaller models, they are more finely tuned for these things, and CPUs are fully capable of doing that, especially if you go to the edge.

Q: You mentioned on the last earnings call that MI300 supply was constrained, with growth faster than ever but perhaps below some investors' expectations, leading to some disappointment about the year-end forecast. How much do you think this relates to the launch of the 325 and the fact that Nvidia's overall supply has increased, as everyone tries to figure this out? Is your long-term opportunity to be that kind of tailored supplier (sorry, customized is the word we have to use), rather than just, "Whenever we need GPUs, we'll buy from whoever has them"? Where do you think your demand curve stands relative to the competition and the rapid development in this field?

Lisa Su: Again, let me take a step back to make sure we get at the core of the question. The demand for AI compute has exceeded expectations; I don't think anyone predicted this level of demand. So when I say the supply chain is tight, that's expected, because no one anticipated needing so many GPUs in this timeframe. The semiconductor industry is very good at building capacity, and that's what we're seeing. As we start to forecast—

Q: So you feel this is more about a lot of supply coming online?

Lisa Su: Absolutely, that’s our job. Our job is to make sure you are not limited by manufacturing capacity.

For us, it’s really about ensuring customers can truly scale up their workloads, which requires a lot of deep work and deep partnerships with customers. So honestly, I’m very excited about the opportunity here. We’ve been through this before; it’s very similar to what we saw when we initially ramped up data center server CPUs. Our customers worked closely with us to optimize their software, then they added new workloads and more capacity, and that’s what I expect to happen here.

The difference with AI is that I think customers are willing to take more risks because they want to gain as much benefit as quickly as possible.

Q: Is that a challenge for you? Because being willing to take more risk means they are more likely to pay the high margins to get the leading GPU, or the GPU with the largest developer ecosystem?

Lisa Su: I would say I’m very happy with the progress we’ve made on the software side.

What we are seeing is excellent out-of-the-box performance. The fact is, things run well, and much of the developer ecosystem wants to raise the abstraction layer, because everyone wants choice.

Q: Do you think you’ll enter a phase where the elevation of the abstraction layer becomes a common layer across companies, rather than having one company elevate the abstraction layer so they can buy any CPU, which might not necessarily favor your entry into another company, or do you think it will be—

Lisa Su: I absolutely believe it will span the whole industry. Technologies like PyTorch, which is widely adopted, and OpenAI's Triton as well. These are broader industry efforts, and frankly, part of the motivation is that programming down to the hardware takes a long time. Everyone wants to innovate quickly, so from that standpoint, the abstraction layer is good for rapid innovation.

Q: You are a second-wave adopter of TSMC's new nodes, possibly a year or a year and a half behind the first wave. Do you feel pressure to move up to that first tier? Obviously, relative to some players in this space, you are a smaller company; $22 billion is impressive, but you still have to consider the costs involved. Or do you simply feel the urgency to stay at the absolute cutting edge?

Lisa Su: Well, I think from a fabless perspective, in terms of overall volume, we are certainly one of the top five, and having access to the absolute leading-edge technology is helpful. We don’t debate whether we should do this. I think what we debate is from a roadmap perspective, for example, we talked about the one-year cadence of GPU launches.

Q: Unfortunately, for you, it’s somewhat the opposite situation with Nvidia. Is that a bit frustrating?

Lisa Su: No, not at all. Again, one of the most important things for me is that our roadmap is based on what we believe is achievable and what we think customers want and need.

Q: Is there ever a possibility of AMD using Intel’s fabs?

Lisa Su: I would say we are very happy with our current manufacturing relationships.

Q: I was thinking about how Intel and AMD have had one of the greatest rivalries in tech history from the beginning. But when you step back in these conversations, is there a sense that you're actually standing shoulder to shoulder, because the real enemy is Arm?

Lisa Su: You make it sound like Arm is the enemy, but I don’t think Arm is the enemy, so let me start with that. We use Arm across our portfolio. I think x86 is an extraordinary architecture and has capabilities, but please don’t view AMD as an x86 company. We are a computing company, and we will use the right compute engines for the right workloads.

That’s related to my thinking—if you look at the semiconductor industry today, you will find we have places where we compete and places where we collaborate. So, regarding Intel, yes, we compete in certain areas, but we also collaborate in some areas. Intel is part of the UALink consortium; they are part of the Super Ethernet consortium.

Q: They are very interested in this modularity and standardization as well.

Lisa Su: We agree with the idea that building a link that can cross different accelerators is actually a good thing. So, I think the whole industry is like that. We are in a place where we compete, and we have places where we can collaborate.

Q: Over the past 10 years, you have achieved amazing things in the x86 space. Your achievements in servers and data centers speak for themselves. Now, it’s like a new champion has emerged. Are you ready for the next round of challenges?

Lisa Su: This is the next arc. I can tell you, what we have achieved today in high-performance computing is amazing. Who would have imagined? It’s like a new world. It’s incredibly exciting.

Q: Do you feel energized and ready to go?

Lisa Su: Absolutely ready. Very ready.


