Astera Labs, Inc. (ALAB) Q1 2024 Results - Earnings Call Transcript

Operator: Thank you for standing by. My name is Regina, and I will be your conference operator today. At this time, I would like to welcome everyone to the Astera Labs First Quarter 2024 Earnings Conference Call. All lines have been placed on mute to prevent any background noise. After management remarks, there will be a question-and-answer session. [Operator Instructions] I will now turn the call over to Leslie Green, Investor Relations for Astera Labs. Leslie, you may begin.

Leslie Green: Thank you, Regina. Good afternoon, everyone, and welcome to the Astera Labs first quarter 2024 earnings call. Joining us today on the call are Jitendra Mohan, Chief Executive Officer and Co-Founder; Sanjay Gajendra, President, Chief Operating Officer and Co-Founder; and Mike Tate, Chief Financial Officer. Before we get started, I would like to remind everyone that certain comments made in this call today may include forward-looking statements regarding, among other things, expected future financial results, strategies and plans, future operations and the markets in which we operate. These forward-looking statements reflect management's current beliefs, expectations and assumptions about future events, which are inherently subject to risks and uncertainties that are discussed in detail in today's earnings release and in the periodic reports and filings we file from time to time with the SEC, including the risks set forth in the final prospectus relating to our IPO. It is not possible for the company's management to predict all risks and uncertainties that could have an impact on these forward-looking statements or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statement. In light of these risks, uncertainties and assumptions, the results, events or circumstances reflected in the forward-looking statements discussed during this call may not occur, and actual results could differ materially from those anticipated or implied. All of our statements are made based on information available to management as of today, and the company undertakes no obligation to update such statements after the date of this call as a result of new information, future events or changes in our expectations, except as required by law. Also during this call, we will refer to certain non-GAAP financial measures, which we consider to be important measures of the company's performance. These non-GAAP financial measures are provided in addition to, and not as a substitute for or superior to, financial results prepared in accordance with U.S. GAAP. A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures is available in the earnings release we issued today, which can be accessed through the Investor Relations portion of our website and will also be included in our filings with the SEC, which will also be accessible through the Investor Relations portion of our website. With that, I would like to turn the call over to Jitendra Mohan, CEO of Astera Labs. Jitendra?

Jitendra Mohan: Thank you, Leslie. Good afternoon, everyone, and thanks for joining our first earnings conference call as a public company. This year is off to a great start, with Astera Labs seeing strong and continued momentum along with the successful execution of our IPO in March.
First and foremost, I would like to thank our investors, customers, partners, suppliers and employees for their steadfast support over the past six years. We have built Astera Labs from the ground up to address the connectivity bottlenecks that stand in the way of unlocking the full potential of AI in the cloud. With your help, we've been able to scale the company and deliver innovative technology solutions to the leading hyperscalers and AI platform providers worldwide. But our work is only just beginning. We are supporting the accelerated pace of AI infrastructure deployments with leading hyperscalers by developing new product categories, while also exploring new market segments. Looking at industry reports over the past several weeks, it is clear that we remain in the early stages of a transformative investment cycle by our customers to build out the next generation of infrastructure that is needed to support their AI roadmaps. According to recent earnings reports, on a consolidated basis, CapEx spend during the first quarter for the four largest U.S. hyperscalers grew by roughly 45% year-on-year to nearly $50 billion. Qualitative commentary implies continued quarterly growth in CapEx for this group through the balance of the year. This is truly an exciting time for technology innovators within the cloud and AI infrastructure market, and we believe Astera Labs is well positioned to benefit from these growing investment trends.

Against this strong industry backdrop, Astera Labs delivered strong Q1 results with record revenue, strong non-GAAP operating margin and positive operating cash flow, while also introducing two new products. Our revenue in Q1 was $65.3 million, up 29% from the previous quarter and up 269% from the same period in 2023. Non-GAAP operating margin was 24.3%, and we delivered $0.10 of pro forma non-GAAP diluted earnings per share. I will now provide some commentary around our position in this rapidly evolving AI market. Then I will turn the call over to Sanjay to discuss new products and our growth strategy. Finally, Mike will provide additional details on our Q1 results and our Q2 financial guidance.

Complex AI models continue doubling in size about every six months, fueling the demand for high performance AI platforms running in the cloud. Modern GPUs and AI accelerators are phenomenally good at compute, but without equally fast connectivity, they remain highly underutilized. Technology innovation within the AI accelerator market has been moving forward at an incredible pace, and the number and variety of architectures continues to expand to handle trillion parameter models while improving AI infrastructure utilization. We continue to see our hyperscaler customers utilize the latest merchant GPUs and proprietary AI accelerators to compose unique data center scale AI infrastructure. However, no two clouds are the same. The major hyperscalers are architecting their systems to deliver maximum AI performance based on their specific cloud infrastructure requirements, from power and cooling to connectivity. We are working alongside our customers to ensure these complex and different architectures achieve maximum performance and operate reliably even as data rates continue to double. As these systems continue to move data faster and grow in complexity, we expect to see our average dollar content per AI platform increase, and even more so with the new products we have in development.
Our conviction in maintaining and strengthening our leadership position in the market is rooted in our comprehensive intelligent connectivity platform and our deep customer partnerships. The foundation of our platform consists of semiconductor based and software-defined connectivity ICs, modules and boards, which all support our COSMOS software suite. We provide customers with a complete, customizable solution, chips, hardware and software, which maximizes flexibility without performance penalties, delivers deep fleet management capabilities and matches pace with the ever quickening product introduction cycles of our customers. Not only does COSMOS software run on our entire product portfolio, but it is also integrated within our customers' operating stacks to deliver seamless customization, optimization and monitoring.

Today, Astera Labs is focused on three core technology standards: PCI Express, Ethernet and Compute Express Link. We're shipping three separate product families, all generating revenue and in various stages of adoption and deployment supporting these different connectivity protocols. Let me touch upon each of these critical data center connectivity standards and how we support them with our differentiated solutions.

First, PCI Express. PCIe is the native interface on all AI accelerators, TPUs and GPUs, and is the most prevalent protocol for moving data at high bandwidth and low latency inside servers. Today, we see PCIe Gen 5 getting widely deployed in AI servers. These AI servers are becoming increasingly complex. Faster signal speeds in combination with complex server topologies are driving significant signal integrity challenges. To help solve these problems, our hyperscaler and AI accelerator customers utilize our PCIe Smart DSP Retimers to extend the reach of PCIe Gen 5 between various components within heterogeneous compute architectures. Our Aries product family represents the gold standard in the industry for performance, robustness and flexibility, and is the most widely deployed solution in the market today. Our leadership position, with millions of critical data links running through our Aries Retimers and our COSMOS software, enables us to do something more: become the eyes and ears that monitor the connectivity infrastructure and help fleet managers ensure their AI infrastructure is operating at full utilization. Deep diagnostics and monitoring capabilities in our chips and extensive fleet management features in our COSMOS software, which are deployed together in our customers' fleets, have become a material differentiator for us. Our COSMOS software also provides the easiest and fastest path to deploy the next generation of our devices. We see AI workloads and newer GPUs driving the transition from PCIe Gen 5 running at 32 gigabits per second per lane to PCIe Gen 6 running at 64 gigabits per second per lane [see the bandwidth sketch below]. Our customers are evaluating our Gen 6 solutions now, and we expect them to make design decisions in the next six to nine months. In addition, while we see our Aries devices being heavily deployed today for interconnecting AI accelerators with CPUs and networking, we also expect our Aries devices to play an increasing role in backend fabrics, interconnecting AI accelerators to each other in AI clusters.

Next, let's talk about Ethernet. The Ethernet protocol is extensively deployed to build large scale networks within data centers. Today, Ethernet makes up the vast majority of connections between servers and top of rack switches.
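For concreteness, the lane-rate arithmetic behind the Gen 5 to Gen 6 transition described above works out roughly as follows. This is a back-of-the-envelope sketch, not a company figure: Gen 5's 128b/130b encoding is included, while Gen 6 FLIT-mode overhead is ignored, so the numbers are approximate.

```python
# Approximate one-direction bandwidth of an x16 PCIe link (sketch only).
# PCIe Gen 5: 32 GT/s per lane, 128b/130b encoding.
# PCIe Gen 6: 64 GT/s per lane (PAM4); FLIT-mode overhead ignored here.

def pcie_x16_gbps(gt_per_lane: float, encoding: float = 1.0, lanes: int = 16) -> float:
    """Usable bandwidth in Gb/s for one direction of an x16 link."""
    return gt_per_lane * lanes * encoding

gen5 = pcie_x16_gbps(32, encoding=128 / 130)  # ~504 Gb/s, i.e. ~63 GB/s
gen6 = pcie_x16_gbps(64)                      # ~1024 Gb/s, i.e. ~128 GB/s
print(f"Gen 5 x16: ~{gen5:.0f} Gb/s, Gen 6 x16: ~{gen6:.0f} Gb/s ({gen6 / gen5:.1f}x)")
```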
Driven by AI workloads' insatiable need for speed, Ethernet data rates are doubling roughly every two years, and we expect the transition from 400 gig Ethernet to 800 gig Ethernet to take place later in 2025. 800 gig Ethernet is based on a 100 gigabits per second per lane signaling rate, which puts tremendous pressure on conventional passive cabling solutions [see the lane sketch below]. Like our PCIe Retimers, our portfolio of Taurus Ethernet Retimers helps relieve these connectivity bottlenecks, overcoming reach, signal integrity and bandwidth issues by enabling robust 100 gig per lane connectivity over copper. Unlike our Aries portfolio, which is largely sold in a chip format, we sell our Taurus portfolio largely in the form of smart cable modules that are assembled into active electrical cables by our cable partners. This approach allows us to focus on our strengths and fully leverage our COSMOS software suite to offer customization, easy qualification, deep telemetry and field upgradability to our customers. At the same time, this model enables our cable partners to continue to excel at bringing the best cabling technology to our common end customers. We expect 400 gig deployments based on our Taurus Smart Cable Modules to begin to ramp in the back half of 2024. We see the transition to 800 gig Ethernet starting to happen in 2025, resulting in broad demand for AECs to both scale up and scale out AI infrastructure, and strong growth for our Taurus Ethernet Smart Cable Module portfolio over the coming years.

Last is Compute Express Link, or CXL. CXL is a low latency cache-coherent protocol which runs on top of the PCIe protocol. CXL provides an open standard for disaggregating memory from compute. CXL allows you to balance memory bandwidth and capacity requirements independently from compute requirements, resulting in better utilization of compute infrastructure. Over the next several years, data center platform architects plan to utilize CXL technology to solve memory bandwidth and capacity bottlenecks that are being exacerbated by the exponential increase in compute capability of CPUs and GPUs. Major hyperscalers are actively exploring different applications of CXL memory expansion. While the adoption of CXL technology is currently in its infancy, we do expect to see increased deployments with the introduction of next generation CXL capable data center server CPUs such as Granite Rapids, Turin and others. Our first to market portfolio of Leo CXL memory connectivity controllers is very well positioned to enable our customers to overcome memory bottlenecks and deliver significant benefits to their end customers. We have worked closely with our hyperscaler customers and CPU partners to optimize our solution to seamlessly deliver these benefits without any application level software changes. Furthermore, we have used our COSMOS software to incorporate the significant learnings we have had over the last 18 months and to customize our Leo memory expansion solution to the different requirements of each hyperscaler. We anticipate memory expansion will be the first high volume use case that will drive design wins into volume production in the 2025 timeframe. We remain very excited about the potential of CXL in data center applications and believe that most new CPUs will support CXL and hyperscalers will increasingly deploy innovative solutions based on CXL. With that, let me turn the call over to our President and COO, Sanjay Gajendra, to discuss some of our recent product announcements and our long-term growth strategy.
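As a reference for the lane arithmetic behind the Ethernet transitions mentioned above, the generations discussed on this call break down roughly as follows. This mapping is illustrative only (400GbE, for example, also exists in a 4 x 100 Gb/s variant); the point is that the 800 gig generation moves every lane to 100 Gb/s, which is what strains passive copper and motivates retimed AECs.

```python
# Illustrative lane configurations for the Ethernet generations discussed.
generations = {
    "200GbE": (4, 50),    # (lanes, Gb/s per lane)
    "400GbE": (8, 50),    # also deployed as 4 x 100 Gb/s
    "800GbE": (8, 100),   # 100 Gb/s/lane signaling stresses passive copper reach
}
for name, (lanes, rate) in generations.items():
    print(f"{name}: {lanes} lanes x {rate} Gb/s/lane = {lanes * rate} Gb/s")
```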
Sanjay Gajendra: Thanks, Jitendra, and good afternoon, everyone. Astera Labs is well positioned to demonstrate long-term growth through a combination of three factors. One, we have strong secular tailwinds from increased AI infrastructure investment. Two, the next generation of products within existing product lines is gaining traction. And three, the introduction of new product lines. Over the past three months, we announced two new and significant products that play an important role in enabling next generation AI platforms and provide incremental revenue opportunities as early as the second half of 2024.

First, we expanded our widely deployed, field proven Aries Smart DSP Retimer portfolio with the introduction and public demonstration of our Aries 6 PCIe Retimer, which delivers robust, low power PCIe Gen 6 and CXL 3 connectivity between next generation GPUs, AI accelerators, CPUs, NICs and CXL memory controllers. Aries 6 is the third generation of our PCIe Smart Retimer portfolio and provides the bandwidth required to support data intensive AI workloads while maximizing utilization of next generation GPUs operating at 64 gigabits per second per lane. Fully compatible with our field deployed COSMOS software suite, Aries 6 incorporates the tribal knowledge we have acquired over the past four years by partnering with and enabling hyperscalers to deploy AI infrastructure in the cloud. Aries 6 also enables a seamless upgrade path from current PCIe Gen 5 based platforms to next generation PCIe Gen 6 based platforms for our customers. With Aries 6, we demonstrated the industry's lowest power: 11 watts at Gen 6 in a full 16 lane configuration running at 64 gigabits per second per lane [see the efficiency sketch below], significantly lower than our competitors and even lower than our own Aries Gen 5 Retimer. Through collaboration with leading providers of GPUs and CPUs such as AMD, ARM, Intel and NVIDIA, Aries 6 is being rigorously tested at Astera's Cloud-Scale Interop Lab and in customers' platforms to minimize interoperability risk, lower system development cost and reduce time to market. Aries 6 was demonstrated at NVIDIA's GTC event during the week of March 18th. Aries 6 is currently sampling to leading AI and cloud infrastructure providers, and we expect initial volume ramps to begin in 2025.

We also announced the introduction and sampling of our Aries PCIe and CXL Smart Cable Modules for Active Electrical Cables, or AECs, to support robust, long reach, up to 7 meter copper cable connectivity. This is 3x the standard reach defined in the PCIe spec. Our new PCIe AEC solution is designed for GPU clustering applications, extending PCIe backend fabric deployments to multiple racks. This new Aries product category expands our market opportunity from within the rack to across racks. As with our entire product portfolio, Aries Smart Cable Modules support our COSMOS software suite to deliver a powerful yet familiar array of link monitoring, fleet management and rack tools, which are customizable for the diverse needs of our hyperscaler customers. We leveraged our expertise in silicon, hardware and software to deliver a complete solution in record time, and we expect initial shipments of the PCIe AECs to begin later this year. We believe this new Aries product announcement represents another concrete example of Astera Labs driving the PCIe ecosystem with technology leadership, with an intelligent connectivity platform that includes silicon chips, hardware modules and the COSMOS software suite.
Over the coming quarters, we anticipate ongoing generational product upgrades to existing product lines and the introduction of new product categories developed from the ground up to fully utilize the performance and productivity capabilities of generative AI. In summary, over the past few years, we have built a great team that is delivering technology that is foundational to deploying AI infrastructure at scale. We have gained the trust and support of our world class customer base by executing, innovating and delivering on our commitments. These tight relationships are resulting in new product developments and an enhanced technology roadmap for Astera. We look forward to continued collaboration with our partners as a new era unfolds, driven by AI applications. With that, I will turn the call over to our CFO, Mike Tate, who will discuss our Q1 financial results and Q2 outlook.

Mike Tate: Thanks, Sanjay, and thanks to everyone for joining. This overview of our Q1 financial results and Q2 guidance will be on a non-GAAP basis. The primary difference in Astera Labs' non-GAAP metrics is stock-based compensation and the related income tax effects. Please refer to today's press release, available on the Investor Relations section of our website, for more details on both our GAAP and non-GAAP Q2 financial outlook, as well as a reconciliation of our GAAP to non-GAAP financial measures presented on this call.

For Q1 of 2024, Astera Labs delivered record quarterly revenue of $65.3 million, which was up 29% versus the previous quarter and 269% higher than revenue in Q1 of 2023. During the quarter, we shipped products to all the major hyperscalers and AI accelerator manufacturers. We recognized revenues across all three of our product families during the quarter, with Aries products being the largest contributor. Aries enjoyed solid momentum in AI based platforms as customers continued to introduce and ramp their PCIe Gen 5 capable AI systems, along with overall strong unit growth from the industry's growing investment in generative AI. Also, we continue to make good progress with our Taurus and Leo product lines, which are in the early stages of revenue contribution. In Q1, Taurus revenues were primarily from shipments into 200 gig Ethernet based systems, and we expect Taurus revenues to sequentially track higher as we progress through 2024, as we also begin to ship into 400 gig Ethernet based systems. Q1 Leo revenues were largely from customers purchasing pre-production volumes for the development of their next generation CXL capable compute platforms, expected to launch late this year with the next server CPU refresh cycle.

Q1 non-GAAP gross margin was 78.2%, up 90 basis points compared with 77.3% in Q4 2023. The positive gross margin performance during the quarter was driven by healthy product mix. Non-GAAP operating expenses for Q1 were $35.2 million, up from $27 million in the previous quarter. Within non-GAAP operating expenses, R&D expense was $22.9 million, sales and marketing expense was $6 million and general and administrative expenses were $6.3 million. Non-GAAP operating expenses during Q1 increased largely due to a combination of increased headcount and incremental costs associated with being a public company. The largest delta between non-GAAP and GAAP operating expenses in Q1 was stock-based compensation recognized in connection with our recent IPO and its associated employer payroll taxes, and to a lesser extent our normal quarterly stock-based compensation expense.
Non-GAAP operating margin for Q1 was 24.3%, as revenues scaled in proportion with our operating expenses on a sequential basis. Interest income in Q1 was $2.6 million. Our non-GAAP tax provision was $4.1 million for the quarter, which represents a tax rate of 22% on a non-GAAP basis. Pro forma non-GAAP fully diluted share count for Q1 was 147.5 million shares. Our pro forma non-GAAP diluted earnings per share for the quarter was $0.10. The pro forma non-GAAP diluted share count includes the assumed conversion of our preferred stock for the entire quarter, while our GAAP share count only includes the conversion of our preferred stock for the stub period following our March IPO. Going forward, given that all the preferred stock has now been converted to common stock upon our IPO, those preferred shares will be fully included in the share count for both GAAP and non-GAAP. Cash flow from operating activities for Q1 was $3.7 million, and we ended the quarter with cash, cash equivalents and marketable securities of just over $800 million.

Now turning to our guidance for Q2 of fiscal 2024. We expect Q2 revenues to increase from Q1 levels within a range of 10% to 12% sequentially. We believe our Aries product family will continue to be the largest component of revenue and will be the primary driver of sequential growth in Q2. Within the Aries product family, we expect the growth to be driven by increased unit demand for AI servers as well as the ramp of new product designs with our customers. We expect non-GAAP gross margin to be approximately 77%, given a modest increase in hardware shipments relative to standalone ICs. We believe that as our hardware solutions grow as a percentage of revenue over the coming quarters, our gross margins will begin to trend towards our long-term gross margin model of 70%. We expect non-GAAP operating expenses to be approximately $40 million as we remain aggressive in expanding our R&D resource pool across headcount and intellectual property, while also scaling our back office functions. Interest income is expected to be $9 million. Our non-GAAP tax rate should be approximately 23%, and our non-GAAP fully diluted share count is expected to be approximately 180 million shares. Adding this all up, we are expecting non-GAAP fully diluted earnings per share of approximately $0.11. This concludes our prepared remarks. Once again, we very much appreciate everyone joining the call. And now we'll open the line for questions. Operator?

Operator: [Operator Instructions] Our first question will come from the line of Harlan Sur with JPMorgan.

Harlan Sur: Good afternoon, and congratulations on the strong results and guidance in your first quarter as a public company. As you mentioned, there are many new AI XPU programs coming to the market: GPU and ASIC AI chip programs, accelerators. In terms of total XPU shipments this year, I think only half is going to be NVIDIA based, so it is starting to broaden out. The good news is, obviously, the Astera team has exposure to all of these XPU programs. It does seem that the pace of deploying these XPU platforms has accelerated even over the past few months. So how much of the strong results and guidance is due to this acceleration and broadening in customer deployments? How much is more just kind of higher content of Retimers versus your prior expectations? And then do you see the strong momentum continuing into the second half of this year?

Mike Tate: Thanks, Harlan. This is Mike.
We started shipping into AI servers really in Q3 of last year, so it's just the early innings. A lot of our customers have not fully deployed their AI systems, so we're seeing incremental growth just from adding on the different platforms where we have design wins. But it's in a backdrop where there's clearly growing investment in AI, and overall unit growth is also playing out. As we look out to the balance of this year, there are still a lot of programs that have not ramped yet. So we have the highest confidence that the Gen 5 Aries platform has a lot of growth ahead of it, and that continues into 2025 as well.

Harlan Sur: And as you mentioned, there's been a lot of focus on next gen PCIe Gen 6 platforms, obviously, with the rollout of NVIDIA's Blackwell based platform. And, obviously, with any market that is viewed as fast growing, you are going to attract competitors. We have seen some announcements by competitors. We know most of the Gen 5 design wins have already been locked up by the Astera team. You've been working with customers, as you mentioned, on Gen 6 for some time now. Maybe, how do you compare the customer engagement momentum on Gen 6 versus the same period back when you were working with customers on Gen 5?

Sanjay Gajendra: Good question, Harlan. This is Sanjay here. Let me take that. So like you correctly said, Gen 5 still has a lot of legs on it. Let's be very clear on that. Like Mike noted, we do have platforms that are still ramping and still to come. So to that standpoint, we do expect Gen 5 to be with us for some time. And in terms of Gen 6, again, it's driven by the pace of innovation that's happening on the AI side. As you probably know, GPUs are not fully utilized; some reports put it at around 50%. So there's still a lot of growth tied to connectivity, which is essentially what's holding utilization back, meaning there's a pace and a need to adopt faster speeds and links. So, with NVIDIA announcing their Blackwell platform, those are the first set of GPUs that have Gen 6 on them. So from that standpoint, we do expect some of those deployments to happen in 2025. But in general, others are not far behind, based on public information that's out there. So we do expect the cycle time for Gen 6 adoption to perhaps be a little bit shorter than Gen 5, especially in the AI server application, more so than general purpose compute, which is still going to lag when it comes to PCIe Gen 6 adoption.

Operator: Your next question will come from the line of Joe Moore with Morgan Stanley.

Joe Moore: Following on from that, can you talk about PCIe Gen 5 in general purpose servers? It seems like if I look at the CPU penetration of Gen 5, we're still at a pretty early stage. Do you see growth from general purpose, and what are the applications driving that?

Sanjay Gajendra: Absolutely. On general purpose compute, the main place where the PCIe Retimer gets used tends to be storage connectivity, where you have SSDs on the back of the server. So to that standpoint, there are two things that have been holding it back, or three things perhaps. One is just the focus on AI. I mean, most of the dollars are going to the AI server application compared to general compute. The second thing is just the ecosystem readiness for Gen 5, primarily on the SSD side, which is starting to evolve with many of the major SSD NVMe players providing or ramping up Gen 5 based NVMe drives.
The third one really has been the CPU platforms. If you think about it, both Intel and AMD are on the cusp of introducing their next significant platforms, whether it is Granite Rapids for Intel or Turin from AMD. That is expected to drive the introduction of new platforms. And if you combine that with the SSDs being ready for Gen 5, and based on the design wins that we already have, you can expect those things to be contributing factors as dollars start flowing back into the general purpose compute side.

Joe Moore: And for my follow-up, you just mentioned Granite Rapids and Turin, which are the first kind of volume platforms supporting CXL 2. The CPUs will be out, but what are you hearing in terms of what the initial adoption will be, and how quickly do you think that technology can roll out in 2025?

Sanjay Gajendra: Yes. Let me start off by saying that every hyperscaler is, in some shape or form, evaluating and working with CXL. So it's alive and well. I think where the focus really has been in terms of CXL is on the memory expansion use case, specifically for CPUs. And the expansion could be for reasons like adding more memory capacity for large database applications. And the second use case, of course, is for more memory bandwidth, which is for HPC type applications. So the thing that's been holding it back is the availability of CPUs that support CXL at a production quality level. That will change with Granite Rapids and Turin being available. So at this point, what we can say is that we've been providing chips for quite some time. We've been in preproduction and supported the various evaluation and POC type activities that have happened with our hyperscaler customers. So, to that standpoint, we do expect revenue to start coming in 2025 from the memory expansion use case for CXL.

Operator: Your next question will come from the line of Tore Svanberg with Stifel.

Tore Svanberg: Yes. Thank you. And let me add my congratulations. My first question is on PCIe Gen 6. So Sanjay, you just mentioned that the design-in cycle is going to be shorter than Gen 5. Since it's backwards compatible with your Gen 5, and especially given the COSMOS software platform, should we assume that you will basically retain most of those sockets that you already had in Gen 5, and then obviously some new ones as well for Gen 6?

Sanjay Gajendra: That's the goal for the company. We have the COSMOS software, and like I noted, PCI Express is one of those protocols which, unlike Ethernet, tends to be a little messy, meaning it's something that's been around for a long time. It's a great technology, but it also requires a lot of handholding. And for us, being in the customers' platforms, bringing up systems that ramp up to millions of devices, has allowed us to understand the nuances: what works, what doesn't work, how you make the link perform at the highest rate. That tribal knowledge is something we've captured within the COSMOS software that we built, running both on our chips as well as customers' platforms. So we do expect that as Gen 6 starts to materialize, a lot of those learnings will be carried over. Now, you're right that there's been a lot of competition that has come in as well. But we believe that when it comes to competition, they could have a product similar to ours.
But no matter what, there is a qualification time that's essential when it comes to connectivity type chips, just given the interoperation work and getting the kinks out and so on. Meaning, you could have a perfect chip yet have a failing system. The reason for that is the complexity of the system and how the PCI Express standard is defined. So to that standpoint, I agree with what you said, in the sense that we have the leading position in the Retimer market for PCIe today, and we expect to build on that, both with the new features we have added in PCIe Gen 6 and the AEC product line, and also with the tribal knowledge that we have built by working with our partners over the last three, four years.

Tore Svanberg: That's a great perspective. And as my follow-up, I had a question on AECs. It sounds like that business is going to start ramping late this year. First of all, is that with multiple cable partners? And then related to that, are you the only company today that has an AEC at 7 meters?

Sanjay Gajendra: I don't know about being the only one; I would probably ask you to do some research on where the competition is. But from a Retimer standpoint, which goes into these, we do have a leading position. So based on that, and the customer traction that we're seeing, I would imagine that we are the main provider here. This one is an interesting use case. So far, PCI Express, as you know, was defined to be inside the server. But what is happening now, and this is why we're excited about PCIe AECs, is that we are opening up a new front in terms of clustering GPUs, meaning interconnecting accelerators. That is where the AECs will play, and that is a new opportunity that goes along with the Ethernet AECs we already provide, which are also used for interconnecting GPUs on the backend network. So, overall, we believe that combining our PCIe AEC solution and Ethernet AEC solution, we're well set for some of these evolving trends, and we expect revenue to start coming in in the latter half of this year. And on PCIe, just to clarify what I initially said, we do believe we are the only one; I just don't know if there is someone else talking about it that is not yet in the public domain.

Operator: Your next question will come from the line of Blayne Curtis with Jefferies.

Blayne Curtis: Maybe first one for you, Jitendra. You mentioned the different architectures; I think Harlan asked on it. Obviously, you have a lead customer and it's a lot of CPU to GPU connections; that's the nature of where the market volume is. But I'm curious, you mentioned backend fabrics a bunch. Is that still conceptual? Are you seeing designs for it? And maybe just talk about the widening out of the applications the Retimers are being used for?

Jitendra Mohan: Great question. So, there are many applications where our Retimers are used. Of course, we are most known for the connectivity from the GPU to the head node. That is where a lot of the deployments are happening. But these new applications also speak to how rapidly the AI systems are evolving. Every few months, we see a new AI platform come up, and that opens up additional opportunities for us. One of those is to cluster GPUs together. There are two main protocols used to cluster GPUs, in addition to NVLink, of course: PCI Express and Ethernet.
And as Sanjay just mentioned, we now have solutions available to interconnect GPUs together, whether over PCI Express and/or Ethernet. Specifically, in the case of PCI Express, some of our customers who want to use PCI Express for clustering GPUs together are now able to do so using our PCI Express Retimers offered in the form of an active electrical cable. So this business is going to be in addition to the sustaining business that we have today in connecting GPUs to head nodes. Now we are connecting GPUs together in a cluster. And as you know, these are very intense, very dense mesh connections, so they can grow very, very rapidly. So we are very excited about where this will grow, starting with some revenue contributions later this year.

Blayne Curtis: And then maybe a question for Mike. The gross margin remained quite high. You said it was mix. Maybe you were just being kind of conservative with the IPO, but I was just kind of curious how the mix came in. I think it's mostly Retimers, and I know as the other products start to ramp that will be the headwind. So how do you think about the rest of the year? Should we have it come down gradually with mix as those new products ramp, toward this 70% that you're guiding to?

Mike Tate: Yes. So just to remind everybody, our standalone ICs carry a pretty high margin relative to our hardware solutions. So when the mix gets a little more balanced between hardware and standalone ICs, we expect our long-term gross margins to trend to 70%. In Q1, we were heavily weighted to standalone ICs, a very favorable mix, and that's how we enjoyed the strong gross margins. As we go through the balance of this year and into next year, we will see an increasing mix of our modules, and also add-in cards for CXL as well. So we think we'll have a gradual trend down towards our long-term model over time as that mix changes.

Operator: Your next question will come from the line of Thomas O'Malley with Barclays.

Thomas O'Malley: Mike, I just wanted to ask, I know you may not be giving segment details specifically, but could you talk about what contributed to the revenue in the quarter? And then looking out into June, could you talk about the revenue mix, maybe some sequential help on what's growing? Obviously, the non-IC business is growing, just given the fact that gross margins are pressured a bit, but any color on the segments would be helpful to start.

Mike Tate: Sure. So as I mentioned, we started shipping into AI server platforms in volume in Q3, and a lot of our customers are still in ramp mode to the extent we've been shipping for the past couple of quarters. But there are still a lot of designs that haven't even begun to ramp. So we're still in the early phases, and if you look out in time, we see the Gen 5 piece of it in AI continuing to grow into next year as well. So as you look into Q2, the growth that we're guiding to is still largely driven by the Aries Gen 5 deployment in AI servers, both for existing platforms with increased unit volumes, and also as new customers begin their ramps.

Thomas O'Malley: And then just a broader one. In talking with NVIDIA, they're referencing their GB200 architecture becoming a bigger percentage of the mix, with NVL72 being more of the deployments that hyperscalers are taking.
When you look at the Hopper architecture versus the Blackwell architecture and their NVL72 platform, where they're using NVLink amongst the GPUs, can you talk about the puts and takes when it comes to your retiming product? Do you see an attach rate that's any different than the current generation?

Jitendra Mohan: Let me take that. Great question. First, let me say that we are just at the beginning phases of AI. We will continue to see new architectures being produced by AI platform providers at a very rapid pace, just to match up with the growth in AI models. And on top of that, we'll see innovative ways that hyperscalers will deploy these platforms in their cloud. So as these architectures evolve, so do the connectivity challenges. Some challenges are going to be incremental, and some are going to be completely new. So what we believe is, given the increasing speeds and increasing complexities of these new platforms, we do expect our dollar content per AI platform to increase over time. We see these developments providing us good tailwinds going into the future. Now, to your question about the GB200 specifically: first of all, we cannot speak about specific customer architectures. But here is something that is very clear to see. As the AI platform providers produce these new architectures, the hyperscalers will choose different form factors to deploy them. And in that way, no two clouds are the same. Each hyperscaler has unique requirements and unique constraints in deploying these AI platforms, and we are working with all of them to enable these deployments. This combination of new platforms and very cloud specific deployment strategies presents great opportunities for our PCIe connectivity portfolio. And to that point, as Sanjay mentioned, we announced the sampling of our Gen 6 Retimer during GTC. If you look at our press release, you will see the broad support from AI platform providers. And to this day, to the best of our knowledge, we are still the only one sampling a Gen 6 solution. So, on the whole, given the fact that speeds are increasing, complexity is increasing, and in fact the pace of innovation is going up as well, these all play to our strengths, and we have customers coming to us for new approaches to solve these problems. So we feel very good about the potential to grow our PCIe connectivity business.

Operator: Your next question will come from the line of Quinn Bolton with Needham.

Quinn Bolton: I just wanted to follow up on the use of PCI Express in the GPU to GPU backend networks. I think that's something you had historically excluded from your TAM, but it looks like it's becoming an opportunity here and starts to ramp in the second half of this year. Wondering if you could just talk about the breadth of some of the custom AI accelerators that are choosing PCI Express as their interconnect over, say, Ethernet? And then I've got a follow-up.

Jitendra Mohan: Again, great question. So just to follow up on the response that we provided before, there are two or three dominant protocols that are used to cluster GPUs together. The one that's most well known, of course, is NVLink, which is what NVIDIA uses and is a proprietary interface. The other two are Ethernet and PCI Express. We do see some of our customers using PCI Express, and I think it's not appropriate to say who, but certainly PCI Express is a fairly common protocol. It is the one that's natively found on all GPUs and CPUs and other data center components.
Ethernet is also very popular, and to the extent that a particular customer chooses to use Ethernet or PCI Express, we are able to support them both with our solutions, the Aries PCIe Retimer family as well as the Taurus Ethernet Retimer family. We do expect these two to make meaningful contributions to our revenue, as I mentioned, starting at the end of this year and then, of course, continuing into next year.

Quinn Bolton: And my second question is, you guys have talked about the introduction of new products as a TAM expansion activity, and I'm not going to ask you to introduce them today. But just in terms of timing as we think out, are these new products on a timeline of, sort of, introduction later this year or 2025, with revenue ramp in 2026? Is that the general framework investors should be thinking about for the new products you've discussed?

Sanjay Gajendra: Again, I think we as a company don't talk about unreleased products or the timing of them. But what I can share with you is the following. First, we've been very fortunate to be in the central seat of AI deployment and to enjoy a great relationship with the hyperscalers and AI platform providers. So we get to see a lot, and we get to hear a lot in terms of some of the requirements. So clearly, we are going to be developing products that address the bottlenecks, whether on the data side, the network side or the memory side. We are working on several products, as you can imagine, that would all be developed ground up for AI infrastructure and enable connectivity solutions that help deploy AI applications sooner. There is a lot going on: a lot of new infrastructure, a lot of new GPU announcements, CPU announcements. So, you can imagine, given the pace of this market and the changes that are upcoming, we do anticipate that this will all start having a meaningful and incremental revenue impact on our business.

Operator: Your next question will come from the line of Ross Seymore with Deutsche Bank.

Ross Seymore: I wanted to go into the ASIC versus GPU side of things. As ASICs start to penetrate this market to certain degrees, how does that change, if at all, the Retimer TAM that you would have? And I guess even the competitive dynamic in that equation, considering one of the biggest ASIC suppliers is also an aspiring competitor of yours?

Jitendra Mohan: So, great question again. Let me just refer back to what I said, which is we will see more and more different solutions come to the market to address the evolving AI requirements. Some of them are going to be GPUs from the known AI providers like NVIDIA, AMD and others. And some others will be custom built ASICs, built typically by hyperscalers, whether they are AWS or Microsoft or Google and others. And the requirements for these two kinds of systems are common in some ways, but they do differ. For example, what particular type of backend connectivity they use, and exactly what the ins and outs going into each of these chips are. The good news is, with the breadth of portfolio that we have and the close engagement with the several ASIC providers as well as the GPU providers, we understand the challenges of these systems very well. And not only are we providing solutions that address those today with the current generation, we are engaged with them very closely on the next generation, on the upcoming platforms, whether they are GPU based or ASIC based, to provide these solutions. A great example was the Aries SCM, where, using our trusted PCI Express Retimer solution,
we enabled a new way of connecting some of these ASICs on the backend network.

Sanjay Gajendra: And just maybe if I can add to that, one way to visualize the connectivity market, or subsystem, is as the nervous system within the human anatomy. It's one of those things where you don't want to mess with it. Yes, there will be ASIC vendors, and there are off-the-shelf options. But once the nervous system is built and tested, especially one like what we have developed, which is specifically done for AI applications, and with all the qualification and software investment that hyperscalers have made, they want to reuse that across different kinds of topologies, whether ASIC based or merchant silicon based. And we do see that trend happening when we look at the customers we're engaged with today. And protocols like PCI Express, Ethernet and CXL, especially where Aries and Taurus play, are standards based. So to that standpoint, whatever eventual architecture is being used, we believe that we will stand to gain from it.

Ross Seymore: I guess as my follow-up, one quick one for Mike. How should we think about OpEx beyond the second quarter? I know there's a bigger step up there with a full quarter of being a publicly traded company, etcetera, but just walk us through your OpEx plans for the rest of the year, or even to the target model?

Mike Tate: Yes. Thanks, Ross. We are continuing to invest quite a bit in headcount, particularly in R&D. There are so many opportunities ahead of us that we'd love to get a jump on those products and also improve the time to market. That being said, we're pretty selective on who we bring into the company, so that will meter our growth. And we believe our OpEx, although it's going to be increasing, will probably not increase at the rate of revenue over the near and long term. And that's why we feel good about a long-term operating margin model of 40%. So over time, we feel confident we can trend in that direction even with increasing investment in OpEx.

Operator: Your next question will come from the line of Suji Desilva with ROTH MKM.

Suji Desilva: Hi, Jitendra, Sanjay, Mike, congrats on the first quarter here. On the backend, the addressable market that's non-NVLink, I'm trying to understand whether the PCIe and Ethernet opportunities there will be adopted at a similar pace out of the gate, or whether PCIe would lead that adoption in the non-NVLink backend opportunity?

Sanjay Gajendra: It's hard to say at this point, just because there is so much development going on here. I mean, you can imagine the non-NVIDIA ecosystem will rely on standards-based technologies, whether it is PCI Express or Ethernet. And the advantage of PCI Express is that it's low latency, significantly lower latency compared to Ethernet. So there are some benefits to that. And there are certain extensions that people consider adding on top of PCI Express when it comes to proprietary implementations. So, overall, from a technology standpoint, we do see PCI Express having that advantage. Now, Ethernet also has been around, so we'll have to wait and see how all of this develops over the next, let's say, 6 to 18 months.

Jitendra Mohan: Yes, to add to what Sanjay said, I think the good news for us in some ways is that we don't have to pick, we don't have to decide which one. We have chips, we have hardware, and we have software. So we have customers that come to us and say, hey, I need this for my new AI platform.
Can you help me with that? And that's what we've been doing.

Suji Desilva: And then a question perhaps for Mike. The initial AEC programs are ramping with maybe a few customers this year, a few customers next year, or perhaps all of them this year. But do you perceive that those will be larger, lumpier program-based ramps, Mike? Or will those be a steady kind of build-out as the servers grow?

Mike Tate: I think these product ramps will mirror our other product ramps well. They'll gradually build over a few quarters till they hit steady state, and if you layer them on top of each other, it just continues to build a nice growing revenue profile. So as you look at Taurus in 2024, we're shipping 200 gig right now, and then in the back half we start to ship 400 gig. And if you look into 2025, 800 gig, which is ultimately the biggest opportunity with a much broader set of customers, will be when the market really becomes very large.

Operator: Your next question will come from the line of Richard Shannon with Craig-Hallum.

Richard Shannon: Hi, guys. Thanks for taking my questions, and congratulations on coming public here. I want to follow up on a couple of topics that have been hit on, including Suji's question about the PCI Express AEC opportunity. Are these design wins, or are these kind of pre-design win ramps you're talking about this year? And I guess ultimately my question on this topic is, can this opportunity, these PCI Express AECs, become as big as your Taurus family in the foreseeable future?

Sanjay Gajendra: Yes. So these are design wins, to clarify. We have been shipping this; we announced this; we've demonstrated this at public forums. So to that standpoint, it's an opportunity that we're excited about, and like we noted early on, we expect it to start contributing revenue in the latter half of this year.

Richard Shannon: And the second question is on CXL. I think you've mentioned a couple of applications here. Maybe you can characterize the breadth of interest across hyperscalers and other customers for the ones you mentioned? And then also, for the next ones that are a little bit more expansive in nature, how do you see the testing and speccing out of those? Are those coming to market at the time you're hoping for, or is there a little bit more development required to get those to market?

Sanjay Gajendra: Yes. There are two questions. Let me take the first one, which is the CXL side. For CXL, there are four main use cases to keep in mind: memory expansion; memory tiering, where you're trying to go for a TCO type of angle; memory pooling; and what are called memory drives, which Samsung and others are providing. We believe memory drives are more suitable for enterprise customers, whereas the first three are more suitable for cloud scale deployment. And there, again, memory pooling is something that's further out in time, in our belief, just because it requires software changes. So the ones that are more short-term to medium-term are memory expansion and memory tiering. And like I noted early on, all the major hyperscalers, at least in the U.S., are engaged on the CXL technology. But it is going to be a matter of time, with both CPUs being available and dollars being available from a general purpose compute standpoint. And then in terms of your second question, was that more on new products? Was that the context for it?

Richard Shannon: Yes.

Sanjay Gajendra: Yes.
So again, we don't talk about exact time frames, but you can imagine, the last product we announced was a little over a year ago. So our engineers have not been quiet; they've been working hard. To that standpoint, we are working very diligently, based on a lot of interest and engagement from customers that we've already been working with.

Operator: There are no further questions at this time. I'll turn the call back over to Leslie Green for closing remarks.

Leslie Green: Thank you, everyone, for your participation and questions. We look forward to seeing many of you at various financial conferences this summer and updating you on our progress on our Q2 earnings conference call. Thank you.

Jitendra Mohan: Thank you, guys.

Operator: This concludes today's conference call. You may now disconnect.
Related Analysis

Barclays is Bullish on Astera Labs

Barclays analysts initiated coverage of Astera Labs (NASDAQ:ALAB) with an Overweight rating and a price target of $85. The analysts highlighted that Astera Labs is well positioned as an early leader in a variety of data center connectivity products, which are crucial for the evolving AI and cloud computing sectors. The company's offerings focus on enabling high-speed data transfers and expanding system bandwidth within data centers, supporting multiple protocols such as PCIe, CXL and Ethernet.

With the surge in AI investments and the need for higher bandwidth driving the adoption of next-generation platforms, Astera’s innovations in interconnect technology are becoming increasingly vital. The company’s growth is further supported by its integrated software solutions, strong ties with major hyperscalers, and a leading operational approach.