Microsoft Corporation (MSFT.US) FY26Q2 earnings call: cloud business revenue exceeds $50 billion for the first time; Q3 capital expenditures expected to decline quarter-over-quarter.

09:02 30/01/2026 | GMT Eight
Microsoft Corporation (MSFT.US) held its FY26Q2 earnings conference call. Quarterly revenue reached $81.3 billion, up 17% year-on-year (15% at constant currency). Operating profit grew 21% year-on-year (19% at constant currency). Earnings per share were $4.14, up 24% year-on-year on an adjusted basis (21% at constant currency). Microsoft's cloud business revenue surpassed $50 billion for the first time, reaching $51.5 billion, up 26% year-on-year (24% at constant currency), with a gross margin of 67%.

Commercial bookings in the quarter grew 230% year-on-year (228% at constant currency), driven primarily by large multi-year commitments such as OpenAI. Commercial remaining performance obligations (RPO) increased to $625 billion, up 110% year-on-year, with roughly 25% of the balance expected to be recognized as revenue over the next 12 months (up 39% year-on-year). Approximately 45% of the commercial RPO balance comes from OpenAI.

For the third quarter, the company expects revenue of $80.65 billion to $81.75 billion (up 15%-17% year-on-year), cost of revenue of $26.65 billion to $26.85 billion (up 22% year-on-year), and operating expenses of $17.8 billion to $17.9 billion (up 10%-11% year-on-year). Capital expenditures are expected to decline quarter-on-quarter due to normal fluctuations in the pace of cloud infrastructure construction and lease deliveries; short-lived assets are expected to account for a similar share of capex as in Q2.

Executives said the company's overall strategy focuses on three layers of the technology stack: the cloud and "token factory," the agent platform, and first-class agent experiences. The impact of AI diffusion on GDP and TAM growth is just beginning. The agent platform is the next-generation application platform, and agents are the new type of application. To build, deploy, and manage agents, customers need model catalogs, fine-tuning services, orchestration tools, context engineering services, and capabilities for AI safety, governance, observability, and security protection.

Q&A

Q: Investors are concerned about ROI: capex is growing faster than expected, while Azure growth is slightly slower than expected. How does the expansion of computing capacity affect Azure growth, and how do you assess the ROI of these investments?

A: Azure growth guidance should be read more as a guide to the capacity allocated to Azure. Our capex (especially GPUs, CPUs, and the like) is based on long-term demand decisions. We need to support first-party application sales such as M365 Copilot and GitHub Copilot, invest in R&D and product innovation, and allocate GPUs to AI talent to accelerate product development; the remaining capacity supports Azure demand. If all newly deployed GPUs were allocated to Azure, its revenue growth would exceed 40%. The most important thing to understand is that the investment benefits customers at every layer of the technology stack, which shows up in revenue growth across the business and in the opex growth from our investment in talent.
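As a back-of-envelope check, the headline figures above are internally consistent. The inputs in the sketch below come from the call; the derived values are illustrative arithmetic, not company disclosures:

```python
# Back-of-envelope check on figures disclosed in the call.
# Inputs are from the call; derived values are illustrative only.

cloud_revenue = 51.5    # $B, Microsoft Cloud revenue this quarter
cloud_yoy = 0.26        # 26% year-on-year growth
rpo = 625.0             # $B, commercial RPO
next_12m_share = 0.25   # ~25% expected to convert within 12 months
openai_share = 0.45     # ~45% of RPO attributed to OpenAI

prior_year_cloud = cloud_revenue / (1 + cloud_yoy)
print(f"Implied year-ago cloud revenue: ${prior_year_cloud:.1f}B")       # ~$40.9B

print(f"RPO converting within 12 months: ${rpo * next_12m_share:.0f}B")  # ~$156B
print(f"Non-OpenAI RPO balance: ${rpo * (1 - openai_share):.0f}B")       # ~$344B
# The last line matches the "about $350 billion" management cites below.
```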
Q: Servers are depreciated over six years, while the average RPO term is only 2.5 years (2 years in the previous quarter). How can investors be confident that AI-centric capex will generate enough revenue within the six-year hardware life to deliver steady revenue and gross margin growth?

A: The average term reflects a mix of contract types. Commercial contracts such as M365 carry shorter terms (e.g., 3 years), which pulls the overall average down; the remainder consists of longer-term Azure contracts, whose term extended from about 2 years to 2.5 years this quarter. Most current capital expenditure and purchased GPUs are locked in by contracts for most of their useful life, so the risk you describe does not exist. Looking at Azure alone, the RPO term is longer, and the GPU contracts we refer to (including some of our largest customers) cover the GPU's entire useful life. In addition, we continuously optimize the entire fleet through software, including older generations; we refresh hardware annually in line with Moore's Law and optimize globally through software. Delivery efficiency also improves over the hardware's life, so margins actually improve over time. This has always been visible in the CPU fleet.

Q: With roughly 45% of RPO related to OpenAI, can you comment on its sustainability? There are some concerns about the associated risks; what is your perspective?

A: We disclose that number precisely because the remaining 55% (about $350 billion) relates to our broad business portfolio, covering a wide range of solutions, Azure, and a broad customer base across industries and regions. It is a massive RPO balance, larger and more diversified than most peers', and we have great confidence in it. That portion grew 28% on its own, reflecting continued growth across customer segments, industries, and regions. As for the OpenAI partnership, it is a great relationship. We continue to serve as their scaled provider and are excited about it. We support one of the most successful businesses and remain optimistic. It keeps us at the forefront of building technology and application innovation.

Q: Can you comment specifically on the scale of capacity additions? The roughly 1-gigawatt increase last quarter is remarkable, and expansion is accelerating. Investors are particularly interested in the Fairwater projects in Atlanta and Wisconsin and would like to know the extent of capacity expansion over the next few quarters, regardless of how it is allocated.

A: We are adding capacity as quickly as we can. The specific sites you mention (Atlanta, Wisconsin) are multi-year delivery projects, so the focus should not be on individual sites. Our core task is to add capacity globally, with most of it concentrated in the United States (including those two locations) and the rest in other regions worldwide, to meet customer demand and growing usage. We will keep building long-lived infrastructure, securing power, land, and facilities so that GPUs and CPUs can be deployed as quickly as possible once construction is complete, while improving construction and operational efficiency to achieve the highest possible utilization. Again, this is not just about two locations; these are multi-year delivery timelines, and the key is completing this work as quickly as possible across every site under construction or about to start.
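To make the depreciation argument concrete, here is a minimal sketch. Only the six-year useful life comes from the call; the fleet cost and contract length below are hypothetical illustrations:

```python
# Minimal sketch of the six-year depreciation argument from the call.
# Only the 6-year useful life is from the call; the cost and contract
# length are hypothetical illustrations.

useful_life_years = 6
gpu_fleet_cost = 10.0   # $B, hypothetical cost of a GPU deployment
contract_years = 5      # hypothetical contract covering most of the life

annual_depreciation = gpu_fleet_cost / useful_life_years  # straight-line
covered_share = contract_years / useful_life_years

print(f"Annual depreciation: ${annual_depreciation:.2f}B")
print(f"Share of useful life covered by contract: {covered_share:.0%}")
# A contract covering 5 of 6 years leaves only the final year's capacity
# to be re-sold, which is the sense in which management says the
# depreciation/RPO mismatch risk "does not exist".
```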
Q: The Maia 200 accelerator's inference performance looks very significant, especially compared with existing TPUs, Trainium, and Blackwell. How do you view this achievement, and to what extent is the chip becoming a core competitive advantage for Microsoft? And how does this affect the gross margin outlook for inference costs?

A: We have accumulated significant expertise in first-party silicon, especially in the performance achieved running GPT-5.2. It proves that when new workloads emerge, you can innovate end-to-end across the model, the chip, and the entire system: it is not just the silicon itself, but rack-level networking and memory working together, optimized for specific workloads. We work closely with our AI Superintelligence team, and all the models we build are optimized for Maia. Overall this is still very early, with innovation ongoing, and low-latency inference is what everyone is now talking about. We make sure we are not locked into any single technology, and we have strong partnerships with NVIDIA and AMD, who are innovating alongside us. We want our fleet to have the best total cost of ownership at any given time. This is not a one-shot product game; you have to stay ahead continuously, which means integrating a great deal of external innovation into your fleet to gain a fundamental total-cost-of-ownership advantage. So we are excited about Maia, Cobalt, our DPU, and our NICs, and we have strong systems capability for vertical integration. But being able to vertically integrate does not mean we only vertically integrate; we want to keep our flexibility, as you can see.

Q: Can you elaborate on the momentum of companies undertaking frontier transformations? We have also seen customers achieve breakthrough returns after adopting Microsoft's AI technology stack. As they progress toward becoming "frontier firms" with Microsoft, how far do you expect their spending to expand?

A: We see continued adoption of our three suites (M365, security, GitHub), and they compound on each other. Work IQ, for example, is crucial: for any company using Microsoft services, the most critical database is the underlying Microsoft 365 data, which contains all the implicit information about people, relationships, projects, outcomes, and communications. That is a hugely important asset for any business process and workflow. The agent platform is now genuinely changing companies: deploying these agents helps businesses coordinate their work and have greater impact. Companies are also using services in Fabric and Foundry, along with GitHub tools and low-code tools, to transform areas such as customer service, marketing, and finance by building their own agents. The most exciting part is the convergence of new agent systems such as M365 Copilot, GitHub Copilot, and Security Copilot, combining all the data and deployment benefits; that may be the most transformative effect today.
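The "best total cost of ownership at any given time" framing can be illustrated with simple fleet arithmetic. Every number below is hypothetical; the call discloses no TCO figures:

```python
# Illustrative cost-per-token comparison across accelerators.
# All numbers are hypothetical; the call discloses no TCO figures.

from dataclasses import dataclass

@dataclass
class Accelerator:
    name: str
    capex: float           # $ per unit
    life_years: int        # depreciation period
    power_kw: float        # average draw per unit
    tokens_per_sec: float  # sustained inference throughput
    utilization: float     # fraction of time doing useful work

    def cost_per_million_tokens(self, power_price_kwh: float = 0.08) -> float:
        hours = self.life_years * 365 * 24 * self.utilization
        tokens = self.tokens_per_sec * 3600 * hours
        total_cost = self.capex + self.power_kw * power_price_kwh * hours
        return total_cost / tokens * 1e6

fleet = [
    Accelerator("vendor GPU", capex=40_000, life_years=6, power_kw=1.2,
                tokens_per_sec=9_000, utilization=0.6),
    Accelerator("in-house ASIC", capex=25_000, life_years=6, power_kw=0.9,
                tokens_per_sec=7_000, utilization=0.6),
]
for a in fleet:
    print(f"{a.name}: ${a.cost_per_million_tokens():.4f} per 1M tokens")
```

On these made-up inputs, lower capex and power can offset lower raw throughput, which is why management frames the fleet decision as TCO per unit of work rather than peak chip performance.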
Q: How has Azure performed on CPUs (considering operational changes)? More broadly, are customers realizing that deploying AI properly requires moving to the cloud, and how is that driving cloud-migration momentum?

A: First, AI workloads should not be thought of as consuming only AI accelerators. Any agent calls other containers through tools, and those containers need general-purpose compute. When planning our fleet, we therefore balance AI compute against general-purpose compute: even training requires a great deal of general compute and adjacent storage, and the same goes for inference, where agentic models inherently require general-purpose compute and storage for the agents themselves, not necessarily GPUs. Second, cloud migration is ongoing; for example, SQL Server as an IaaS service on Azure continues to grow. That is why we must think about our commercial cloud and balance it with the AI cloud: as customers migrate workloads or build new ones, they need all of these infrastructure elements in the regions where they deploy.
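A rough sketch of the point about agents consuming general-purpose compute alongside GPUs. The workload ratios below are hypothetical assumptions, not disclosed figures:

```python
# Rough capacity-planning sketch for agentic workloads: each model call
# (GPU) fans out into tool/container work (CPU) plus storage I/O.
# All ratios below are hypothetical assumptions, not disclosed figures.

agent_requests_per_sec = 10_000
gpu_seconds_per_request = 0.50   # model inference
cpu_seconds_per_request = 2.00   # tool calls, containers, orchestration
storage_mb_per_request = 5.0     # context, logs, intermediate artifacts

gpu_capacity_needed = agent_requests_per_sec * gpu_seconds_per_request
cpu_capacity_needed = agent_requests_per_sec * cpu_seconds_per_request
storage_throughput = agent_requests_per_sec * storage_mb_per_request / 1024  # GB/s

print(f"GPU-seconds per second: {gpu_capacity_needed:,.0f}")
print(f"CPU-seconds per second: {cpu_capacity_needed:,.0f}")
print(f"Storage throughput:     {storage_throughput:,.1f} GB/s")
# Even with modest ratios, general-purpose compute demand exceeds GPU
# demand, which is why the fleet is planned as a balance of AI compute
# and general-purpose compute.
```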