Oracle Corporation (ORCL.US) FY26Q3 conference call: Embracing AI aggressively, with 1,000 AI agents deployed in Fusion

15:34 11/03/2026
GMT Eight
Recently, Oracle Corporation (ORCL.US) held its FY26Q3 earnings call. In response to investor concerns that "SaaS software is dead, AI will replace it," Oracle stated that while AI is indeed disruptive, Oracle is the disruptor, because the company has embedded AI directly into its applications as complete functionality at no additional cost. These capabilities ship as part of the regular quarterly upgrade cycle and are included in the application suites. Oracle said it is very satisfied with its position here and is embracing AI aggressively: the company has already launched 1,000 AI agents in Fusion, and the banking suite alone includes hundreds of AI agents. The company is also building entire automated ecosystems: automated healthcare, automated financial services, automated retail. AI has expanded Oracle's vision, broadening the scope of its SaaS suites from automating individual applications to automating entire ecosystems.

Both the multi-cloud database and AI infrastructure businesses are growing very fast: multi-cloud database revenue increased 531% year over year, and AI infrastructure revenue increased 243% year over year. Demand for these businesses remains high, and Oracle has a clear execution plan to convert that demand quickly into high-margin recurring revenue. The company continues to innovate on its business model. On the last earnings call, it shared scenarios for growing AI infrastructure incrementally without taking on debt or issuing stock; since then, Oracle has signed over $29 billion of contracts under this new model. This model, which bundles hardware and is funded by customer prepayments, lets the company keep expanding without consuming any of Oracle's cash flow.
This $29 billion is in addition to other deals signed in the current quarter.

Question and Answer Session

Operator: We will now begin the question and answer session. Our first question comes from John DiFucci of Guggenheim. Please go ahead.

John DiFucci, Analyst: Thank you. Wow, a lot of information. I'll leave the AI infrastructure question for others to ask. But we've heard Doug talk about the "halo effect" the AI infrastructure business is having on your other businesses. This quarter's performance was very strong, and you mentioned that RPO growth is coming from large AI contracts. At the same time, what we're now hearing on the ground is that this halo effect is translating into businesses outside AI infrastructure. In particular, we're seeing substantial growth in activity around more traditional cloud workloads, especially in the pipeline, including dedicated region clouds, sovereign clouds, and even Alloy deals we're just starting to hear about. Beyond the application-related transactions Mike mentioned, such as upgrades, are there signs of momentum building in these businesses? Is my thinking correct? Also, if possible, could you give us some insight into the CapEx outlook for fiscal year 2027?

Mike Sicilia, Chief Executive Officer of Applications: Alright, John. It's Mike; I'll take this question. Yes, we definitely see the halo effect, and I'll add some detail. On the applications side, we have trained so many AI models on OCI, and those models sit so close to where our applications are deployed, that we can embed high-quality AI services directly into the applications. So we're not only serving these customers and providing training capacity for model vendors; we're also embedding a large amount of that output directly into the applications.
Of course, we do prompt engineering to make it specific to the business. But the key point is that Oracle is the custodian of customers' critical data, responsible for a significant amount of business data; our deployments sit very close to these models, and combining the two lets customers extract value from AI quickly. If you've heard any criticism of AI, it's often the complaint, "I can't get value quickly." But when you package AI as a service and expose the private data we safeguard to AI systems (at the application level, obviously), we see excellent results. I just mentioned some vertical industries, but I think this applies universally across industries.

Another very interesting halo effect is using our infrastructure, pure OCI infrastructure, as a "budget creator" for customers. As you've heard us say before, we're faster and cheaper than anyone else. When customers are contemplating large application or infrastructure transformations, we can often help them create budget simply by moving their workloads to OCI, helping fund the transformation, because we can run those workloads faster, more efficiently, and more cost-effectively than competitors.

Finally, before passing the CapEx question to Doug, another interesting halo effect is around sovereign AI. Our sovereign strategy is neither new nor a reflex to what's happening in the world. Combined with our Alloy strategy, we see growing sales pipelines globally. Our product is highly differentiated: whether a deployment involves 3 racks or 500 racks, we can not only provide a smaller form factor but also deliver full OCI services on top of it. We see this as a huge competitive advantage in the market. So, combining applications, OCI's AI services, and sovereign clouds: yes, this is quite a significant halo effect.
Doug Kellin, Chief Financial Officer: Yes, John, first of all, I have to admit it's always creative when you ask two questions at once, and it's always interesting. On CapEx, we'll provide more information after the end of this fiscal year and discuss next year's CapEx then. But I can say a few things. Obviously, from what Clay just described, the most important thing to consider is the decoupling between CapEx and Oracle's funding needs. With these additional financing mechanisms there may be additional CapEx, but it doesn't require a cash outlay from Oracle, which is very interesting. On top of this, we remain committed to the goal we discussed last quarter: maintaining Oracle's investment-grade rating and keeping the amount of financing within the range we discussed. As we announced, our financing amount for this calendar year is $50 billion. So, John, for more on CapEx, we'll provide that after the end of the next quarter.

John DiFucci, Analyst: Thank you for the detailed background, Doug. And Mike, the logic in your prepared remarks about AI and how Oracle is addressing it is very clear; everyone should take a look. Thank you all, impressive work.

Operator: The next question comes from Mark Murphy of Morgan Stanley. Please go ahead.

Mark Murphy, Analyst: Thank you, and congratulations on the accelerating growth. Clay, as Oracle moves deeper into AI inference, what do you believe is the right strategy for optimizing data center locations? For example, your huge centralized data centers in Texas and Wyoming may be close to power, but they are quite far from population centers and the dense fiber networks of the East Coast. We can't help but wonder whether users and devices are too far away.
So, as you move into the inference business, do you believe these data centers need to shift to where the users and network traffic are?

Clay Magouirk, Chief Executive Officer of Cloud Infrastructure: Good question, Mark; I'm Clay. First, I want to lay out our view on inference and how it affects data center siting. Historically, we've mostly been doing a great deal of model training. But demand for inference is rapidly increasing everywhere. I think that's because model utilization is rising and new use cases keep emerging; anyone who has used Claude recently for software work knows how incredible these tools are. They are changing how we work. So inference will create huge demand. Now, on data center location, you mentioned latency, but there are actually several reasons to choose a site: cost, overall availability, or data sovereignty. The basis for choosing a location varies. But let's focus on your point about latency. The thing to understand is that latency is relative. If you're trying to do ultra-low-latency trading in the stock market, waiting for a 100-millisecond round trip between the US East and West Coasts is a bad idea. But if you're asking a business question and the AI model needs a few seconds of thinking to answer, an extra 40 milliseconds from New York to Wyoming has no practical impact on you. So when you talk to customers with genuinely low-latency use cases, you find that the root cause of latency today is not the location of the hardware but the type of hardware deployed. That's why you see so much innovation around AI accelerators.
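Clay's "latency is relative" point can be made concrete with a back-of-envelope calculation. The 100 ms coast-to-coast and 40 ms New York-to-Wyoming round trips are the figures from his answer; the 1 ms matching-engine time and 3-second model think time are illustrative assumptions, not numbers from the call:

```python
# Back-of-envelope: how much does added network latency matter
# relative to the time the workload itself takes?

def latency_overhead(network_rtt_ms: float, workload_ms: float) -> float:
    """Network round trip as a fraction of total response time."""
    return network_rtt_ms / (network_rtt_ms + workload_ms)

# Ultra-low-latency trading: a 100 ms coast-to-coast round trip
# dwarfs an assumed ~1 ms matching-engine workload.
trading = latency_overhead(network_rtt_ms=100, workload_ms=1)

# AI inference: 40 ms New York-to-Wyoming on top of a model that
# "thinks" for an assumed 3 seconds is barely noticeable.
inference = latency_overhead(network_rtt_ms=40, workload_ms=3000)

print(f"trading overhead:   {trading:.0%}")    # network dominates (~99%)
print(f"inference overhead: {inference:.1%}")  # network is ~1% of the wait
```

The same 40 ms that would be fatal for trading is lost in the noise of a multi-second inference call, which is why site selection can prioritize power and land instead of proximity to users.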
If you look at the roles companies like Groq or Positron play, all these different types of customers are asking, "How can we not only reduce the cost of inference but also significantly reduce the latency?" I think if you watch NVIDIA Corporation's GTC event next week, you'll see related announcements from them. Overall, as an industry, reducing latency has to start with different inference architectures. Fortunately, data center location plays a very small part in this, which gives us flexibility to site data centers where there is ample power and land, and to truly optimize for this growing demand.

Mark Murphy, Analyst: Thank you.

Operator: The next question comes from Siti Panigrahi of Mizuho. Please go ahead.

Siti Panigrahi, Analyst: Great, thank you for taking my question. I want to ask about the opportunity in your AI database and AI Data Platform. With the recent excitement around AI, enterprises are starting to adopt frontier large language model (LLM) tools. What are you hearing from customers about using their private data to train and build private LLMs? And how confident are you in the "inflection point for AI database growth" discussed at the Analyst Day in October?

Clay Magouirk, Chief Executive Officer of Cloud Infrastructure: Thank you; I'm Clay. I think this question has two parts: how much adoption we're seeing in building private LLMs, and how much demand we're seeing for applying AI to private data. Early on, many people thought most customers would do highly specific training of their own large language models. For the most part, it's proven otherwise. Instead, what's popular, and increasingly so, is using the best models and combining them with private data in a secure way. We see huge demand for this approach.
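The pattern Clay describes, pairing an off-the-shelf frontier model with private data rather than training a bespoke one, is essentially retrieval augmentation: fetch the relevant private records, then hand them to a general-purpose model as context. A minimal sketch of the idea follows; the toy keyword retrieval, the sample documents, and the prompt format are illustrative assumptions, not Oracle's implementation:

```python
# Sketch of "best model + private data": retrieve relevant private
# records and pass them to a general-purpose model as prompt context,
# instead of fine-tuning a custom model on the data.

from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str

def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
    """Toy keyword-overlap ranking; real systems use vector search."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(terms & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, corpus: list[Document]) -> str:
    """Assemble the context a frontier model would receive. The private
    data stays inside the prompt; the model itself is unchanged."""
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    Document("inv-001", "invoice 4417 overdue 30 days vendor Acme"),
    Document("hr-002", "employee onboarding checklist updated"),
]
print(build_prompt("which invoice is overdue", corpus))
```

The security property Clay emphasizes falls out of the design: the base model is never trained on customer data, so the private records exist only inside a request the customer controls.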
As you heard in Mike's remarks just now, we're embedding these AI models into our applications; that's one use case. But not everything runs in Oracle's applications, and customers also write many custom applications. So we've added many features to the Oracle AI database to connect these models easily, through an MCP (Model Context Protocol) server or via natural-language-to-SQL. At the same time, our AI Data Platform product is designed to address this problem: you have a lot of data, whether application data, custom data in various data lakes and warehouses, or data in structured databases, and together they give you an intelligent platform where you can quickly build applications and access the best models from multiple providers. Across the entire stack, we see great momentum. This is why I highlighted the growth of our multi-cloud database business in the prepared remarks. To benefit from the latest and best AI, customers must first be in the cloud, yet a lot of data is still not there. So we see customers accelerating the migration of their most critical private data to the cloud so they can later apply the most advanced AI to it.

Siti Panigrahi, Analyst: Excellent, thank you for the context.

Operator: The next question comes from Mark Moerdler of Sanford Bernstein. Please go ahead.

Mark Moerdler, Analyst: Thank you, and congratulations on a truly outstanding quarter. Clay, now that you have completed a large debt financing, could you explain how confident you are in the value created by the AI data center business itself, given the cost of building AI data centers and the capital cost of financing them? And as a related question, could you talk more about sovereign clouds?
Could you discuss how you plan to turn the AI data center business into a role as a sovereign cloud AI provider, and how that should affect Oracle's value?

Clay Magouirk, Chief Executive Officer of Cloud Infrastructure: Alright. I'm Clay; I'll answer the first part and then ask Mike to cover sovereign clouds. When you think about the overall profitability of these AI data centers, there are two main aspects. First, the profitability of the accelerators themselves: we've previously guided to gross margins of 30% to 40% here, and that still holds. As we operate these data centers and reduce delivery, network, hardware, and electricity costs, we expect that number to keep rising, so we're very satisfied there. Second, within these AI data centers, for both inference and training workloads, you buy more than just AI accelerators: there is also a lot of general-purpose compute, high-performance block storage or large-scale blob storage, load balancing, authentication, security products, and so on. Typically about 10% to 20% of total spending goes to these adjacent services, and depending on the service mix their margins are higher, so overall profitability will keep rising. That's before even counting our multi-cloud database business, which I mentioned earlier, with much higher margins, around 60% to 80%, and very rapid growth. Put all of this together and Oracle's overall margin picture is improving and growing quickly. Let me also address an overlooked point: what currently limits our margins is not the compute capacity we have already deployed.
For example, if I'm building a data center with four data halls, the first hall is profitable the moment I deliver it. Although our EPS and other metrics are rising steadily, the reason our margins haven't reached a higher level is simply that we have so many projects under construction at once, and those projects do carry costs. We've done well shortening construction times and keeping costs down during that period, but those costs are not zero. So while the business is in this phase of rapid growth, that is the only drag on profitability. Thankfully, we keep getting better at delivering capacity, and the capacity we deliver already has high-margin contracts signed against it. Putting these factors together, we're confident in the delivered compute capacity and in the rising profitability of the AI business. I'll leave sovereign clouds to Mike.

Mike Sicilia, Chief Executive Officer of Applications: Yes. On sovereignty, as I mentioned, I think we're in a very advantageous position. A year ago, people mostly talked about sovereignty in terms of data sovereignty, and there are indeed solutions in the market that achieve sovereignty at the primary data layer. But in areas like disaster recovery (DR), or processes such as backing up data in another country, those solutions are no longer accepted. Today, sovereignty means data sovereignty, operational sovereignty, and even contractual sovereignty, and our Alloy model can deliver all three. By delivering an integrated solution, we have a huge differentiation advantage over competitors in the sovereign cloud space: we don't just provide a sovereign region at the edge, we offer the full OCI stack.
That includes all of our OCI services, and, as you noted on the margin mix, it enables us to run our entire business application suite and the AI Data Platform inside the sovereign region. Certainly, some services carry different margins than standard infrastructure. We believe we have the greatest flexibility in contracting and delivery, as I mentioned. Most importantly, we offer all of Oracle's capabilities within these sovereign regions: not a subset, not a few edge devices, but the entire OCI ecosystem.

Mark Moerdler, Analyst: Both answers are very helpful, thank you very much. Congratulations again.

Mike Sicilia: Thank you, Mark.

Ken Bond, Investor Relations Officer: A replay of this call will be available on our investor relations website for 24 hours. Thank you for participating today. Now I'll hand the call back to Regina to conclude.

Operator: Today's call has concluded. Thank you for your participation; you may now disconnect.