Oracle University Podcast
Oracle University Podcast delivers convenient, foundational training on popular Oracle technologies such as Oracle Cloud Infrastructure, Java, Autonomous Database, and more to help you jump-start or advance your career in the cloud.
Encore: Cloud Data Centers - Core Concepts Part 1
04/28/2026
Curious about what really goes on inside a cloud data center? In this episode, Lois Houston and Nikita Abraham dive into how cloud data centers are transforming the way organizations manage technology. They explore the differences between traditional and cloud data centers, the roles of CPUs, GPUs, and RAM, and why operating systems and remote access matter more than ever. Cloud Tech Jumpstart: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, Anna Hulkower, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Hi there! We're hitting rewind for the next few weeks and bringing back some of our most popular episodes. So, sit back and enjoy these highlights from our archive. 00:12 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:37 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Today, we’re covering the fundamentals you need to be successful in a cloud environment. If you’re new to cloud, coming from a SaaS environment, or planning to move from on-premises to the cloud, you won’t want to miss this. With us today is Orlando Gentil, Principal OCI Instructor at Oracle University. Hi Orlando! Thanks for joining us. 01:13 Lois: So Orlando, we know that Oracle has been a pioneer of cloud technologies and has been pivotal in shaping modern cloud data centers, which are different from traditional data centers. For our listeners who might be new to this, could you tell us what a traditional data center is? 
Orlando: A traditional data center is a physical facility that houses an organization's mission-critical IT infrastructure, including servers, storage systems, and networking equipment, all managed on site. 01:44 Nikita: So why would anyone want to use a cloud data center? Orlando: The traditional model requires significant upfront investment in physical hardware, which you are then responsible for maintaining along with the underlying infrastructure like physical security, HVAC, backup power, and communication links. In contrast, cloud data centers offer a more agile approach. You essentially rent the infrastructure you need, paying only for what you use. In the traditional data center, scaling resources up and down can be a slow and complex process. In cloud data centers, scaling is automated and elastic, allowing resources to adjust dynamically based on demand. This shift allows businesses to move their focus from the constant upkeep of infrastructure to innovation and growth. The move represents a shift from maintenance to momentum, enabling optimized costs and efficient scaling. This is a fundamental shift in how IT infrastructure is managed and consumed, and precisely what we mean by moving to the cloud. 02:52 Lois: So, when we talk about moving to the cloud, what does it really mean for businesses today? Orlando: Moving to the cloud represents the strategic transition from managing your own on-premises hardware and software to leveraging internet-based computing services provided by a third party. This involves migrating your applications, data, and IT operations to a cloud environment. This transition typically aims to reduce operational overhead, increase flexibility, and enhance scalability, allowing organizations to focus more on their core business functions. 03:29 Nikita: Orlando, what’s the “brain” behind all this technology? Orlando: A CPU or Central Processing Unit is the primary component that performs most of the processing inside the computer or server.
It performs calculations, handling the complex mathematics and logic that drive all applications and software. It processes instructions, running tasks and operations in the background that are essential for any application. A CPU is critical for performance, as it directly impacts the overall speed and efficiency of the data center. It also manages system activities, coordinating user input, various application tasks, and the flow of data throughout the system. Ultimately, the CPU drives data center workloads, from basic server operations to powering cutting-edge AI applications. 04:23 Lois: To better understand how a CPU achieves these functions and processes information so efficiently, I think it’s important for us to grasp its fundamental architecture. Can you briefly explain the fundamental architecture of a CPU, Orlando? Orlando: When discussing CPUs, you will often hear about sockets, cores, and threads. A socket refers to the physical connection on the motherboard where a CPU chip is installed. A single server motherboard can have one or more sockets, each holding a CPU. A core is an independent processing unit within a CPU. Modern CPUs often have multiple cores, enabling them to handle several instructions simultaneously, thus increasing processing power. Think of it as having multiple mini CPUs on a single chip. Threads are virtual components that allow a single CPU core to handle multiple sequences of instructions, or threads, concurrently. This technology, often called hyperthreading, makes a single core appear as two logical processors to the operating system, further enhancing efficiency. 05:39 Lois: Ok. And how do CPUs process commands? Orlando: Beyond these internal components, CPUs are also designed based on different instruction set architectures, which dictate how they process commands. CPU architectures are primarily categorized into two designs: Complex Instruction Set Computer, or CISC, and Reduced Instruction Set Computer, or RISC.
CISC processors are designed to execute complex instructions in a single step, which can reduce the number of instructions needed for a task but often leads to higher power consumption. These are commonly found in traditional Intel and AMD CPUs. In contrast, RISC processors use a simpler, more streamlined set of instructions. While this might require more steps for a complex task, each step is faster and more energy efficient. This architecture is prevalent in ARM-based CPUs. 06:47 Are you looking to boost your expertise in enterprise AI? Check out the Oracle AI Agent Studio for Fusion Applications Developers course and professional certification, now available through Oracle University. This course helps you build, customize, and deploy AI Agents for Fusion HCM, SCM, and CX, with hands-on labs and real-world case studies. Ready to set yourself apart with in-demand skills and a professional credential? Learn more and get started today! Visit mylearn.oracle.com for more details. 07:22 Nikita: Welcome back! We were discussing CISC and RISC processors. So Orlando, where are they typically deployed? Are there any specific computing environments and use cases where they excel? Orlando: On the CISC side, you will find them powering enterprise virtualization and server workloads, such as bare metal hypervisors and large databases where complex instructions can be efficiently processed. High-performance computing that includes demanding simulations, intricate analysis, and many traditional machine learning systems. Enterprise software suites and business applications like ERP, CRM, and other complex enterprise systems that benefit from fewer steps per instruction. Conversely, RISC architectures are often preferred for cloud-native workloads such as Kubernetes clusters, where simpler, faster instructions and energy efficiency are paramount for distributed computing.
Mobile device management and edge computing, including cell phones and IoT devices where power efficiency and compact design are critical. Cost optimized cloud hosting supporting distributed workloads where the cumulative energy savings and simpler design lead to more economical operations. The choice between CISC and RISC depends heavily on the specific workload and performance requirements. While CPUs are versatile generalists, handling a broad range of tasks, modern data centers also heavily rely on another crucial processing unit for specialized workloads. 09:07 Lois: We’ve spoken a lot about CPUs, but our conversation would be incomplete without understanding what a Graphics Processing Unit is and why it’s important. What can you tell us about GPUs, Orlando? Orlando: A GPU or Graphics Processing Unit is distinct from a CPU. While the CPU is a generalist excelling at sequential processing and managing a wide variety of tasks, the GPU is a specialist. It is designed specifically for parallel compute heavy tasks. This means it can perform many calculations simultaneously, making it incredibly efficient for workloads like rendering graphics, scientific simulations, and especially in areas like machine learning and artificial intelligence, where massive parallel computation is required. In the modern data center, GPUs are increasingly vital for accelerating these specialized, data intensive workloads. 10:11 Nikita: Besides the CPU and GPU, there’s another key component that collaborates with these processors to facilitate efficient data access. What role does Random Access Memory play in all of this? Orlando: The core function of RAM is to provide faster access to information in use. Imagine your computer or server needing to retrieve data from a long-term storage device, like a hard drive. This process can be relatively slow. RAM acts as a temporary high-speed buffer. When your CPU or GPU needs data, it first checks RAM. 
If the data is there, it can be accessed almost instantaneously, significantly speeding up operations. This rapid access to frequently used data and programming instructions is what allows applications to run smoothly and systems to respond quickly, making RAM a critical factor in overall data center performance. While RAM provides quick access to active data, it's volatile, meaning data is lost when power is off. That's where persistent data storage comes in, holding the information that needs to remain available even after a system shuts down. 11:26 Nikita: Let’s now talk about operating systems in cloud data centers and how they help everything run smoothly. Orlando, can you give us a quick refresher on what an operating system is, and why it is important for computing devices? Orlando: At its core, an operating system, or OS, is the fundamental software that manages all the hardware and software resources on a computer. Think of it as a central nervous system that allows everything else to function. It performs several critical tasks, including managing memory, deciding which programs get access to memory and when; managing processes, allocating CPU time to different tasks and applications; managing files, organizing data on storage devices; and handling input and output, facilitating communication between the computer and its peripherals, like keyboards, mice, and displays. And perhaps most importantly, it provides the user interface that allows us to interact with the computer. 12:31 Lois: Can you give us a few examples of common operating systems? Orlando: Common operating system examples you are likely familiar with include Microsoft Windows and macOS for personal computers, iOS and Android for mobile devices, and various distributions of Linux, which are incredibly prevalent in servers and increasingly in cloud environments. 12:54 Lois: And how are these operating systems specifically utilized within the demanding environment of cloud data centers?
Orlando: The two dominant operating systems in data centers are Linux and Windows. Linux is further categorized into enterprise distributions, such as Oracle Linux or SUSE Linux Enterprise Server, which offer commercial support and stability, and community distributions, like Ubuntu and CentOS, which are developed and maintained by communities and are often free to use. On the other side, we have Windows, primarily represented by Windows Server, which is Microsoft's server operating system known for its robust features and integration with other Microsoft products. While both Linux and Windows are powerful operating systems, their licensing models can differ significantly, which is a crucial factor to consider when deploying them in a data center environment. 13:55 Nikita: In what way do the licensing models differ? Orlando: When we talk about licensing, the differences between Linux and Windows become quite apparent. For Linux, enterprise distributions come with associated support fees, which can be bundled into the initial cost or priced separately. These fees provide access to professional support and updates. On the other hand, community distributions are typically free of charge, with some providers offering basic community-driven support. Windows Server, in contrast, is a commercial product. Its license cost is generally included in the instance cost when using cloud providers or purchased directly for on-premises deployments. It's also worth noting that some cloud providers offer a bring your own license, or BYOL, program, allowing organizations to use their existing Windows licenses in the cloud, which can sometimes provide cost efficiencies. 14:58 Nikita: Beyond choosing an operating system, are there any other important aspects of data center management? Orlando: Another critical aspect of data center management is how you remotely access and interact with your servers.
Remote access is fundamental for managing servers in a data center, as you are rarely physically sitting in front of them. The two primary methods that we use are SSH, or secure shell, and RDP, remote desktop. Secure shell is widely used for secure command line access for Linux servers. It provides an encrypted connection, allowing you to execute commands, transfer files, and manage your servers securely from a remote location. The remote desktop protocol is predominantly used for graphical remote access to Windows servers. RDP allows you to see and interact with the server's desktop interface, just as if you were sitting directly in front of it, making it ideal for tasks that require a graphical user interface. 16:06 Lois: Thank you so much, Orlando, for shedding light on this topic. Nikita: Yeah, that's a wrap for today! To learn more about what we discussed, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. In our next episode, we’ll take a close look at how data is stored and managed. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 16:28 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
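As a companion to this episode's CISC versus RISC discussion, here is a minimal Python sketch that maps a machine identifier (as reported by the standard library's `platform.machine()`) to the broad instruction-set style described above. The mapping table is an illustrative assumption for a few common server identifiers, not an exhaustive or authoritative list:

```python
import platform

# Illustrative mapping of common machine identifiers to instruction-set styles.
# This table is an assumption for demonstration; real CPUs vary, and modern x86
# cores internally decode CISC instructions into RISC-like micro-operations.
ISA_STYLE = {
    "x86_64": "CISC",   # typical Intel/AMD servers
    "amd64": "CISC",
    "aarch64": "RISC",  # typical ARM-based servers
    "arm64": "RISC",
}

def isa_style(machine: str) -> str:
    """Return the broad instruction-set style for a machine identifier."""
    return ISA_STYLE.get(machine.lower(), "unknown")

if __name__ == "__main__":
    # On most cloud compute instances this prints the host's identifier
    # followed by its broad ISA family.
    print(platform.machine(), "->", isa_style(platform.machine()))
```

Running this on an OCI compute instance would show, for example, whether the shape you provisioned is x86-based or ARM-based, which ties back to the workload trade-offs Orlando outlines.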
Vector AI Supporting Features: What’s New in Oracle Exadata and GoldenGate
04/22/2026
Hosts Lois Houston and Nikita Abraham are joined by Brent Dayley, Senior Principal APEX and Apps Dev Instructor, to explore the latest vector AI supporting features in Oracle Exadata and GoldenGate 23ai. The conversation begins with an overview of Exadata’s capabilities and then shifts to how GoldenGate is powering distributed AI, real-time data streaming, and analytics with advanced microservices architecture. Brent highlights recent GoldenGate enhancements, including distributed vector support, robust monitoring, OCI IAM integration, and support for next-generation AI workloads via real-time vector hubs. Oracle AI Vector Search Deep Dive: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, Anna Hulkower, and the OU Studio Team for helping us create this episode. Please note, this episode was recorded before Oracle AI Database 26ai replaced Oracle Database 23ai. However, all concepts and features discussed remain fully relevant to the latest release. ------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Lois: Hello and welcome to another episode of the Oracle University Podcast! I’m Lois Houston, Director of Communications and Adoption Programs with Customer Success Services, and with me is Nikita Abraham, Team Lead of Editorial Services with Oracle University. Nikita: Hi everyone! Thanks for joining us! In our previous episode of this series, we took a deep dive into Oracle AI Vector Search and Retrieval Augmented Generation, or RAG, showing how unstructured data can be transformed into embeddings to power smarter, more context-aware AI with Oracle Database 23ai. Lois: That’s right, Niki. 
We also explored how the OCI Generative AI service can be used with both Python and PL/SQL, and how AI Vector Search enables relevant information retrieval for large language model prompts. 01:21 Nikita: Today, we’re focusing on the latest supporting features for Oracle AI Vector Search. Joining us once again is Brent Dayley, Senior Principal APEX and Apps Dev Instructor. Welcome back, Brent! To kick things off, could you outline what’s new in Exadata with the 24ai release, particularly for AI storage? Brent: So Exadata has ushered in a new era of AI capabilities with 24ai release. Key features of Exadata system software 24ai include AI Smart Scan, Exadata RDMA Memory, known as XRMEM, Exadata Smart Flash Cache, and on-storage processing. In-Memory Columnar Speed JSON Queries, Transparent Cross-Tier Scans, and caching enhancements, including Columnar Smart Scan at Memory Speed, Exadata Cache Observability, and Automatic KEEP Object Load into Exadata Flash Cache. Now, Exadata system software 24ai is a significant release. It ushers in a new era of AI capabilities for Oracle Database users. Now there have been some infrastructure improvements, including the ability to increase the number of virtual machines on X10M and Secure Boot for KVM Virtual Machines. We have also improved and enhanced high availability and network resilience, including improved RoCE Network Resilience and enhanced RoCE Network Discovery. There have been some enhancements for monitoring and management, including AWR and SQL Monitor Enhancements and JSON API for Management Server. Additionally, security enhancement. SNMP Security. Now, Exadata system software 24ai is supported on Exadata database machines and storage expansion racks from X6 and newer. 03:40 Lois: Those are some fantastic advancements for Exadata users. Now, let’s pivot to distributed AI. Brent, can you walk us through how GoldenGate enables distributed AI? 
Brent: Let's take a look at some common GoldenGate use cases as a refresher. The first use case is multi-active, high availability, and cross-region deployments, spanning on-premises and cloud environments. Another use case includes data offloading and data hub creation in order to support multiple downstream applications. Real-time data stores for Downstream Marts and Analytics. Micro and mini services architecture and an audit history of transactions. Other use cases include migrations and upgrades of databases, including OCI-hosted databases. Another use case would be creating analytic data feeds for various applications, including SaaS and on-premises apps. And finally, stream analytics using application and transaction events captured by GoldenGate Stream Analytics. 05:03 Nikita: We know GoldenGate has long been a staple for enterprise data integration. So Brent, what makes GoldenGate the best choice today, and how has its architecture evolved? Brent: It offers DIY Stream Analytics. GoldenGate does remain the top choice for Enterprise Standard, real-time data streaming. It supports Oracle and third-party databases, vector sources, messaging systems, and NoSQL databases. OCI offers a fully managed pipeline builder for Stream Analytics. This pipeline leverages various OCI services, such as OCI Streaming for real-time event ingestion, OCI Dataflow for stream processing, OCI Big Data for data storage and processing, and OCI Stream Analytics for real-time event processing and analysis. GoldenGate microservices, available since 2017 in Oracle GoldenGate 12.3, is used in over 4,000 deployments in OCI. Benefits of GoldenGate microservices include the ability to employ the same trusted Extract and Replicat processes as the classic architecture. Provides flexible and secure remote administration through a user-friendly web interface or CLI. Deployable on-premises in OCI as a service and in third-party cloud environments. Simplified patching and upgrading process. 
Now the GoldenGate architecture evolution. First, classic architecture that was deprecated in version 19c and desupported in 23ai. Microservices Architecture introduced in version 12.3 and is the recommended architecture. A migration utility is available to upgrade from classic to microservices architecture. 07:12 Are you ready to create and manage AI Agents in Fusion Applications? Check out the Oracle AI Agent Studio for Fusion Applications courses! Start with the Foundations course to build, customize, and deploy AI Agents, and then advance to the Developer Professional certification. Explore hands-on labs and real-world case studies. Visit mylearn.oracle.com for all the details. 07:39 Nikita: Welcome back! It sounds like the latest GoldenGate updates offer new features and integrations. Could you share more about these enhancements? Brent: There are many new features and enhancements in GoldenGate, along with microservices, including a redesigned GUI for enhanced usability. Integration with StatsD and Telegraf for monitoring and metrics. OCI IAM integration for secure access control. JSON Relational Duality for flexible data handling. Next-generation AI with distributed vector support. PDB Extract Capture for efficient data extraction from Oracle Pluggable Databases. DDL notification on Target Tables for schema evolution management. Support for non-Oracle and Big Data technologies. Online DDL and EBR enhancement for improved performance. Data Streams Pub-Sub for asynchronous data dissemination. Async API support for standardized event communication. High-availability clusters for increased resilience. Trail Files Management for efficient data storage. And support for new features in 23ai database. It also includes integrated diagnostics for improved troubleshooting of IE and IR processes. And 30 or more OS and database certifications for wider platform support. @Dbfunction Mapping for custom data transformations. 
And lastly, GoldenGate free recipes for pre-built solutions and best practices. New in GoldenGate, distributed AI processing with vector replication. 09:37 Lois: And what type of use cases does this enable? Brent: Migrating vectors into Oracle Vector Database. Replicating and consolidating vector changes. Implementing multi-cloud, multi-active Oracle vector databases. Streaming text and vector changes to search engines. Key considerations include that embedding models must be consistent across all vector stores for effective similarity searches. 10:09 Lois: Now, many organizations wonder if they can use generative AI with their own business data. Brent, how do enterprises typically approach this? Brent: Organizations are using generative AI typically like this. Building LLMs from scratch. Training models on proprietary data for specific tasks. Fine-tuning LLMs, adapting pre-trained models to a specific domain using private data. And prompt engineering with retrieval augmented generation or RAG. Augmenting prompts with relevant information retrieved from a knowledge base to improve the accuracy and relevance of LLM responses. Now it's possible to create a real-time vector hub for GenAI. This hub can ingest real-time data from various sources, including Oracle and third-party relational databases, vector databases, third-party messaging systems, and NoSQL databases, business updates, documents, events, and alerts. 11:11 Nikita: And how does the vector hub work? Brent: DML and DDL changes, vector changes, and prompt or chat history are used to enrich prompts. And embedding model generates embeddings from the text data. Similarity search is performed on these embeddings to retrieve relevant information from the vector hub. The retrieved information is used to augment the prompt, leading to more accurate and trustworthy answers from the LLM. Now, the benefits of real-time data and generative AI include the ability to ensure answers are based on fresh business data. 
And helps reduce hallucinations in generative AI responses. Actionable AI and machine learning from streaming pipelines allows data from ERP and SaaS applications, databases, event messaging systems, and NoSQL databases to be ingested into streaming pipelines. This data can then be used for AI and machine learning model training, similarity searches, machine learning tasks, external AI, and machine learning integrations, alerts, and data product creation. 12:25 Lois: So if you had to summarize, Brent, why does GoldenGate 23ai stand out for artificial intelligence workloads? Brent: Well, first up, it improves data quality for AI model training and fine-tuning. And secondly, it enhances retrieval augmented generation by providing real-time access to relevant business data, leading to more accurate and trustworthy generative AI responses. Nikita: Thank you, Brent, for sharing your insights and detailing these exciting new features across Oracle’s AI stack. If you’d like to dive deeper into these topics, don’t forget to visit mylearn.oracle.com and look for Oracle AI Vector Search Deep Dive course. Until next time, this is Nikita Abraham… Lois: And Lois Houston, signing off! 13:16 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
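The similarity search Brent describes, comparing embeddings to find the most relevant chunks for a prompt, can be sketched in a few lines of plain Python. This is a toy illustration only: the three-dimensional vectors stand in for real model embeddings, and in practice this matching runs inside the vector database rather than in application code:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query, store, k=2):
    """Return the ids of the k stored chunks most similar to the query vector."""
    ranked = sorted(store.items(),
                    key=lambda item: cosine_similarity(query, item[1]),
                    reverse=True)
    return [chunk_id for chunk_id, _ in ranked[:k]]

# Toy 3-dimensional "embeddings" standing in for model output.
store = {
    "chunk-a": [1.0, 0.0, 0.0],
    "chunk-b": [0.9, 0.1, 0.0],
    "chunk-c": [0.0, 1.0, 0.0],
}

print(top_k([1.0, 0.05, 0.0], store))  # -> ['chunk-a', 'chunk-b']
```

The key consideration Brent raises applies here too: the query vector and the stored vectors must come from the same embedding model, or the distances are meaningless.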
RAG with Oracle AI Vector Search and OCI Generative AI: Python and PL/SQL Approaches
04/14/2026
In this episode of the Oracle University Podcast, hosts Lois Houston and Nikita Abraham are joined by Brent Dayley, Senior Principal APEX & Apps Dev Instructor. Together, they explore how to implement Retrieval Augmented Generation (RAG) using Oracle AI Vector Search and OCI Generative AI. Brent walks listeners through the similarities and differences between building RAG workflows with Python and PL/SQL, offering practical insights into embedding creation, semantic search, and prompt engineering within Oracle’s technology stack. Oracle AI Vector Search Deep Dive: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode. Please note, this episode was recorded before Oracle AI Database 26ai replaced Oracle Database 23ai. However, all concepts and features discussed remain fully relevant to the latest release. -------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Lois: Hello and welcome to another episode of the Oracle University Podcast! I’m Lois Houston, Director of Communications and Adoption Programs with Customer Success Services, and with me is Nikita Abraham, Team Lead for Editorial Services with Oracle University. Nikita: Hi everyone! If you joined us last week, you’ll remember we explored AI Vector Search and how Retrieval Augmented Generation, or RAG, empowers large language models by surfacing relevant business content for smarter, more context-aware answers. Lois: That’s right, Niki. We also looked at how unstructured data gets transformed into embeddings, how these vectors power semantic search, and how Oracle Database 23ai is uniquely designed to support these advanced AI workflows. 
Nikita: Today, we’re building on that foundation with an exciting double feature. We’ll start with an introduction to OCI Generative AI Service and how you can use it with Python, and then dive into Retrieval Augmented Generation with Oracle AI Vector Search and the OCI Gen AI service using PL/SQL. 01:32 Lois: And to walk us through these topics, we’re delighted to welcome back Brent Dayley, Senior Principal APEX & Apps Dev Instructor. Brent, it’s great to have you. So, tell us, how does the OCI Generative AI service use Oracle AI Vector Search? Brent: So OCI Generative AI service allows us to take user questions and augment those using external data from outside of the large language model, allowing us to return augmented content. We would leverage Oracle AI Vector Search in order to retrieve contextually relevant information. And we would create prompts that have some sort of a meaning to help guide the user to input the appropriate types of questions. And this allows us to retrieve the data using a large language model. 02:27 Nikita: What are the typical steps for implementing a RAG workflow using the OCI Generative AI service in Python? Brent: We would load the document. Transform the document to text. And then split the text into chunks. So if you're talking about maybe a PDF that contains chapters, we might split the different chapters into individual chunks. We would then set up Oracle AI Vector Search and insert the embedding vectors. We would build the prompt to query the document. And then we would invoke the chain. So first, you would load the text sources from a file. Open a terminal window and connect to your compute instance. And launch IPython to allow interactive work. IPython lets you run the workflow as a series of steps, entering different commands at each step. You might load the source file called FAQs. Next, you would load the FAQ chunks into the Vector Database. You would create a connection and connect to your database.
And then create the table. And then you would vectorize the text chunks and then encode the text chunks. And then insert the chunks and vectors into the database. Next, you would vectorize the question. Define the SQL script ordering the results by the calculated score. Define the question. Write the retrieval code. And then execute the code. Finally, you would print the results. Then we would create the large language model prompt and call the AI generative LLM. Ensure that our prompt does not exceed the maximum context length of the model. And then define the prompt content. We would then initialize the OCI client and then make the call. 04:47 Here’s some exciting news! Oracle University has training to help your teams unlock Redwood—the next-gen design system for Fusion Cloud Applications. Learn how Redwood improves your user experience and discover how to personalize your Fusion investment using Visual Builder Studio. Whatever your role, visit mylearn.oracle.com and check out these courses today! 05:12 Nikita: Thanks, Brent. That gives us a nice overview of how Python can be leveraged with OCI Generative AI. Now, how would you compare working with Python for building RAG applications to using PL/SQL? Can you walk us through the high-level process for building a RAG solution in this environment? Brent: First, we would want to load the document. Next, we would transform the document into plain text. After that, we would take that text and split it into meaningful chunks. Next, we would go ahead and set up Oracle AI Vector Search and insert the embedding vectors. We would then build the prompt so that we can query the document. And then we would invoke all of those previous steps as our chain. 06:04 Lois: OK, and can we take a closer look at each of these steps? Brent: Step 1, text extraction and preparation. So, let's imagine we have some sort of document that we want to use as the augmented information. We would load that document. 
Next, we would transform the document to text. And we have a function in the DBMS_VECTOR_CHAIN package called UTL_TO_TEXT. And this is used to extract plain text from the loaded documents. Next, we would want to split the text into meaningful chunks. The DBMS_VECTOR_CHAIN package has another function called UTL_TO_CHUNKS, which allows us to divide the extracted text into smaller, more manageable pieces, which we call chunks. 07:02 Nikita: Once we have our text chunks ready, what’s the next step to make our data searchable and useful for the large language model? Brent: Step number 2, we would want to go ahead and use embedding models in order to create our vectors. We would load multiple ONNX models into the database. And the reason we would do this is because models with a greater number of dimensions usually produce higher quality vector embeddings. So you might want to load multiple different ONNX models into the database so that you can generate embeddings from each of the models, and then compare those vector embeddings using those different models. You would create vector embeddings using PL/SQL packages. 07:55 Lois: After embeddings are created, how does the solution find the most relevant content in response to a user’s question? Brent: Step 3, we would then go and do a similarity search so that we can return a response. We would select the text chunks that have the relevant information for the input user question based on vector search. This allows for integrating with Oracle's Gen AI Large Language Model Service to generate responses. The process ensures that the large language model generates contextually appropriate and relevant answers for those users' queries. Now, step 4 is to build the prompt, and I want to stress the importance of large language model prompt engineering. What this will do is to carefully craft input queries or instructions so that we can get more accurate and desirable outputs from the large language model. 
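The prompt-building step Brent stresses here can be sketched in Python. The character budget and chunk text below are invented stand-ins for a real model's context limit and real retrieved chunks:

```python
# Minimal sketch of assembling an LLM prompt from retrieved chunks,
# trimming the context so the prompt stays under a (hypothetical) limit.
MAX_PROMPT_CHARS = 400  # stand-in for the model's real context limit

def build_prompt(question, chunks, limit=MAX_PROMPT_CHARS):
    header = "Answer using only the context below.\n\nContext:\n"
    footer = f"\nQuestion: {question}\nAnswer:"
    budget = limit - len(header) - len(footer)
    context = ""
    for chunk in chunks:  # chunks assumed ordered most-relevant first
        if len(context) + len(chunk) + 1 > budget:
            break
        context += chunk + "\n"
    return header + context + footer

chunks = ["Vectors encode semantic meaning.", "23ai stores vectors in tables."]
prompt = build_prompt("Where are vectors stored?", chunks)
print(len(prompt) <= MAX_PROMPT_CHARS)
```

Keeping the assembled prompt under the model's maximum context length, as Brent notes, is exactly what the budget check above guards.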
This allows developers to guide the LLM's behavior and tailor its responses to specific requirements. This is what we call LLM Prompt Engineering. And it allows us, as I was saying, to craft input queries or instructions so that we can create more accurate and desirable outputs. Next, we would use an example interactive RAG application that uses the Streamlit framework in order to create a user-friendly interface. This interface will allow us to upload documents, pose the question, and receive relevant answers generated by the underlying RAG pipeline within the database. In the final step, we will have an input prompt that asks us to ask a question about the PDF. We will then type in some sort of a question relative to the PDF content. And then we would retrieve the return data based on the input question. 10:11 Nikita: Brent, thank you for walking us through both the Python and PL/SQL approaches for building RAG solutions with Oracle Generative AI. If you’d like to dive deeper into these topics, don’t forget to visit mylearn.oracle.com and look for the Oracle AI Vector Search Deep Dive course. Until next time, this is Nikita Abraham… Lois: And Lois Houston, signing off! 10:33 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Retrieval Augmented Generation (RAG)
04/07/2026
Join hosts Lois Houston and Nikita Abraham as they explore one of the most exciting innovations in enterprise AI: Retrieval Augmented Generation (RAG) powered by Oracle AI Vector Search. In this episode, Senior Principal APEX & Apps Dev Instructor Brent Dayley walks through the fundamentals of RAG, explaining how it combines Oracle Database 23ai, vector embeddings, and large language models to deliver accurate, context-rich answers from both business and unstructured data. Discover the typical RAG workflow, practical setup steps on Oracle Cloud Infrastructure, and how to work with embedding models for real-world applications. Oracle AI Vector Search Deep Dive: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode. Please note, this episode was recorded before Oracle AI Database 26ai replaced Oracle Database 23ai. However, all concepts and features discussed remain fully relevant to the latest release. ---------------------------------------------- Episode Transcript 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead: Editorial Services with Oracle University, and joining me is Lois Houston, Director of Communications and Adoption Programs with Customer Success Services. Lois: Hi everyone! If you’ve been with us this season, you’ll know we’ve already covered a lot about Oracle AI Vector Search. In Episode 1, we introduced the core concepts—how vectors let you search by meaning, not just keywords, and how embedding models translate your unstructured data into a searchable format inside Oracle Database 23ai. 
Nikita: Then, in Episode 2, we took a deeper dive into how these vectors are actually stored and managed. We explored the different types of vector indexes, similarity metrics, and best practices for designing and optimizing your database for semantic search. Lois: Right. Today, we’re shifting gears into one of the most exciting real-world applications: Retrieval Augmented Generation, or RAG. You’ll learn how RAG combines the power of Oracle AI Vector Search with large language models to answer natural language questions using both business and unstructured data. 01:39 Nikita: We’ll walk through the workflow, highlight why Oracle Database is uniquely suited for RAG, and give you the essential steps to get started. Back again is Senior Principal APEX & Apps Dev Instructor Brent Dayley. Hi Brent! Could you explain what RAG is, and why it’s important for working with AI and large language models? Brent: Well, RAG stands for Retrieval Augmented Generation. And this is a technique that allows us to enhance the capabilities of large language models, also known as LLMs, and this provides them with relevant context from external knowledge sources. This will allow the LLMs to generate more accurate, informative, and context-aware responses. Real world applications include answering questions, chatbot development, content summarization, and knowledge discovery. 02:35 Lois: Brent, what makes Oracle Database 23ai a good platform for implementing RAG workflows? Brent: Now, there are some key advantages of using Oracle Database 23ai as a RAG platform. These include native functionality, allowing built-in tools and packages specifically designed for RAG pipeline development. Also, if you are a PL/SQL developer, then this will allow you to develop within a familiar and robust database environment. Also, Oracle has a plethora of security and performance tools. And this ensures enhanced security and optimized performance. 
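As a rough sketch of the retrieve-then-augment idea Brent just described, here is a toy RAG loop in Python. The vectors and the llm() stub are invented for illustration; a real pipeline would use an embedding model and the OCI Generative AI service:

```python
import math

# Toy RAG loop: retrieve the most similar document, then augment the prompt.
# The embeddings and llm() stub are invented for illustration; a real
# pipeline would use an embedding model and the OCI Generative AI service.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

docs = {
    "returns policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
}
question_vec = [0.85, 0.15, 0.05]  # pretend embedding of the user's question

# Retrieval: pick the document whose vector is closest to the question.
best = max(docs, key=lambda d: cosine(question_vec, docs[d]))

def llm(prompt):  # stand-in for a real LLM call
    return f"(answer grounded in: {best})"

answer = llm(f"Context: {best}\nQuestion: what is the returns policy?")
print(best)
```

The augmented prompt carries the retrieved context, which is what lets the model answer from private data rather than from its training set alone.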
03:18 Nikita: What does a typical RAG workflow look like in Oracle Database 23ai? What are the main steps involved? Brent: Now, the primary workflow steps are going to be to generate vector embeddings from your unstructured data. You do this using vector embedding models. And you can generate those embeddings either inside or outside of the database. Next, you need to store the vector embeddings, the unstructured data, and the relational business data, and you can store all of that in the Oracle Database. You might want to also create vector indexes that can allow you to run similarity searches over huge vector spaces with really good performance. Finally, you need to query data with similarity searches. You can use Oracle AI Vector Search native SQL operations to combine similarity with relational searches to retrieve relevant data. And optionally, you can generate a prompt and send it to a large language model for full RAG inference. 04:30 Lois: Can you give us an example of how this workflow operates in practice? Brent: A user's natural language question is encoded as a vector and sent to AI Vector Search. Next, AI vector search finds private content, such as documents, that are stored in the database, and those will match the user's question. The content is then sent to Oracle's GenAI service to help answer the user's question. And then GenAI uses the content plus general knowledge to provide an informed answer back to the user. 05:14 Nikita: What does the overall user experience look like when interacting with RAG? How does Oracle ensure the answers are both accurate and up to date? Brent: In this case, we have a chatbot. This is the interface that we usually use to enable dialogue with the large language model. Now, in order to improve the quality of the answers, we want to search your private business data, and that allows us to pass the most relevant facts back to the LLM. 
Next, we want to format the similarity search results as a prompt and context for the large language model. Now, this will allow us to use up-to-date facts as input to LLMs. And that will minimize the probability of the LLM hallucinating. And those high-quality responses are then returned back to the chatbot. 06:12 Lois: Brent, what does the setup process look like for getting RAG up and running with Oracle AI Vector Search on OCI? Can you take us through the main steps? Brent: First, you will log into OCI. Provide your cloud account name and click Next. There are also interfaces for signing in using a traditional cloud account. And if you're not an Oracle Cloud customer yet, you can also sign up using this page. Next, after signing in, you will create a compute instance. And you will use the Oracle Cloud Infrastructure Console in order to do this. And you will wind up with the user called opc. You'll notice that you're using SSH in order to connect to your compute instance, and you're running a script in order to set up the Oracle Database. After that, you will set up the Python environment, again using SSH to connect as the opc user to your compute instance. 07:22 Do you want to optimize your implementation strategies? Check out the Oracle Fusion Cloud Applications Process Essentials training and certifications for insight into key processes and efficiencies across every phase of your Fusion Cloud Apps journey. Learn more at mylearn.oracle.com. 07:43 Nikita: Welcome back! So far, we’ve seen how Oracle AI Vector Search powers RAG, letting you surface relevant business knowledge for large language models and enhance their answers. At the heart of all this is the process of transforming unstructured data, like text or documents, into mathematical representations called embeddings. Lois: Those embeddings are what make meaningful, semantic search possible. 
But have you wondered how those embeddings actually get created, or what goes on behind the scenes when you choose an embedding model? Nikita: Up next, we’ll take a closer look at embedding models themselves: what they are, how to use them inside Oracle Database 23ai, and how you can experiment with different models to get the results that best fit your business needs. Lois: We’ll walk through importing models, generating embeddings, and even how you can swap out embedding models to compare results. But before we get into the nitty-gritty details, let’s quickly recap embedding models, since we’ve mentioned them in our previous episodes. 08:47 Nikita: Brent, for listeners who might need a refresher, can you explain what embedding models are and why they’re so central to AI Vector Search? Brent: AI Vector Search is based on similarity properties. You can search data by semantic similarity rather than by the actual values. Vector embeddings are created by embedding models to represent the unstructured data. So we have input data. What we'll want to do is to use an embedding model to generate vector embeddings. And then the vector embeddings would be stored inside of a vector column in a table. We would then compare those vectors to each other using a vector distance function. And we would get the relevant content back based on the number of returns that we describe. For instance, maybe we want to bring back the five closest pieces of data compared to the input data. There is a new function, called VECTOR_EMBEDDING, that allows you to generate vector embeddings directly within the database. 10:08 Lois: Can you walk us through the practical steps for using embedding models with Oracle AI Vector Search? Brent: In order to create and set up a table, we might use the Python program called create_schema.py. And that will allow us to create a table. We would ensure that the table was successfully created with the data. 
As an example, I would create a table called MY_DATA. Next, we would use a sentence transformers embedding model in order to vectorize the table. We can use the Python program, vectorize_table_SentenceTransformers.py. We would then query the MY_DATA table in the Oracle Database to verify that the data has been updated. And then we would use sentence transformers in order to perform the similarity search. The Python program is called similarity_search_SentenceTransformers.py. And what that would do is create the table and then perform a similarity search using the sentence transformers. Now what if you decide that you want to maybe change embedding models? Maybe you want to compare the results by using one particular model as compared to a different model. So you can change the embedding model. And in order to do that, you would change the embedding model in both of the programs and re-vectorize the table using the vectorize_table_SentenceTransformers.py program. You would then use the new model with different words, possibly, and then compare and review the results, and then choose which one gets you back the data that you're looking for that is most similar. 12:02 Nikita: Well, that’s a wrap on this episode. A big thank you, Brent, for sharing your expertise with us. Lois: If you want to learn more about the topics we discussed today, visit mylearn.oracle.com and search for the Oracle AI Vector Search Deep Dive course. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 12:25 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Inside Oracle AI Vector Search: Indexes, Metrics, and Best Practices
03/31/2026
Go deeper into Oracle AI Vector Search as hosts Lois Houston and Nikita Abraham, along with Senior Principal APEX & Apps Dev Instructor Brent Dayley, break down how vector indexes, memory requirements, and similarity metrics make fast, powerful semantic search possible in Oracle Database 23ai. Learn about the different types of vector indexes, the VECTOR data type, and how exact and approximate similarity searches work, including best practices for vector management and search performance. Oracle AI Vector Search Fundamentals: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode. *Please note, this episode was recorded before Oracle AI Database 26ai replaced Oracle Database 23ai. However, all concepts and features discussed remain fully relevant to the latest release. ---------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead: Editorial Services with Oracle University, and joining me is Lois Houston, Director of Communications and Adoption Programs with Customer Success Services. Lois: Hi everyone! Thanks for joining us again as we continue our exploration into the exciting world of Oracle AI Vector Search. In today’s episode, we’re taking you inside the technology powering vector search in Oracle Database 23ai. We’ll break down core concepts like vector indices, how vectors are stored and managed, and how you can use similarity metrics to unlock new possibilities with your data. 
01:09 Nikita: We’ll also dig into best practices for handling vectors, everything from memory requirements and table creation to the nuts and bolts of running both exact and approximate similarity searches. Back with us today is Senior Principal APEX & Apps Dev Instructor Brent Dayley. Hi Brent! What exactly are vector indexes? Brent: Now, vector indexes are specialized indexing data structures that can make your queries more efficient against your vectors. They use techniques such as clustering, and partitioning, and neighbor graphs. Now, they greatly reduce the search space, which means that your queries happen quicker. They're also extremely efficient. They do require that you enable the vector pool in the SGA. 02:06 Lois: And are there different types of vector indexes supported? Brent: So, Oracle AI Vector Search supports two types of indexes: the in-memory neighbor graph vector index and the neighbor partition vector index. HNSW is the only type of in-memory neighbor graph vector index that is supported. These are very efficient indexes for vector approximate similarity search. HNSW graphs are structured using principles from small world networks along with layered hierarchical organization. The inverted file flat (IVF) index is the only type of neighbor partition vector index supported. It is a partition-based index which balances high search quality with reasonable speed. In order for you to be able to use vector indexes, you do need to enable the vector pool area. And in order to do that, what you need to do is set the VECTOR_MEMORY_SIZE parameter. You can set it at the container database level. And the PDB inherits it from the CDB. Now bear in mind that the database does have to be bounced, that is, restarted, when you set the vector pool. Other considerations, vector indexes are stored in this pool, and vector metadata is also stored here. You do need to restart the database. So large vector indexes do need lots of RAM, and RAM constrains the vector index size. 
You should use IVF indexes when there is not enough RAM. An IVF index uses both the buffer cache and disk. 04:05 Lois: Now, memory is definitely a key consideration, right? Can you share more about the memory requirements and considerations for working with vectors? Brent: So to remind you, a vector is a numerical representation of text, images, audio, or video that encodes the features or semantic meaning of the data, instead of the actual contents, such as the words or pixels of an image. So the vector is a list of numerical values known as dimensions with a specified format. Now, Oracle does support the int8 format, the float32 format, and the float64 format. The format determines the number of bytes per dimension. For instance, int8 is one byte, float32 is four bytes. 04:56 Nikita: And how do you calculate the size of a vector? Brent: Now, that's going to depend upon the embedding model that you use to create those embeddings. Oracle AI Vector Search supports vectors with up to 65,535 dimensions. As a reminder, vectors are stored in tables and table data is stored on disk. 05:19 Nikita: Let’s talk about working with vectors in tables. Can you walk us through how Oracle Database 23ai supports creating tables with vector columns? Brent: Now, Oracle Database 23ai does have a new VECTOR data type. The new data type was created in order to support vector search. The definition can include the number of dimensions and can include the format. Bear in mind that both of those are optional when you define your column. The possible dimension formats are int8, float32, and float64. Float32 and float64 are IEEE standards, and Oracle Database will automatically cast the value if needed. Let's take a look at some of the declaration examples. Now, if we just do a vector type, then the vectors can have any arbitrary number of dimensions and formats. 
If we declare the vector type as VECTOR(*, *), then that means that vectors can have an arbitrary number of dimensions and formats. VECTOR and VECTOR(*, *) are equivalent. VECTOR with the number of dimensions specified, followed by a comma and then an asterisk, that is, VECTOR(dimensions, *), is equivalent to VECTOR(dimensions). Vectors must all have the specified number of dimensions, or an error will be thrown. Every vector will have its dimensions stored without format modification. And if we declare VECTOR(*, format), with an asterisk, then a comma, and then the dimension element format, that means that vectors can have an arbitrary number of dimensions, but their values will be up-converted or down-converted to the specified dimension element format, either INT8, FLOAT32, or FLOAT64. 07:25 Lois: Are there any operations or configurations that are prohibited with the VECTOR data type? Brent: You cannot define vector columns in or as external tables, index-organized tables, neither as the primary key nor as non-key columns, in clusters or cluster tables, global temporary tables, subpartitioning key, primary key, foreign key, or unique constraint. Additionally, you cannot define vector columns in or as check constraints, default value, modify column, manual segment space managed tablespaces. Only the SYS user can create vectors as BasicFiles in manual segment space managed tablespaces. For continuous query notification queries, or for non-vector indexes such as B-tree, bitmap, reverse key, text, or spatial indexes. Also, bear in mind that Oracle does not support distinct, count distinct, order by, group by, join condition, or comparison operators such as less than, greater than, or equal to with vector columns. 08:46 Have you already nailed the basics of AI? Then it’s time to level up. Explore advanced AI with our OCI AI Professional courses and certifications covering Data Science, Generative AI, and AI Vector Search. Are you ready to take the next step? Head over to mylearn.oracle.com and learn more! 
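The per-format byte sizes Brent gave earlier (int8 is one byte, float32 is four, float64 is eight) make a vector's raw storage easy to estimate. A quick illustrative sketch, ignoring Oracle's per-row storage overhead:

```python
# Rough raw payload of a vector: dimensions * bytes per dimension.
# Ignores Oracle's per-row storage overhead; for intuition only.
BYTES_PER_FORMAT = {"int8": 1, "float32": 4, "float64": 8}

def vector_bytes(dimensions, fmt):
    return dimensions * BYTES_PER_FORMAT[fmt]

# A 1,024-dimension float32 embedding:
print(vector_bytes(1024, "float32"))  # 4096 bytes
```

At the 65,535-dimension maximum, even an int8 vector is about 64 KB per row, which is why RAM constrains in-memory index size.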
09:12 Nikita: Welcome back!! Now, let’s shift gears and discuss vector search itself. How does one create a vector “on the fly” for testing or learning purposes? Brent: Now, the vector constructor is a function that allows us to create vectors without having to store those in a column in a table. These are useful for learning purposes. You use these usually with a smaller number of dimensions. Bear in mind that most embedding models can contain thousands of different dimensions. You get to specify the vector values, and they usually represent two-dimensional like xy coordinates. The dimensions are optional, and the format is optional as well. 10:01 Lois: Once we have vectors, how do we compare them or measure how “close” they are to each other? Brent: Now vector distance uses the function VECTOR_DISTANCE as the main function. This allows you to calculate distances between two vectors and therefore takes two vectors as parameters. Optionally, you can specify a metric. If you do not specify a metric, then the default metric, COSINE, would be used. You can optionally use other shorthand functions, too. These include L1 distance, L2 distance, cosine distance, and inner product. All of these functions also take two vectors as input and return the distance between them. Now the VECTOR_DISTANCE function can be used to perform a similarity search. And bear in mind these caveats. If a similarity search query does not specify a distance metric, then the default cosine metric will be used for both exact and approximate searches. If a similarity search does specify a distance metric in the VECTOR_DISTANCE function, then an exact search with that distance metric is used if it conflicts with the distance metric specified in a vector index. If the two distance metrics are the same, then this will be used for both exact as well as approximate searches. 11:44 Nikita: Can you break down the distance metrics we use in Oracle AI Vector Search? 
Brent: We have Euclidean and Euclidean squared distances. We have cosine similarity, dot product similarity, Manhattan distance, and Hamming similarity. Now let's take a closer look at the first of these metrics, Euclidean and Euclidean squared distances. This gives us the straight-line distance between two vectors. It does use the Pythagorean theorem. And notice that it is sensitive to both the vector size as well as the direction. With Euclidean distances, comparing squared distances is equivalent to comparing distances. So when ordering is more important than the distance values themselves, the squared Euclidean distance is very useful as it is faster to calculate than the Euclidean distance, which avoids the square root calculation. 12:54 Lois: Cosine similarity is a term I hear often. How does it work exactly? Brent: It is one of the most widely used similarity metrics, especially in natural language processing. The smaller the angle means they are more similar. While cosine distance measures how different two vectors are, cosine similarity measures how similar two vectors are. 13:20 Nikita: Dot product similarity comes up a lot, too. What’s its role? Brent: Dot product similarity allows us to multiply the size of each vector by the cosine of their angle. The corresponding geometrical interpretation of this definition is equivalent to multiplying the size of one of the vectors by the size of the projection of the second vector onto the first one or vice versa. Larger means that they are more similar. Smaller means that they are less similar. 13:58 Lois: How does Manhattan distance differ from other metrics, and when is it used? Brent: This is useful for describing uniform grids. You can imagine yourself walking from point A to point B in a city such as Manhattan. Now, since there are buildings in the way, maybe we need to walk down one street and then turn and walk down the next street in order to get to our result. 
As you can imagine, this metric is most useful for vectors describing objects on a uniform grid such as city blocks, power grids, or perhaps a chessboard. Now these are faster than the Euclidean metric. 14:48 Nikita: And how is Hamming similarity different from the others? Brent: This describes where vector dimensions differ. They are binary vectors, and it tells us the number of bits that require change to match. It compares the position of each bit in the sequence. Now, these are usually used in order to detect network errors. 15:17 Nikita: Now that we’ve covered the foundations, how do we actually search for the “closest” vectors in our data? What’s an exact similarity search? Brent: An exact similarity search allows you to calculate the query vector distance to all other vectors. This is also called a flat search or an exact search. This does give you the most accurate results. It gives you perfect search quality. However, you might have potentially long search times. Now, this comparison is done using a particular distance metric. But what is important is the result set of your top closest vectors not the distance between them. Let's take a look at one of the metrics. This one is Euclidean. The Euclidean similarity search retrieves the top k nearest vectors in your space relative to the Euclidean distance metric and a query vector. Now let's take a look at Euclidean squared distance. In the case of Euclidean distances, comparing squared distances is equivalent to comparing distances. So when ordering is more important than the distance values themselves, the Euclidean squared distance is very useful, as it is faster to calculate than the Euclidean distance, avoiding the square-root calculation. 16:46 Lois: How does that compare to approximate searches, which are usually faster, using vector indices? Brent: Approximate similarity search is a type of vector search that uses vector indexes. 
In order to use vector indexes, you have to ensure that you have enabled the vector pool in the SGA. For a vector search to be useful, it needs to be fast and accurate. These types of searches can be more efficient. However, the trade-off is that they can be less accurate. Now, approximate searches use vector indexes, and there are many types of approximate searches that you can perform using vector indexes. Vector indexes can be less accurate, but they can consume fewer resources. Because 100% accuracy cannot be guaranteed by the heuristics, vector index searches use target accuracy. Internally, the algorithms used for both the index creation and index search are doing their best to be as accurate as possible. You do have the option to influence those algorithms by specifying a target accuracy. Let's take a look at vector indexes a little closer. We have two types of vector indexes. We have HNSW indexes, which stands for Hierarchical Navigable Small World, and we have the Inverted File Flat index, or IVF. 18:23 Nikita: And for more complex requirements, how does Oracle handle multi-vector similarity search? Brent: Multi-vector similarity search is usually used for multi-document search. The documents would be split into chunks. The chunks would be embedded individually into vectors. It does use the concept of groupings called partitions. A multi-vector search consists of retrieving the top K vector matches, using the partitions based on the document's characteristics. The ability to score documents based on the similarity of their chunks to a query vector being searched is facilitated in SQL using the partitioned row-limiting clause. Now, the partitioned row-limiting clause extension is a generic extension of the SQL language. It does not have to apply to just vector searches. Multi-vector search with the partitioned row-limiting clause does not use vector indexes. 19:32 Lois: We covered quite a lot today! Thanks for that, Brent! 
If you want to learn more about the topics we discussed today, go to mylearn.oracle.com and search for the Oracle AI Vector Search Fundamentals course. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 19:52 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Introduction to Oracle AI Vector Search
03/24/2026
Explore Oracle AI Vector Search and learn how to find data by meaning, not just keywords, using powerful vector embeddings within Oracle Database 23ai. In this episode, hosts Lois Houston and Nikita Abraham, along with Senior Principal APEX & Apps Dev Instructor Brent Dayley, break down how similarity search works, the new VECTOR data type, and practical steps for implementing secure, AI-powered search across both structured and unstructured data. Oracle AI Vector Search Fundamentals: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode. ---------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Communications and Adoption Programs with Customer Success Services, and with me is Nikita Abraham, Team Lead: Editorial Services with Oracle University. Nikita: Hi everyone! Today, we’re beginning a brand-new season, this time on Oracle AI Vector Search. Whether you’re new to vector searches or you’ve already been experimenting with AI and data, this episode will help you understand why Oracle’s approach is such a game-changer. Lois: To make sure we’re all starting from the same place, here’s a quick overview. Oracle AI Vector Search lets you go beyond traditional database searches. Not only can you find data based on specific attribute values or keywords, but you can also search by meaning, using the semantics of your data, which opens up a whole new world of possibilities. 01:20 Nikita: That’s right, Lois. And guiding us through this episode is Senior Principal APEX & Apps Dev Instructor Brent Dayley. Hi Brent! 
What’s unique about Oracle’s approach to vector search? What are the big benefits? Brent: Now one of the biggest benefits of Oracle AI Vector Search is that semantic search on unstructured data can be combined with relational search on business data, all in one single system. This is very powerful, and also a lot more effective because you don't need to add a specialized vector database. And this eliminates the pain of data fragmentation between multiple systems. It also supports Retrieval Augmented Generation, also known as RAG. Now this is a breakthrough generative AI technique that combines large language models and private business data. And this allows you to deliver responses to natural language questions. RAG provides higher accuracy and avoids having to expose private data by including it in the large language model training data. 02:41 Lois: OK, and can you explain what the new VECTOR data type is? Brent: So, this data type was introduced in Oracle Database 23ai. And it allows you to store vector embeddings alongside other business data. The vector data type provides a foundation for storing vector embeddings. This allows you to keep your business data in the database alongside your unstructured data, use both in your queries, and apply semantic queries to business data. 03:24 Lois: For many of our listeners, “vector embeddings” might be a new term. Can you explain what vector embeddings are? Brent: Vector embeddings are mathematical representations of data points. They assign mathematical representations based on the meaning and context of your unstructured data. You have to generate vector embeddings from your unstructured data either outside or within the Oracle Database. In order to get vector embeddings, you can either use ONNX embedding machine learning models or access third-party REST APIs. Embeddings can be used to represent almost any type of data, including text, audio, or visual data such as pictures.
And they are used in proximity searches. 04:19 Nikita: Now, searching with these embeddings isn’t about looking for exact matches like traditional search, right? This is more about meaning and similarity, even when the words or images differ? Brent, how does similarity search work in this context? Brent: So vector data tends to be unevenly distributed and clustered into groups that are semantically related. Doing a similarity search based on a given query vector is equivalent to retrieving the k nearest vectors to your query vector in your vector space. What this means is that basically you need to find an ordered list of vectors by ranking them, where the first row is the closest or most similar vector to the query vector. The second row in the list would be the second closest vector to the query vector, and so on, depending on your data set. What we need to do is find the relative order of distances. That's really what matters, rather than the actual distance. Now, similarity searches tend to get data from one or more clusters, depending on the value of the query vector and the fetch size. Approximate searches using vector indexes can limit the searches to specific clusters. Exact searches visit vectors across all clusters. 05:51 Lois: Let’s talk about how we actually convert information into these vectors. There are models behind the scenes, right? Kind of like translators between words, images, and numbers. Brent, what embedding models does Oracle support, and how do they handle different data types? Brent: Vector embedding models allow you to assign meaning to a word, a sentence, the pixels in an image, or perhaps audio. What does that actually mean? It allows you to quantify features or dimensions. Most modern vector embeddings use a transformer model. Bear in mind that convolutional neural networks can also be used.
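The "k nearest vectors, ranked by relative distance" idea Brent describes can be sketched in plain Python. This is a toy, exact search over an in-memory set of vectors; the document IDs and vector values are invented for illustration, and Oracle performs the real search inside the database:

```python
import math

def cosine_distance(a, b):
    # 1 minus cosine similarity; smaller means more similar.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def k_nearest(query, vectors, k):
    # Rank stored vectors by distance to the query vector; as noted above,
    # the relative order is what matters, not the absolute distances.
    ranked = sorted(vectors.items(), key=lambda kv: cosine_distance(query, kv[1]))
    return [doc_id for doc_id, _ in ranked[:k]]

docs = {
    "doc_cat": [0.9, 0.1, 0.0],
    "doc_dog": [0.8, 0.2, 0.1],
    "doc_car": [0.0, 0.1, 0.9],
}
print(k_nearest([1.0, 0.0, 0.0], docs, 2))  # -> ['doc_cat', 'doc_dog']
```

An exact search like this visits every vector; the vector indexes mentioned later exist precisely to avoid that full scan on large vector spaces.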
Depending on the type of your data, you can use different pretrained open-source models to create vector embeddings. As an example, for textual data, sentence transformers can transform words, sentences, or paragraphs into vector embeddings. For visual data, you can use a residual network, also known as ResNet, to generate vector embeddings. You can also use a visual spectrogram representation for audio data, which lets audio fall back into the visual data case. Now, these can also be based on your own data set. Each model also determines the number of dimensions for your vectors. As an example, Cohere's embedding model, embed-english-v3.0, has 1,024 dimensions. OpenAI's embedding model, text-embedding-3-large, has 3,072 dimensions. 07:45 Nikita: For organizations ready to put this into practice, there’s the question of how to get the models up and running inside Oracle Database. Can you walk us through how these models are brought into Oracle Database? Brent: Although you can generate vector embeddings outside the Oracle Database using pre-trained open-source embeddings or your own embedding models, you also have the option of doing so within the Oracle Database. In order to use them within the Oracle Database, you need to use models that are compatible with the Open Neural Network Exchange standard, or ONNX (pronounced “onn-ex”). Oracle Database implements an ONNX runtime directly within the database, which allows you to generate vector embeddings directly inside the Oracle Database using SQL. 08:41 AI is transforming every industry. So, it’s no wonder that AI skills are the most sought-after by employers. If you’re ready to dive into AI, check out the OCI AI Foundations training and certification that’s available for free! It’s the perfect starting point to build your AI knowledge. Head over to mylearn.oracle.com to kickstart your AI journey today! 09:06 Nikita: Welcome back! Let’s make this practical.
Imagine I’m setting this up for the first time. What are the big steps? Can you walk us through the end-to-end workflow using Oracle AI Vector Search? Brent: First, generate vector embeddings from your data, either outside the database or within the database. Now, embeddings are a mathematical representation of what your data means. So, what does this long sentence mean, for instance? What are the main keywords out of it? You can generate embeddings not only on your typical string type of data, but also on other types of data, such as pictures or perhaps audio waveforms. Maybe we want to convert text strings to embeddings or convert files into text. And then from text, maybe we can chunk that up into smaller chunks and then generate embeddings on those chunks. Maybe we want to convert files to embeddings, or maybe we want to use embeddings for end-to-end search. Now you have to generate vector embeddings from your unstructured data, as we mentioned, either outside or within the Oracle Database. You can either use the ONNX embedding machine learning models or you can access third-party REST APIs. You can import pretrained models in ONNX format for vector generation within the database. You can download pretrained embedding machine learning models and convert them into the ONNX format if they are not already in that format. Then you can import those models into the Oracle Database and generate vector embeddings from your data within the database. Oracle also allows you to convert pre-trained models to the ONNX format using Oracle Machine Learning for Python. This enables the use of text transformers from different companies. 11:36 Nikita: Once those embeddings are generated, what’s the next step? Brent: Store the vector embeddings. You can create one or more columns of the vector data type in your standard relational data tables.
You can also store those in secondary tables that are related to the primary tables through primary key-foreign key relationships. You can store vector embeddings alongside structured, relational business data in the Oracle Database: the resulting vector embeddings and associated unstructured data live with your relational business data inside the database. 12:17 Lois: And when do vector indexes come into play? Brent: Now you may want to create vector indexes in the event that you have huge vector spaces. This is an optional step, but it is beneficial for running similarity searches over those huge vector spaces. 12:38 Nikita: Now, once all of that is in place, how do users perform similarity searches? Brent: So once you have generated the vector embeddings, stored them, and possibly created the vector indexes, you can then query your data with similarity searches. This allows for native SQL operations and lets you combine similarity searches with relational searches in order to retrieve relevant data. So let's take a look at the combined complete workflow. Step number one, generate the vector embeddings from your unstructured data. Step number two, store the vector embeddings. Step number three, create vector indexes. And step number four, combine similarity and keyword searches. Now there is another optional step. You could generate a prompt and send it to a large language model for a full RAG inference. You can use the similarity search results to generate a prompt and send it to your generative large language model in order to complete your RAG pipeline. 14:07 Lois: Thanks for that detailed walk-through, Brent. To sum up, today we introduced Oracle AI Vector Search, discussed its core concepts, data types, embedding models, and the complete workflow you’ll use to get real value out of your business data, securely and efficiently.
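The chunk-then-embed, store, and search steps of the workflow Brent walks through can be sketched as a toy end-to-end pipeline. Everything here is an invented stand-in: the `embed` function is a tiny bag-of-words counter over a fixed vocabulary rather than a real embedding model, the in-memory list stands in for a table with a vector column, and the optional indexing step is omitted:

```python
import math

VOCAB = ("sales", "united", "states", "database", "vectors")  # toy vocabulary

def embed(text):
    # Invented stand-in for a real embedding model: a normalized
    # bag-of-words count over a tiny fixed vocabulary.
    words = text.lower().split()
    vec = [float(words.count(w)) for w in VOCAB]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def chunk(text, size=6):
    # Step 1 helper: split long text into smaller word chunks before embedding.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

store = []  # step 2 stand-in: the "vector column" kept next to each business row

def insert_row(row_id, text):
    for piece in chunk(text):
        store.append({"id": row_id, "text": piece, "vec": embed(piece)})

def search(question, k=1):
    # Step 4: similarity search -- rank stored chunks by cosine distance.
    q = embed(question)
    def dist(rec):
        return 1.0 - sum(a * b for a, b in zip(q, rec["vec"]))
    return [r["id"] for r in sorted(store, key=dist)[:k]]

insert_row("doc1", "quarterly sales grew in the united states")
insert_row("doc2", "the database stores vectors next to business data")
print(search("sales in the united states"))  # -> ['doc1']
```

The optional RAG step would take the matching chunks returned by `search` and splice them into a prompt for a large language model.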
Nikita: If you want to learn more about the topics we discussed today, go to mylearn.oracle.com and search for the Oracle AI Vector Search Fundamentals course. And if you’re feeling inspired to try this out for yourself, don’t forget to check out the Oracle Database 23ai SQL Workshop for hands-on training. Until next time, this is Nikita Abraham… Lois: And Lois Houston, signing off! 14:49 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
/episode/index/show/oracleuniversitypodcast/id/40596765
Exploring the Oracle Analytics AI Assistant
03/17/2026
Exploring the Oracle Analytics AI Assistant
Join hosts Lois Houston and Nikita Abraham for a special episode of the Oracle University Podcast as they explore the Oracle Analytics AI Assistant. In this episode, you’ll discover how Oracle’s AI-powered conversational tool empowers users of all backgrounds to interact with business data using simple, natural-language questions. Learn how the assistant interprets queries, surfaces visualizations, and delivers actionable insights in seconds, all within Oracle’s secure analytics environment. The episode dives into best practices for data preparation, security and privacy safeguards, how to configure datasets for optimal AI performance, and tips for getting the most relevant results. You’ll also hear how synonyms, column indexing, and user permissions make analytics more accessible and accurate. Visualize Data with the Oracle Analytics AI Assistant: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode. ------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Communications and Adoption Programs with Customer Success Services, and with me is Nikita Abraham, Team Lead: Editorial Services with Oracle University. Nikita: Hi everyone! Today’s episode is on the Oracle Analytics AI Assistant, which is all about making business data accessible and useful, no matter your background. Whether you’re a seasoned pro or just starting out with Oracle Analytics, you’ll want to stick around for this episode because we’re covering everything you need to know to unlock powerful, intuitive, and secure data insights. 
01:06 Lois: That’s right. And full disclosure before we start. We’re trying something a little different for this episode. Instead of a live guest, our expert will be an AI-generated voice sharing insights drawn directly from Oracle’s official course materials. Think of it as getting a taste of what our training courses are like, with a little help from AI. So, with that, let’s kick things off by taking a closer look at what the Oracle Analytics AI Assistant really is. Expert: The Oracle Analytics AI Assistant is an AI-powered tool that provides a conversational interface for data analysis. With this tool, data exploration becomes more intuitive and efficient, helping you access fast, personalized insights. The AI Assistant makes use of Generative AI to process queries, analyze indexed datasets, and create or refine relevant visualizations. It is fully integrated into the Oracle Analytics platform, complementing existing analytic and visualization capabilities. 02:13 Nikita: So, put simply, users have the ability to interact with their data in plain English and receive immediate, visual answers. Expert: Exactly! You can ask natural language questions, such as, "What were my sales in the United States last Tuesday?" or "Show me monthly sales for this year," and the assistant interprets the question, queries the right data, and generates the best visualization. 02:39 Lois: Before we dive deeper, let’s ground ourselves in some of the core concepts behind this technology. Here’s an overview of the AI technologies powering the assistant. Expert: - Artificial Intelligence refers to systems or machines that perform tasks which typically require human intelligence, like reasoning, learning, perception, and language understanding. - Large Language Models or LLMs are AI programs trained on very large data sets. LLMs can generate human-like language and perform complex language tasks, such as writing emails or answering questions. 
- Generative AI is a branch of AI that can create new content, such as text, images, and audio. GenAI includes chatbots and virtual assistants capable of human-like conversations, answering questions, and creating content based on user prompts. - Natural Language Processing or NLP is a subfield of AI, targeting how computers understand and generate human language. 03:42 Lois: Now, let’s look at what happens behind the scenes when someone interacts with the Oracle Analytics AI Assistant. Expert: Here is how the process works. You ask a question or make a request in natural language. Oracle Analytics Cloud identifies the most relevant dataset to answer that question, looking at metadata and attribute values. The platform prepares a prompt for the LLM that includes dataset metadata, column names, synonyms, and your question. The LLM and Natural Language Understanding interpret the question, and then translate it into a structured query. Oracle Analytics validates this query against your data model, and then queries your database. Based on the results, the AI Assistant creates the most appropriate visualization, like a chart, table, or similar format, and provides additional natural language insights. 04:36 Nikita: Security and privacy are top priorities for organizations using tools like this, so let’s get into Oracle’s approach to protecting user data. Expert: At Oracle, your data privacy and security are always top priorities. Specifically, your data is never shared with external model providers or other customers. Pre-trained generative AI models are accessed exclusively within Oracle’s secure cloud infrastructure. No customer data is stored or retained by the AI models after processing, and prompt data is not used to train the models. And finally, all data processed is fully isolated and never combined or visible to anyone outside your organization. 
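The prompt-assembly step in the process above — combining dataset metadata, column names, synonyms, and the user's question before handing off to the LLM — might look roughly like this sketch. The layout, field names, and sample dataset are all invented for illustration; Oracle's actual prompt format is not public:

```python
def build_prompt(question, dataset):
    # Assemble the pieces described above: dataset metadata, column
    # names, synonyms, and the user's natural language question.
    cols = ", ".join(
        f"{c['name']} (synonyms: {', '.join(c['synonyms'])})" if c["synonyms"]
        else c["name"]
        for c in dataset["columns"]
    )
    return (
        f"Dataset: {dataset['name']}\n"
        f"Columns: {cols}\n"
        f"Question: {question}"
    )

# Hypothetical dataset description, not a real Oracle Analytics structure.
sales = {
    "name": "Sales",
    "columns": [
        {"name": "Country", "synonyms": ["nation"]},
        {"name": "Sales", "synonyms": []},
    ],
}
print(build_prompt("What were my sales in the United States last Tuesday?", sales))
```

The LLM's reply would then be validated against the data model and turned into a real query against the database, as described above.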
05:20 Lois: In other words, users always remain in full control of their own data, with no risk of leakage or exposure to outside parties. Nikita: Yeah, this kind of reassurance is absolutely critical for enterprises. 05:32 Lois: That’s right, Niki. Next, let’s cover how to get the most accurate and relevant insights from the AI Assistant by following some best practices for prompting. Expert: To get the best answers, you need to be specific. Include key data points, timeframes, or filters. For example, something like: "Show total sales by country for Q2 2024." Keep questions focused, clear, and concise. Refine your request as needed. If you want different details or a simpler trend line, follow up with something like, "Show by quarter," or "Replace product category with customer segment." Avoid complex prompts, like highly nested or multi-step ones. Ask a series of concise questions instead. When typing column names or field values, pause briefly to let the Assistant suggest the correct field. This increases prompt accuracy. Consider the context of the conversation. Filters and refinements made in previous messages persist, so be aware that context builds over the conversation unless reset. 06:36 Nikita: So, you might start with something like, “Show me sales trends for the last 5 years,” and then get more granular, like, “Include only technology products,” or “Break the results down by product sub-category.” Lois: But sometimes, you may just want to start from scratch, so let’s discuss how you can reset your session with the AI Assistant. Expert: Just select the “Clear Assistant History” option and you can begin a new analysis. 07:03 Nikita: Language capabilities are another important consideration, so here’s an overview of which languages the Assistant currently supports. Expert: Right now, English is the primary language supported. Simple questions in other languages may work, but with less accuracy and fewer features. 
Talk to your Oracle Analytics administrator if you have multilingual needs. 07:26 Lois: Let’s clarify what kinds of questions are beyond the scope of the Assistant. Expert: The Assistant is built for business-oriented, goal-driven queries, not for technical schema questions or database logic. So, don’t ask about dataset structures or technical metadata. But do ask about trends, comparisons, breakdowns, and summaries that relate to your business. 07:53 Do you want to fast-track your learning goals? Join us for live events hosted by Oracle expert instructors! Get certification exam tips, learn about new technology, and ask your questions in real time. Take charge of your learning. Visit mylearn.oracle.com and join a live event today! 08:13 Nikita: Welcome back! Now, let’s discuss why configuring datasets is crucial for working effectively with the AI Assistant. Expert: Effectively indexing and configuring your dataset can make a huge difference when working with the AI Assistant. When you index a dataset, you’re basically creating searchable references. This makes it easier for the AI Assistant to quickly locate the most relevant columns and give accurate responses to natural language questions. It’s important to know that you’ll need to manually select which columns to index. For example, if your users are likely to ask about sales in the United States, you’ll want to make sure that both the “Country” column and the “Sales” column are included when indexing. That way, the Assistant knows exactly where to look when someone asks a question about U.S. sales figures. Another thing to remember is that you can make your analytics more user-friendly by resolving ambiguities and assigning synonyms to your dataset columns. For instance, if there’s a generic “date” column, clarify whether that refers to the “order date” or the “ship date.” It helps to add synonyms as well, so the assistant can handle different ways users might phrase their questions. 
So, while it may take a little extra effort upfront, making your dataset easy to search and understand pays off. Your AI Assistant can respond quickly and accurately, and your users get the answers they’re looking for with less hassle. 09:43 Lois: Next, we’ll outline the steps for configuring and indexing datasets for optimal performance. Expert: First you need to confirm dataset access. You’ll need read/write privileges to enable the AI Assistant and index the dataset. Then, on the Search tab, under “Index Dataset For,” select “Assistant.” Choose your language and, optionally, set an indexing schedule. Carefully pick columns users will likely question, like sales, region, or date. Avoid technical metadata, sensitive data, and high-cardinality columns like Customer IDs. Choose whether to index only column names or names plus data values. Including data values helps with typing suggestions and nuance. Avoid values no one will search on. Importantly, indexed dataset values are never sent to the LLM. They are retrieved from the dataset when visualizations are created. Assign synonyms to attribute names. Oracle Analytics suggests synonyms, but you can also add your own. Finally, save the changes and run indexing to make the dataset searchable by the Assistant. 10:50 Nikita: Now, let’s look at how configuring subject areas can further tailor the experience. Expert: You’ll need to navigate to the Search Index by going through the Console’s Configuration and Settings. Choose your language and indexing schedule. Index folders relevant to business questions; avoid non-relevant or sensitive columns. Select the Index Type: “Index Metadata Only” for high-cardinality columns (like IDs); “Index” for columns and values that users reference. As with datasets, clarify column meanings with user-friendly synonyms. Finalize settings and run the index to prepare your subject area for AI-powered queries. Special care must be taken with date columns. 
Select and clearly identify the main business date so queries don’t become ambiguous. 11:39 Lois: Synonyms play an important role in reducing ambiguity and enhancing results, so let’s review the best practices for setting them up effectively. Expert: If your columns use abbreviations, acronyms, or codes—like “custNo” or “Pname”—it’s a good idea to provide synonyms to clarify what those attributes actually mean. Think about how people typically refer to those columns in everyday language. So instead of just “custNo,” add “Customer Number” as a synonym, and for “Pname,” you would use “Product Name.” If you can, actually renaming the column is usually more effective than just adding a synonym. But if that’s not possible for some reason, a synonym is the next best thing. Dates can be another tricky area. Datasets often have several date columns, like “Ship Date,” “Order Date,” and “Invoice Date.” If a user asks, “Show me revenue by date,” the system has to decide which date column to use, and it may just pick one for you. If you definitely want “Order Date” to be considered the default date, make sure to assign “date” as a synonym specifically for that column. There’s also the situation where different tables have columns with the same name—like “name” from both a Product table and an Employee table. You’ll want to use synonyms for these columns too, to make it clear what each one means. Adding more than one synonym can help as well. For example, if you have a “Yield” column, maybe also specify “revenue” and “income” as synonyms, so users can ask questions however they naturally would. Avoid using reserved words or special characters in your synonyms. This means words like “Count,” “Year,” or anything that’s also a SQL function, plus characters like “@” or special symbols. Also, steer clear of Unicode characters and terms that are analytical functions or date formats. 
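The synonym lookups described here can be pictured as a small case-insensitive map from user phrasing to a canonical column. The column names `custNo`, `Pname`, and the "Order Date" default come from the discussion; the resolution logic itself is an invented illustration, not how Oracle Analytics is implemented:

```python
# Hypothetical synonym map: canonical column -> its synonyms (up to 20 each,
# up to 50 characters long, per the rules of thumb discussed here).
SYNONYMS = {
    "custNo": ["customer number", "customer no"],
    "Pname": ["product name"],
    "Order Date": ["date", "order date"],  # "date" makes this the default date
}

def resolve_column(term):
    # Column names and synonyms are matched case-insensitively.
    t = term.strip().lower()
    for column, syns in SYNONYMS.items():
        if t == column.lower() or t in (s.lower() for s in syns):
            return column
    return None

print(resolve_column("Customer Number"))  # -> custNo
print(resolve_column("DATE"))             # -> Order Date
```

Mapping "date" to "Order Date" is what keeps a question like "show me revenue by date" from ambiguously landing on "Ship Date" or "Invoice Date."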
The whole point is to make your columns easy for business users or anyone else to reference naturally, using the terms they’re most likely to try in a search. And finally, just a few rules of thumb: synonyms can be up to 50 characters long, you can use up to 20 synonyms for each column, and you don’t need to worry about uppercase or lowercase; column names aren’t case sensitive. Besides the basic setup and using synonyms, you can really improve the quality of answers from the AI Assistant (and the LLM it uses) by prepping and enriching your data. It’s easier for the AI to work with words than numbers. Try “binning” numerical values into simple categories people can understand. For instance, instead of showing a long list of sales amounts, split them into groups like “small,” “medium,” and “large.” LLMs also handle words better than blanks. If your data has missing or null values, fill them in with something meaningful, like “Unknown,” “Not specified,” or “Not available.” Skipping this step could cause problems such as reports missing customers because their country is blank, incorrect averages or summaries when missing values are ignored, and forecasting issues when data gaps throw off trends. The AI Assistant might also skip important columns or even generate errors. Ambiguous or duplicate column names confuse both users and the LLM, so make your names clear and consistent. You can use Oracle Analytics’ Transform editor to add even more context. For example, you might extract the day of the week from a date, so you can easily ask, “Show sales for all Fridays in 2026.” By preparing your data with these steps, you help the AI Assistant give you more accurate and insightful answers, making data analysis a lot smoother! 15:27 Nikita: Finally, let’s walk through the process of making the Oracle Analytics AI Assistant accessible to end users directly within their workbooks. Expert: Permissions are controlled through application roles.
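The binning and null-filling steps in the data-prep advice above can be sketched in a few lines. The threshold values, placeholder text, and sample rows are invented for illustration; only the technique (map numbers to word categories, replace blanks with meaningful values) comes from the discussion:

```python
def bin_sales(amount, small=1_000, large=10_000):
    # "Binning": map a raw number to a word category the LLM handles
    # more easily. The thresholds here are invented for illustration.
    if amount is None:
        return "Not available"  # fill gaps instead of leaving nulls
    if amount < small:
        return "small"
    if amount < large:
        return "medium"
    return "large"

def fill_null(value, placeholder="Unknown"):
    # Replace missing values with something meaningful so queries,
    # averages, and forecasts don't silently drop or skew rows.
    return placeholder if value in (None, "") else value

rows = [
    {"country": "US", "sales": 500},
    {"country": None, "sales": 25_000},
]
cleaned = [
    {"country": fill_null(r["country"]), "sales_band": bin_sales(r["sales"])}
    for r in rows
]
print(cleaned)  # -> [{'country': 'US', 'sales_band': 'small'}, {'country': 'Unknown', 'sales_band': 'large'}]
```

A transformation like this would typically be done once during data preparation, for example in the Transform editor mentioned above, rather than at query time.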
Your administrator must create a specific role enabling access to the AI Assistant. To enable consumer access, open your workbook in edit mode and select Present. From the Workbook tab, toggle it on in the Insights Panel section. Choose tabs like Watch Lists and Workbook Assistant. Decide which data sources in your workbook are available to the consumer. Save, and then use Preview to simulate the user experience. Consumers can access the AI Assistant by selecting Auto Insights at the top of the workbook. They can then type in natural language questions, review visualizations, and follow up. Repeat these steps for each workbook you wish to enable. 16:22 Lois: This really puts agile, self-service analytics at everyone’s fingertips, all while keeping data security and integrity front and center. Nikita: And it’s not just plug-and-play. To get the best results, you configure your data, enrich it, apply the right synonyms and permissions, and then your team can ask questions and visualize results just by using natural language. Lois: If you’re ready to kickstart or deepen your journey with the Oracle Analytics AI Assistant, or you want to review the topics we covered in today’s episode in even greater detail, visit mylearn.oracle.com. Nikita: That wraps up this episode. Thanks for spending time listening to us today. Join us next week for another episode of the Oracle University Podcast. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 17:14 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
/episode/index/show/oracleuniversitypodcast/id/40513975
Oracle Database@AWS: Monitoring, Logging, and Best Practices
03/10/2026
Oracle Database@AWS: Monitoring, Logging, and Best Practices
Running Oracle Database@AWS is most effective when you have full visibility and control over your environment. In this episode, hosts Lois Houston and Nikita Abraham are joined by Rashmi Panda, who explains how to monitor performance, track key metrics, and catch issues before they become problems. Later, Samvit Mishra shares key best practices for securing, optimizing, and maintaining a resilient Oracle Database@AWS deployment. Oracle Database@AWS Architect Professional: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Communications and Adoption Programs with Customer Success Services. Lois: Hello again! Last week’s discussion was all about how Oracle Database@AWS stays secure and available. Today, we’re joined by two experts from Oracle University. First, we’ll hear from Rashmi Panda, Senior Principal Database Instructor, who will tell you how to monitor and log Oracle Database@AWS so your environment stays healthy and reliable. Nikita: And then we’re bringing in Samvit Mishra, Senior Manager, CSS OU Cloud Delivery, who will break down the best practices that help you secure and strengthen your Oracle Database@AWS deployment. Let’s start with you, Rashmi. Is there a service that allows you to monitor the different AWS resources in real time? Rashmi: Amazon CloudWatch is the cloud-native AWS monitoring service that can monitor the different AWS resources in real time.
It allows you to collect the resource metrics and create customized dashboards, and even take action when certain criteria are met. Integration of Oracle Database@AWS with Amazon CloudWatch enables monitoring the metrics of the different database resources that are provisioned in Oracle Database@AWS. Amazon CloudWatch collects raw data and processes it to produce near real-time metrics data. Metrics collected for the resources are retained for 15 months. This facilitates analyzing the historical data to understand and compare the performance, trends, and utilization of the database service resources at different time intervals. You can set up alarms that continuously monitor the resource metrics for breach of user-defined thresholds and configure alert notifications or take automated action in response to that metric threshold being reached. 02:19 Lois: What monitoring features stand out the most in Amazon CloudWatch? Rashmi: With Amazon CloudWatch, you can monitor Exadata VM Cluster, container database, and Autonomous Database resources in Oracle Database@AWS. Oracle Database@AWS reports metrics data specific to the resource in the AWS/ODB namespace of Amazon CloudWatch. Metrics can be collected only when the database resource is in an available state in Oracle Database@AWS. Each of the resource types has its own metrics defined in the AWS/ODB namespace, for which the metrics data get collected. 02:54 Nikita: Rashmi, can you take us through a few metrics? Rashmi: At the Exadata database VM Cluster level, there are CPU utilization, memory utilization, swap space, and storage file system utilization metrics. Then there is the load average on the server, the node status, the number of allocated CPUs, et cetera. Then for the container database, there are CPU utilization, storage utilization, block changes, parse count, execute count, and user calls, which are important elements that can provide metrics data on database load.
And for Autonomous Database, metrics data include DB time, CPU utilization, logins, IOPS and IO throughput, RedoSize, parse count, execute count, transaction count, and a few others. 03:32 Nikita: Once you’ve collected these metrics and analyzed database performance, what tools or services can you use to automate responses or handle specific events in your Oracle Database@AWS environment? Rashmi: Then there is Amazon EventBridge, which can monitor events from AWS services and respond automatically with certain actions that may be defined. You can monitor events from Oracle Database@AWS in EventBridge; Oracle Database@AWS sends event data continuously to EventBridge in real time. EventBridge forwards this event data to targets such as AWS Lambda and Amazon Simple Notification Service to perform actions on the occurrence of certain events. Oracle Database@AWS events are structured messages that indicate changes in the life cycle of the database service resource. EventBridge can filter events based on your defined rules, process them, and deliver them to one or more targets. The event bus is the router that receives the events, optionally transforms them, and then delivers the events to the targets. Events from Oracle Database@AWS can be generated by two means: they can be generated from Oracle Database@AWS in AWS, and they can also be generated directly from OCI and received by EventBridge in AWS. You can monitor Exadata Database and Autonomous Database resource events. Ensure that the Exadata infrastructure is in an available state. You can configure how the events are handled for these resources. You can define rules in EventBridge to filter the events of interest and the targets that will receive and process those events. You can filter events based on a pattern depending on the event type, and apply this pattern using the Amazon EventBridge put-rule API, with the default event bus, to route only matching events to targets. 05:13 Lois: And what about events that AWS itself generates?
Rashmi: Events that are generated in AWS for the Oracle Database@AWS resources are delivered to the default event bus of your AWS account. These events include lifecycle changes of the ODB network. The different network events are successful creation or failure of creation of the ODB network, and successful deletion or failure of deletion of the ODB network. When you subscribe to Oracle Database@AWS, an event bus with the prefix aws.partner/odb is created in your AWS account. All events generated in OCI for the Oracle Database@AWS resources are then received on this event bus. When you are creating a filter pattern using the Amazon EventBridge put-rule API, you must set the event bus name to this event bus. Make sure you do not delete this event bus. Events generated in OCI and received on this event bus are extensive. They include events for Oracle Exadata infrastructure, VM Cluster, container, and pluggable databases. 06:14 Lois: If you want to look back at what’s happened in your environment, like who made the changes or accessed resources, what’s the best AWS service for logging and auditing all that activity? Rashmi: Amazon CloudTrail is a logging service in AWS that records the different actions taken by a user, a role, or an AWS service. Oracle Database@AWS is integrated with Amazon CloudTrail. This enables logging of all the different events on Oracle Database@AWS resources. Amazon CloudTrail captures all the API calls to Oracle Database@AWS as events. These API calls include calls from the Oracle Database@AWS console and code calls to Oracle Database@AWS API operations. These log files are delivered to an Amazon S3 bucket that you specify. These logs record the identity of the caller who made the call request to Oracle Database@AWS, the IP address from which the call originated, the time of the call, and some additional details.
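The put-rule filtering Rashmi described can be sketched as an event pattern plus a toy matcher that mimics, in simplified form, how a rule selects events. The "source" and "detail-type" values here are illustrative assumptions, not confirmed field values for the ODB partner event bus.

```python
# Minimal sketch of an EventBridge event pattern for Oracle Database@AWS
# events. The matcher below is a simplified stand-in for EventBridge's
# exact-value matching semantics.

import json

event_pattern = {
    "source": ["aws.odb"],                        # assumed event source
    "detail-type": ["ODB Network State Change"],  # assumed detail type
}

def matches(pattern: dict, event: dict) -> bool:
    # A field matches when the event's value appears in the pattern's list;
    # every pattern field must match for the event to be routed.
    return all(event.get(key) in allowed for key, allowed in pattern.items())

sample_event = {
    "source": "aws.odb",
    "detail-type": "ODB Network State Change",
    "detail": {"status": "AVAILABLE"},
}

# The JSON form of the pattern is what the put-rule API expects:
rule_json = json.dumps(event_pattern)

print(matches(event_pattern, sample_event))           # True
print(matches(event_pattern, {"source": "aws.ec2"}))  # False
```

When registering the rule for OCI-generated events, the event bus name would be set to the aws.partner/odb bus rather than the default bus, as Rashmi noted.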
CloudTrail Event History stores an immutable record of the past 90 days of management events in an AWS region. You can view, search, and download these records from CloudTrail Event History. When you create an AWS account, you automatically get access to CloudTrail Event History. If you would like to retain the logs for a longer period of time beyond 90 days, you can create CloudTrail trails or a CloudTrail Lake event data store. Management events in AWS provide information about management operations that are performed on the resources in your AWS account. Management operations are also called control plane operations. Thus, the control plane operations in Oracle Database@AWS are logged as management events in CloudTrail logs. 07:59 Are you a MyLearn subscriber? If so, you’re automatically a member of the Oracle University Learning Community! Join millions of learners, attend exclusive live events, and connect directly with Oracle subject matter experts. Enjoy the latest news, join challenges, and share your ideas. Don’t miss out! Become an active member today by visiting mylearn.oracle.com. 08:25 Nikita: Welcome back! Samvit, let’s talk best practices. What should teams keep in mind when they’re setting up and securing their Oracle Database@AWS environment? Samvit: Use IAM roles and policies with least privilege to manage Oracle Database@AWS resources. This ensures only authorized users can provision or modify DB resources, reducing the risk of accidental or malicious changes. Oracle Data Safe monitors database activity, user risk, and sensitive data, while AWS CloudTrail records all AWS API calls. Together, they give full visibility across the database and cloud layers. Autonomous Database supports Oracle Database Vault for enforcing separation of duties. Exadata Database Service can integrate with Audit Vault and Database Firewall to prevent privileged users from bypassing security controls.
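Samvit's least-privilege point can be illustrated with a hypothetical IAM policy document: read-only visibility plus an explicit deny on deletion. The "odb:" action names are assumptions based on the service's ODB namespace, not verified action identifiers.

```python
# Sketch of a least-privilege IAM policy for Oracle Database@AWS resources.
# Action names are illustrative; verify them against the IAM action list
# for the service before use.

read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "OdbReadOnly",
            "Effect": "Allow",
            "Action": ["odb:Get*", "odb:List*"],  # assumed action names
            "Resource": "*",
        },
        {
            "Sid": "DenyDelete",        # an explicit Deny overrides any Allow
            "Effect": "Deny",
            "Action": ["odb:Delete*"],  # assumed action name
            "Resource": "*",
        },
    ],
}
```

Scoping deletes behind an explicit deny, and granting them only to a small administrator group via a separate policy, matches the guidance later in the episode about restricting delete permissions.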
Enable multifactor authentication for AWS IAM users managing Oracle Database@AWS. This adds a strong second layer of protection against stolen credentials. Always deploy your Oracle Database@AWS in private subnets without public IPs. Use AWS security groups and NACLs to strictly limit inbound and outbound traffic, allowing access only from trusted applications. Exadata Database Service supports integration with OCI Vault for key lifecycle management. And in the case of Autonomous Database, the transparent data encryption keys are automatically managed, but you can bring your own keys with OCI Vault. Key rotation ensures compliance and reduces the risk of key compromise. Oracle Database@AWS enforces encrypted connections by default. Ensure clients connect with TLS 1.2 or 1.3 to protect data in transit from interception or tampering. Use Oracle Data Safe's user assessment features to detect dormant users or excessive privileges. Disable unused accounts and rightsize permissions to reduce insider threats and security gaps. Export database audit logs to Oracle Data Safe Audit Vault or AWS S3 with object lock for immutability. This prevents log tampering and ensures audit evidence is preserved for compliance. 11:25 Lois: OK, that covers security. Do you have any tips for making sure your Oracle Database@AWS setup is reliable and resilient? Samvit: Start with clear recovery objectives. Define how much downtime and data loss each workload can tolerate. These targets drive your HADR architecture and backup strategy. Implement business continuity measures to deliver maximum uptime for your databases. As a best practice, you must configure a disaster recovery environment for your critical databases so that, in the event of any disaster affecting the primary database, applications can be immediately failed over to the DR environment, ensuring minimal application downtime and zero or minimal data loss.
With Oracle Database@AWS, you can automate the creation and management of a DR environment for your database services using different deployment capabilities. You can opt to configure either cross-availability zone DR in the same region or cross-region DR. Since cross-availability zone DR can only provide site failure protection, you must also configure cross-region DR to protect against regional failure. A DR plan is only effective if tested. Regular failover and switchover drills validate that people, processes, and systems can recover as designed. For Exadata Database, Autonomous Recovery Service provides automated backup validation, recovery guarantees, and protection against accidental data loss or corruption. Oracle-managed backups are fully managed by OCI. When you create your Oracle Exadata Database, you can enable automatic backups by choosing Enable Automatic Backups in the OCI Console. When you do that, you can select Amazon S3, OCI Object Storage, or Autonomous Recovery Service as the backup destination. Don't just take backups. You also need to test them. Regularly restore backups into a non-production environment to validate integrity and recovery time. Plan beyond just the database. Map application and middleware dependencies to ensure end-to-end business resilience. A database failover is useless if dependent apps can't reconnect. 14:09 Nikita: Another area of interest is performance and cost. What practices help teams balance the two? Samvit: Autonomous Database automatically scales CPU and storage as workloads grow. This ensures performance during peaks while avoiding overprovisioning. So you should enable ADB auto-scaling. Monitor CPU, memory, and IO metrics with AWS CloudWatch to rightsize your compute. Scale up or down based on actual utilization instead of static provisioning. Autonomous Database continuously evaluates workloads and creates indexes automatically. This improves query performance without requiring manual tuning.
Use connection pooling in your applications to optimize database connections. Minimizing round-trips reduces latency and improves throughput. Apply AWS tags to database and related resources for cost allocation and chargeback. Tagging also helps with governance and cost visibility. Choose between bring-your-own-license and license-included models for Oracle Database@AWS. The right model depends on your existing license portfolio and cost strategy. Not all workloads need long backup retention. Adjust retention policies based on business needs to balance compliance with storage costs. Exadata Database supports Oracle Multitenant with pluggable databases. Consolidating databases reduces infrastructure footprint and licensing costs. Performance tuning isn't just technical. Align metrics with business KPIs. Correlating DB performance to user experience and revenue impact helps prioritize optimizations. 16:20 Lois: Before we wrap up, Samvit, let’s look at operational efficiency. What advice do you have for making day-to-day operations more efficient? Samvit: Use infrastructure as code tools like Terraform or AWS CloudFormation to automate provisioning. This ensures consistent, repeatable deployments with minimal manual errors. For Autonomous Database, enable auto-start/stop to optimize costs by running databases only when needed. This is ideal for dev/test or seasonal workloads. Exadata Database Service provides fleet maintenance to patch multiple systems consistently. This reduces downtime and simplifies lifecycle management. Integrate AWS CloudWatch for performance monitoring and EventBridge for event-driven automation. This helps detect issues early and trigger automated workflows. Oracle Data Safe provides ready-to-use audit and compliance reports. Use these to streamline governance and reduce the effort of manual compliance tracking. For Autonomous Databases, Performance Hub simplifies monitoring, while Exadata users benefit from AWR and ASH reports.
Together, they give deep insights into performance trends. Automated tagging policies and change management workflows help maintain governance. They ensure resources are tracked properly and changes are auditable. Monitor storage consumption and growth patterns using AWS CloudWatch and the ADB Console. Proactive tracking helps avoid capacity issues and unexpected costs. Send CloudTrail logs into EventBridge to trigger automated incident responses. This shortens response time and builds operational resilience. 18:36 Nikita: Samvit and Rashmi, thanks for spending time with us today. Your insights always help bring the bigger picture into focus. Lois: They definitely do. And if you’d like to go deeper into everything we covered, head over to mylearn.oracle.com and look up the Oracle Database@AWS Architect Professional course. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 19:03 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
How Oracle Database@AWS Stays Secure and Available
03/03/2026
When your business runs on data, even a few seconds of downtime can hurt. That’s why this episode focuses on what keeps Oracle Database@AWS running when real-world problems strike. Hosts Lois Houston and Nikita Abraham are joined by Senior Principal Database Instructor Rashmi Panda, who takes us inside the systems that keep databases resilient through failures, maintenance, and growing workloads. Oracle Database@AWS Architect Professional: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Communications and Adoption with Customer Success Services, and with me is Nikita Abraham, Team Lead: Editorial Services with Oracle University. Nikita: Hi everyone! In our last episode, we explored the security and migration strengths of Oracle Database@AWS. Today, we’re joined once again by Senior Principal Database Instructor Rashmi Panda to look at how the platform keeps your database available and resilient behind the scenes. 01:00 Lois: It’s really great to have you with us, Rashmi. As many of you may know, keeping critical business applications running smoothly is essential for success. And that’s why it’s so important to have deployments that are highly resilient to unexpected failures, whether those failures are hardware-, software-, or network-related. With that in mind, Rashmi, could you tell us about the Oracle technologies that help keep the database available when those kinds of issues occur? 
Rashmi: Databases deployed in Oracle Database@AWS are built on Oracle's foundational high availability architecture. Oracle Real Application Clusters, or Oracle RAC, is an active-active architecture where multiple database instances are concurrently running on separate servers, all accessing the same physical database stored in shared storage to simultaneously process various application workloads. Even though each instance runs on a separate server, they collectively appear as a single unified database to the application. As the workload grows and demands additional computing capacity, new nodes can be added to the cluster to spin up new database instances to support additional computing requirements. This enables you to scale out your database deployments without having to bring down your application and eliminates the need to replace existing servers with high-capacity ones, offering a more cost-effective solution. 02:19 Nikita: That’s really interesting, Rashmi. It sounds like Oracle RAC offers both scalability and resilience for mission-critical applications. But of course, even the most robust systems require regular maintenance to keep them running at their best. So, how does planned maintenance affect performance? Rashmi: Maintenance on databases can take a toll on your application uptime. Database maintenance activities typically include applying database patches or performing updates. Along with the database updates, there may also be updates to the host operating system. These operations often demand significant downtime for the database, which consequently leads to higher application downtime. Oracle Real Application Clusters provides rolling patching and rolling upgrade features, enabling patching and upgrades in a rolling fashion without bringing down the entire cluster, which significantly reduces the application downtime. 03:10 Lois: And what happens when there’s a hardware failure? 
How does Oracle keep things running smoothly in that situation? Rashmi: In the event of an instance or a hardware failure, Oracle RAC ensures automatic service failover. This means that if one of the instances or nodes in the cluster goes down, the system transparently fails over the service to an available instance in the cluster, ensuring minimal disruption to your application. This feature enhances the overall availability and resilience of your database. 03:39 Lois: That sounds like a powerful way to handle unexpected issues. But for businesses that need even greater resilience and can’t afford any downtime, are there other Oracle solutions designed to address those needs? Rashmi: Oracle Exadata is the maximum availability architecture database platform for Oracle databases. The core design principle of Oracle Exadata is redundancy, spanning networking, power supplies, and database and storage servers and their components. This robust architecture ensures protection against the failure of any individual component, effectively guaranteeing continuous database availability. The scale-out architecture of Oracle Exadata allows you to start your deployment with two database servers and three storage servers, with different numbers of CPU cores and different sizes and types of storage to meet current business needs. 04:26 Lois: And if a business suddenly finds demand growing, how does the system handle that? Is it able to keep up with increased needs without disruptions? Rashmi: As the demand increases, the system can be easily expanded by adding more servers, ensuring that the performance and capacity grow with your business requirements. Exadata Database Service deployment in Oracle Database@AWS leverages these foundational technologies to provide high availability for the database system. 
This is achieved by provisioning databases using Oracle Real Application Clusters, hosted on the redundant infrastructure provided by the Oracle Exadata infrastructure platform. This deployment architecture provides the ability to scale compute and storage to growing resource demands without the need for downtime. You can scale up the number of enabled CPUs symmetrically in each node of the cluster when there is a need for higher processing power, or you can scale out the infrastructure by adding more database and storage servers up to the Exadata Infrastructure model limit, which is itself large enough to support any large workload. The Exadata Database Service running on Oracle RAC instances enables any maintenance on individual nodes or patching of the database to be performed with zero or negligible downtime. The rolling feature allows patching one instance at a time, while services seamlessly fail over to an available instance, ensuring that the application experiences little to no disruption during maintenance. Oracle RAC, coupled with Oracle Exadata redundant infrastructure, protects the Database Service from any single point of failure. This fault-tolerant architecture features redundant networking and mirrored disks, enabling automatic failover in the event of a component failure. Additionally, if any node in the cluster fails, there is zero or negligible disruption to the dependent applications. 06:09 Nikita: That’s really impressive, having such strong protection against failures and so little disruption, even during scaling and maintenance. But let’s say a company wants those high-availability benefits in a fully managed environment, so they don’t have to worry about maintaining the infrastructure themselves. Is there an option for that? 
Rashmi: Similar to Oracle Exadata Database Service, Oracle Autonomous Database Service on dedicated infrastructure in Oracle Database@AWS also offers the same feature, with the key difference being that it's a fully managed service. This means customers have zero responsibility for maintaining and managing the Database Service. This, again, uses the same Oracle RAC technology and Oracle Exadata infrastructure to host the Database Service, where most of the activities of the database are fully automated, providing a highly available database with extreme performance capability. It provides an elastic database deployment platform that can scale up storage and CPU online or can be enabled to autoscale storage and compute. Maintenance activities on the database, like database updates, are performed automatically without customer intervention and without the need for downtime, ensuring seamless operation of applications. 07:20 Lois: Can we shift gears a bit, Rashmi? Let’s talk about protecting data and recovering from the unexpected. What Oracle technologies help guard against data loss and support disaster recovery for databases? Rashmi: Oracle Database Autonomous Recovery Service is a centralized backup management solution for Oracle Database services in Oracle Cloud Infrastructure. It automatically takes backups of your Oracle databases and securely stores them in the cloud. It ensures seamless data protection and rapid recovery for your database. It is a fully managed solution that eliminates the need for any manual database backup management, freeing you from the associated overhead. It implements an incremental forever backup strategy, a highly efficient approach where only the changes since the last backup are identified and backed up. This approach drastically reduces the time and storage space needed for backups, as the size of the incremental changes is significantly lower than the full database backup. 
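The incremental-forever strategy Rashmi describes can be illustrated with a toy calculation: after one full backup, each cycle captures only changed blocks, so the cumulative volume moved grows far more slowly than with repeated full backups. The database size and change rate below are invented numbers, purely for illustration.

```python
# Toy comparison of repeated full backups vs. an incremental-forever
# strategy over one week. All figures are assumed, not real workload data.

DB_SIZE_GB = 1000     # assumed database size
DAILY_CHANGE_GB = 20  # assumed daily changed blocks (2% churn)
DAYS = 7

# Taking a full backup every day moves the whole database each time.
full_every_day = DB_SIZE_GB * DAYS

# Incremental forever: one initial full backup, then only changed blocks.
incremental_forever = DB_SIZE_GB + DAILY_CHANGE_GB * (DAYS - 1)

print(full_every_day)       # 7000 GB moved in a week
print(incremental_forever)  # 1120 GB moved in a week
```

At this assumed 2% daily churn, the incremental approach moves roughly a sixth of the data, which is why backups complete faster with less compute and network usage.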
08:17 Nikita: And what’s the benefit of using this backup approach? Rashmi: The benefit of this approach is that your backups are completed faster, with far less compute and network resources, while still guaranteeing the full recoverability of your database in the event of a failure. You can achieve zero data loss with this backup service by enabling the real-time protection option, minimizing data loss by recovering data up to the last sub-second. It is highly recommended to enable this option for mission-critical databases that cannot tolerate any data loss, whether due to a ransomware attack or due to an unplanned outage. The protection policy can retain the protected database backups for a minimum of 14 days to a maximum of 95 days. The recovery service requires and enforces that backups are encrypted. These backups are compressed and encrypted during the backup process. The integrity of the backups is continuously validated without placing a burden on the production database. This ensures that the stored backup data is consistent and recoverable when needed. This protects against malicious user activity or any ransomware attack. With a strict policy-based retention strategy, it prevents modification or deletion of backup data by malicious users. 09:30 Lois: Now, let’s look at the next layer of protection. Rashmi, can you tell us about Oracle Active Data Guard? Rashmi: Oracle Active Data Guard provides highly available data protection and disaster recovery for enterprise Oracle databases. It creates and manages one or more transactionally consistent standby copies of the production database, which is the active primary. The standby database is isolated from the production environment, located miles away in a distant data center, ensuring the standby remains protected and unaffected, even if the primary is impacted by a disaster. 
In the event of a disaster or data corruption occurring at the primary, the standby can take over the role of the new primary, allowing the business to continue its operations uninterrupted. It keeps the standby database in sync with the production database by continuously applying change logs from production. 10:25 Do you want to stay ahead in today’s fast-paced world? Check out our New Features courses for Oracle Fusion Cloud Applications. Each quarter brings new updates and hands-on training to keep your skills sharp and your knowledge current. Head over to mylearn.oracle.com to dive into the latest advancements! 10:45 Nikita: Welcome back! Rashmi, how does Oracle Active Data Guard operate in practice? Rashmi: It uses the knowledge of the Oracle Database block format to continuously check for physical or logical intra-block corruption during redo transport and change apply. With the automatic block repair feature, whenever a corrupt block is detected in the primary or the standby database, it is automatically repaired by transferring a good copy of the block from another destination that holds it. This is handled transparently, without any error being reported to the application. It enables you to offload read-only workloads and backup operations to the standby database, reducing the load on the production database. You can achieve zero data loss at any distance by configuring a special synchronization mechanism known as Far Sync. File systems form the attack surface for ransomware. Since Active Data Guard replicates the data at the memory level, any ransomware attack on the primary database will never be replicated to the standby database. This allows for a safe failover to the standby without any data loss, and shields the database from the effects of the attack. You can enable automatic failover of the primary database to a chosen standby database without any manual intervention by configuring the Data Guard Broker. 
The Data Guard Broker continuously monitors the primary database and automatically performs a failover to the standby when the predefined failover conditions are met. Active Data Guard enables you to perform database maintenance or database software upgrades with almost zero or minimal downtime. 12:18 Lois: And how does disaster recovery work for Exadata Database Service in Oracle Database@AWS? Rashmi: Exadata Database Service is, by design, already protected against local failures through technologies like Oracle RAC and Oracle Exadata. Now, by deploying Exadata Database Service across multiple availability zones in an AWS region, you can ensure that your database services remain resilient to site failures. It leverages Oracle Active Data Guard to create a standby in a separate availability zone so that if the primary availability zone is affected, all application traffic can be routed to the database services in the secondary availability zone, restoring business continuity of the application back to normal. Through continuous validation of the data blocks at both the primary and the standby database, any potential corruption is detected and prevented. This ensures data integrity and protection across the entire database service. By leveraging the zero data loss Autonomous Recovery Service, the database ensures that the backup remains secure and unaffected by ransomware. This enables rapid restoration of clean, uncompromised data in the event of an attack. Periodic patching and upgrades are performed online in a rolling fashion with little to no impact on application uptime, using a combination of Oracle RAC and Oracle Active Data Guard technologies. Resource-intensive workloads that are read-only in nature, like database backups or generating monthly reports, can be offloaded to the standby, reducing the load on the production database. 
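The layered protections covered so far can be summarized as a toy selection function that maps recovery objectives to a deployment, following the idea from earlier in the series that RTO and RPO targets drive the HA/DR design. The tier names and thresholds below are illustrative assumptions, not Oracle guidance.

```python
# Toy sketch: choose a DR deployment from tolerated downtime (RTO) and
# data loss (RPO). Thresholds are invented for illustration only.

def dr_tier(rto_minutes: int, rpo_seconds: int) -> str:
    """Pick a DR deployment based on tolerated downtime and data loss."""
    if rpo_seconds == 0 and rto_minutes <= 5:
        # zero data loss: synchronous redo transport plus automatic failover
        return "cross-region Active Data Guard with automatic failover"
    if rto_minutes <= 60:
        return "cross-availability-zone standby in the same region"
    return "backup-based recovery from Autonomous Recovery Service"

print(dr_tier(5, 0))     # mission-critical tier
print(dr_tier(30, 300))  # cross-AZ standby
```

In practice the choice is layered rather than exclusive: cross-AZ standbys, cross-region DR, and Recovery Service backups complement one another, as the episode goes on to describe.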
In the cross-availability zone DR setup, you have the flexibility to configure Active Data Guard to use either the AWS network or the OCI network for shipping database redo logs to the standby database. Choosing which network to use for the traffic is entirely at the enterprise's discretion. However, both are Oracle maximum availability–compliant and the setup is pretty simple. Whether the traffic uses the OCI network or the AWS network, the respective cloud provider is responsible for ensuring its reliability. You have to take into account the different charges that each cloud provider may have. And you can provision multiple standby databases using the console. Optionally, you may set up a broker manually to enable automatic failover capability. 14:30 Nikita: We just covered cross-availability-zone protection. But what if an entire AWS region goes down? Rashmi: This is where we can provide an additional level of protection by provisioning cross-region disaster recovery for your Exadata Database Service in Oracle Database@AWS. This deployment protects your database against regional disasters. You can provision another DR environment in a different AWS region that supports Oracle Database@AWS. This deployment, together with the cross-availability zone deployment, complements your highly available and protected database service deployment in Oracle Database@AWS. Under the hood, it uses the same Oracle Database technologies, including Oracle Active Data Guard, OCI Autonomous Recovery Service, Oracle Exadata, and Oracle RAC, to provide the same capabilities as the cross-availability zone deployment. Here too, you have the flexibility to configure Oracle Active Data Guard to use either the AWS network or the OCI network for shipping database redo logs to the standby. And for the network traffic options, the feature remains the same, except for a small difference with respect to chargeback. 
When using the OCI network for cross-region deployment, there is no charge for the first 10 TB of data transfer per month. Beyond that, standard OCI charges would apply. When using the AWS network, you may refer to the AWS pricing sheet for cross-region traffic. 15:49 Nikita: Thank you so much, Rashmi, for this insightful episode. Lois: Yes, thank you! And if you want to dive deeper into the topics we covered today, go to mylearn.oracle.com and search for the Oracle Database@AWS Architect Professional course. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 16:13 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Security and Migration with Oracle Database@AWS
02/24/2026
In this episode, hosts Lois Houston and Nikita Abraham are joined by special guests Samvit Mishra and Rashmi Panda for an in-depth discussion on security and migration with Oracle Database@AWS. Samvit shares essential security best practices, compliance guidance, and data protection mechanisms to safeguard Oracle databases in AWS, while Rashmi walks through Oracle’s powerful Zero-Downtime Migration (ZDM) tool, explaining how to achieve seamless, reliable migrations with minimal disruption. Oracle Database@AWS Architect Professional: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Communications and Adoption with Customer Success Services. Lois: Hello again! We’re continuing our discussion on Oracle Database@AWS and in today’s episode, we’re going to talk about the aspects of security and migration with two special guests: Samvit Mishra and Rashmi Panda. Samvit is a Senior Manager and Rashmi is a Senior Principal Database Instructor. 00:59 Nikita: Hi Samvit and Rashmi! Samvit, let’s begin with you. What are the recommended security best practices and data protection mechanisms for Oracle Database@AWS? Samvit: Instead of everyone using the root account, which has full access, we create individual users with AWS IAM Identity Center or the IAM service. And in addition, you must use multi-factor authentication. 
So basically, as an example, you need a password and a temporary code from a virtual MFA app to log in to the console. Always use SSL or TLS to communicate with AWS services. This ensures data in transit is encrypted. Without TLS, sensitive information like credentials or database queries can be intercepted. AWS CloudTrail records every action taken in your AWS account: who did what, when, and from where. This helps with auditing, troubleshooting, and detecting suspicious activity. So you must set up API and user activity logging with AWS CloudTrail. Use AWS encryption solutions along with all default security controls within AWS services. To store and manage the keys used by transparent data encryption, which is enabled by default, Oracle Database@AWS uses OCI Vault. Currently, Oracle Database@AWS doesn't support the AWS Key Management Service. You should also use advanced managed security services such as Amazon Macie, which assists in discovering and securing sensitive data that is stored in Amazon S3. 03:08 Lois: And how does Oracle Database@AWS deliver strong security and compliance? Samvit: Oracle Database@AWS enforces transparent data encryption for all data at rest, ensuring stored information is always protected. Data in transit is secured using SSL and Native Network Encryption, providing end-to-end confidentiality. Oracle Database@AWS also uses OCI Vault for centralized and secure key management. This allows organizations to manage encryption keys with fine-grained control, rotation policies, and audit capabilities to ensure compliance with regulatory standards. At the database level, Oracle Database@AWS supports unified auditing and fine-grained auditing to track user activity and sensitive operations. At the resource level, AWS CloudTrail and the OCI Audit service provide comprehensive visibility into API calls and configuration changes. 
At the database level, security is enforced using database access control lists and Database Firewall to restrict unauthorized connections. At the VPC level, network ACLs and security groups provide layered network isolation and access control. Again, at the database level, Oracle Database@AWS enforces access controls through Database Vault, Virtual Private Database, and row-level security to prevent unauthorized access to sensitive data. And at the resource level, AWS IAM policies, groups, and roles manage user permissions with fine-grained control. 05:27 Lois: Samvit, what steps should users be taking to keep their databases secure? Samvit: Security is not a single feature but a layered approach covering user access, permissions, encryption, patching, and monitoring. The first step is controlling who can access your database and how they connect. At the user level, strong password policies ensure only authorized users can log in. And at the network level, private subnets and network security groups allow you to isolate database traffic and restrict access to trusted applications only. One of the most critical risks is accidental or unauthorized deletion of database resources. To mitigate this, grant delete permissions only to a minimal set of administrators. This reduces the risk of downtime caused by human error or malicious activity. Encryption ensures that even if the data is exposed, it cannot be read. By default, all databases in OCI are encrypted using transparent data encryption. For migrated databases, you must verify encryption is enabled and active. Best practice is to rotate the transparent data encryption master key every 90 days or less to maintain compliance and limit exposure in case of key compromise. Unpatched databases are one of the most common entry points for attackers. Always apply Oracle critical patch updates on schedule. This mitigates known vulnerabilities and ensures your environment remains protected against emerging threats. 
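The 90-day key-rotation practice Samvit describes can be sketched as a simple compliance check. This is a hypothetical helper for illustration only, not part of any Oracle tooling; the 90-day threshold comes from the transcript.

```python
from datetime import date, timedelta

# Policy from the episode: rotate the TDE master key every 90 days or less.
ROTATION_POLICY_DAYS = 90

def rotation_due(last_rotated: date, today: date) -> bool:
    """Return True if the TDE master key is overdue for rotation."""
    return (today - last_rotated) > timedelta(days=ROTATION_POLICY_DAYS)

# A key rotated 120 days ago is out of compliance; one rotated 30 days ago is fine.
print(rotation_due(date(2026, 1, 1), date(2026, 5, 1)))  # True (120 days elapsed)
print(rotation_due(date(2026, 4, 1), date(2026, 5, 1)))  # False (30 days elapsed)
```

In practice the actual rotation is performed through the database's key management commands and OCI Vault, as discussed later in the episode; a check like this would only flag keys approaching the policy limit.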
07:33 Nikita: Beyond what users can do, are there any built-in features or tools from Oracle that really help with database security? Samvit: Beyond the basics, Oracle provides powerful database security tools. Features like data masking allow you to protect sensitive information in non-production environments. Auditing helps you monitor database activity and detect anomalies or unauthorized access. Oracle Data Safe is a managed service that takes database security to the next level. It can assess your database configuration for weaknesses. It can detect risky user accounts and privileges, and identify and classify sensitive data. It can also implement controls such as masking to protect that data. And it can continuously audit user activity to ensure compliance and accountability. Now, transparent data encryption enables you to encrypt sensitive data that you store in tables and tablespaces. It also enables you to encrypt database backups. After the data is encrypted, it is transparently decrypted for authorized users or applications when they access it. You can configure OCI Vault as part of the transparent data encryption implementation. This enables you to centrally manage keystores across your enterprise. So OCI Vault gives centralized control over encryption keys, including key rotation and customer-managed keys. 09:23 Lois: So obviously, lots of companies have to follow strict regulations. How does Oracle Database@AWS help customers with compliance? Samvit: Oracle Database@AWS has achieved a broad and rigorous set of compliance certifications. The service supports SOC 1, SOC 2, and SOC 3, as well as HIPAA for health care data protection. If we talk about SOC 1, that basically covers internal controls for financial statements and reporting. SOC 2 covers internal controls for security, confidentiality, processing integrity, privacy, and availability. SOC 3 covers SOC 2 results tailored for a general audience. 
And HIPAA is a federal law that protects patients' health information and ensures its confidentiality, integrity, and availability. It also holds certifications and attestations such as CSA STAR, C5, and HDS. Now, C5 is a German government standard that verifies cloud providers meet strict security and compliance requirements. CSA STAR attestation is an independent third-party audit of cloud security controls. CSA STAR certification also validates a cloud provider's security posture against CSA's Cloud Controls Matrix. And HDS is a French certification that ensures cloud providers meet stringent requirements for hosting and protecting health care data. Oracle Database@AWS also holds ISO and IEC standards. You can also see PCI DSS, which is basically for payment card security, and HITRUST, which is a high-assurance health care framework. So, these certifications ensure that Oracle Database@AWS not only adheres to best practices in security and privacy, but also provides customers with assurance that their workloads align with globally recognized compliance regimes. 11:47 Nikita: Thank you, Samvit. Now Rashmi, can you walk us through Oracle’s migration solution that helps teams move to OCI Database Services? Rashmi: Oracle Zero-Downtime Migration is a robust and flexible end-to-end database migration solution that can completely automate and streamline the migration of Oracle databases. With bare minimum inputs from you, it can orchestrate and execute the entire migration task, needing virtually no manual effort from you. And the best part is you can use this tool for free to migrate your source Oracle databases to OCI Oracle Database Services faster and more reliably, eliminating the chances of human error. You can migrate individual databases or migrate an entire fleet of databases in parallel. 12:34 Nikita: Ok. For someone planning a migration with ZDM, are there any key points they should keep in mind? 
Rashmi: When migrating using ZDM, your source databases may require minimal downtime of up to 15 minutes or no downtime at all, depending upon the scenario. It is built on the principles of Oracle Maximum Availability Architecture and leverages technologies like Oracle GoldenGate and Oracle Data Guard to achieve high availability and an online migration workflow, using Oracle migration methods like RMAN, Data Pump, and Database Links. Depending on the migration requirement, ZDM provides different migration method options. It can be logical or physical migration in an online or offline mode. Under the hood, it utilizes the different database migration technologies to perform the migration. 13:23 Lois: Can you give us an example of this? Rashmi: When you are migrating a mission-critical production database, you can use the logical online migration method. And when you are migrating a development database, you can simply choose the physical offline migration method. As part of the migration job, you can perform database upgrades or convert your database to the multitenant architecture. ZDM offers greater flexibility and automation in performing the database migration. You can customize the workflow by adding pre- or post-run scripts as part of the workflow. Run prechecks to check for possible failures that may arise during migration and fix them. Audit migration job activity and user actions. Control the execution: pause and resume a job if needed, schedule a job, or terminate a running job. You can even rerun a job from the point of failure, and other such capabilities. 14:13 Lois: And what kind of migration scenarios does ZDM support? Rashmi: The minimum version of your source Oracle Database must be 11.2.0.4 or above. For lower versions, you will have to first upgrade to at least 11.2.0.4. You can migrate Oracle databases that may be of the Standard or Enterprise edition. 
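The minimum-version rule Rashmi mentions can be expressed as a small eligibility check. This is a hypothetical sketch for illustration, not part of ZDM itself; the 11.2.0.4 threshold is from the transcript.

```python
# Minimum source database version supported by ZDM, per the episode: 11.2.0.4.
MIN_VERSION = (11, 2, 0, 4)

def zdm_eligible(version: str) -> bool:
    """Return True if a source database version string meets the ZDM minimum."""
    parts = tuple(int(p) for p in version.split("."))
    # Pad short version strings (e.g. "12.1") with zeros before comparing.
    parts += (0,) * (len(MIN_VERSION) - len(parts))
    return parts >= MIN_VERSION

print(zdm_eligible("19.0.0.0"))  # True
print(zdm_eligible("11.2.0.4"))  # True
print(zdm_eligible("11.2.0.3"))  # False: upgrade to at least 11.2.0.4 first
```

ZDM itself performs this kind of validation as part of its prechecks; a helper like this would only be useful for triaging a fleet of candidate databases before planning a migration.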
ZDM supports migration of Oracle databases, which may be single-instance, RAC One Node, or RAC databases. It can migrate databases on UNIX platforms like Linux, Oracle Solaris, and AIX. For Oracle databases on the AIX and Oracle Solaris platforms, ZDM uses the logical migration method. But if the source platform is Linux, it can use both the physical and logical migration methods. You can use ZDM to migrate databases that may be on premises, or in a third-party cloud, or even within Oracle Cloud Infrastructure. ZDM leverages Oracle technologies like RMAN, Data Pump, Database Links, Data Guard, and Oracle GoldenGate when choosing a specific migration workflow. 15:15 Are you ready to revolutionize the way you work? Discover a wide range of Oracle AI Database courses that help you master the latest AI-powered tools and boost your career prospects. Start learning today at mylearn.oracle.com. 15:35 Nikita: Welcome back! Rashmi, before someone starts using ZDM, is there any prep work they should do or things they need to set up first? Rashmi: Working with ZDM needs a few simple configurations. Zero-Downtime Migration provides a command line interface to run your migration job. First, you have to download the ZDM binary, preferably from My Oracle Support, where you can get the binary with the latest updates. Set up and configure the binary by following the instructions available in the same My Oracle Support note. The host on which ZDM is installed and configured is called the Zero-Downtime Migration service host. The host has to be Oracle Linux version 7 or 8, or it can be RHEL 8. Next is the orchestration step, where connections to the source and target are configured and tested: SSH configuration with the source and target, opening the ports at the respective destinations, creation of the dump destination, and granting the required database privileges. Prepare the response file with parameter values that define the workflow that ZDM should use during the Oracle Database migration. 
You can also customize the migration workflow using the response file. You can plug in run scripts to be executed before or after a specific phase of the migration job. These customizations are called custom plugins with user actions. Your sources may be hosted on premises, on OCI-managed database services, or even in a third-party cloud. They may be Oracle Database Standard or Enterprise edition, on Exadata infrastructure or a standard compute. The target can be of the same type as the source. But additionally, ZDM supports migration to multicloud deployments on Oracle Database@Azure, Oracle Database@Google Cloud, and Oracle Database@AWS. You begin with a migration strategy where you list the different databases that can be migrated, classify the databases, group them, perform pre-migration checks like dependencies, downtime requirements, and versions, and prepare the order of migration, the target migration environment, et cetera. 17:27 Lois: What migration methods and technologies does ZDM rely on to complete the move? Rashmi: There are primarily two types of migration: physical or logical. Physical migration involves copying the database blocks to the target database, whereas logical migration involves copying the logical elements of the database, like metadata and data. Each of these migration methods can be executed when the database is online or offline. In online mode, migration is performed while changes are still in progress in the source database, while in offline mode, all changes to the source database are frozen. For physical offline migration, it uses a backup and restore technique, while with physical online, it creates a physical standby using backup and restore, and then performs a switchover once the standby is in sync with the source database. 
For logical offline migration, it exports and imports database metadata and data into the target database, while logical online migration is a combination of export and import operations, followed by applying incremental updates from the source to the target database. The physical or logical offline migration method is used when the source database or the application can allow some downtime for the migration. The physical or logical online migration approach is ideal for scenarios where any downtime for the source database can badly affect critical applications. The only downtime the application has to tolerate is during the application connection switchover to the migrated database. One other advantage is ZDM can migrate one or a fleet of Oracle databases by executing multiple jobs in parallel, where each job workflow can be customized to a specific database's needs. It can perform physical or logical migration of your Oracle databases. And whether it should be performed online or offline depends on the downtime that can be approved by the business. 19:13 Nikita: Samvit and Rashmi, thanks for joining us today. Lois: Yeah, it’s been great to have you both. If you want to dive deeper into the topics we covered today, go to mylearn.oracle.com and search for the Oracle Database@AWS Architect Professional course. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 19:35 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Getting Started with Oracle Database@AWS
02/17/2026
If you’ve ever wondered how Oracle Database really works inside AWS, this episode will finally turn the lights on. Join Senior Principal OCI Instructor Susan Jang as she explains the two database services available (Exadata Database Service and Autonomous Database), how Oracle and AWS share responsibilities behind the scenes, and which essential tasks still land on your plate after deployment. You’ll discover how automation, scaling, and security actually work, and which model best fits your needs, whether you want hands-off simplicity or deeper control. Oracle Database@AWS Architect Professional: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Communications and Adoption with Customer Success Services, and with me is Nikita Abraham, Team Lead: Editorial Services with Oracle University. Nikita: Hi everyone! In our last episode, we began the discussion on Oracle Database@AWS. Today, we’re diving deeper into the database services that are available in this environment. Susan Jang, our Senior Principal OCI Instructor, joins us once again. 00:56 Lois: Hi Susan! Thanks for being here today. In our last conversation, we compared Oracle Autonomous Database and Exadata Database Service. Can you elaborate on the fundamental differences between these two services? Susan: Now, the primary difference between the services is really the management model. 
The Autonomous Database is fully managed by Oracle, while the Exadata Database Service gives you the flexibility to customize your database environment while still having the infrastructure managed by Oracle. 01:30 Nikita: When it comes to running Oracle Database@AWS, how do Oracle and AWS each chip in? Could you break down what each provider is responsible for in this setup? Susan: Oracle Database@AWS is a collaboration between Oracle and AWS. It allows the customer to deploy and run Oracle Database services, including Oracle Autonomous Database and Oracle Exadata Database Service, directly in AWS data centers. Oracle provides the ability to have the Oracle Exadata Database Service on a dedicated infrastructure. This service delivers the full capabilities of Oracle Exadata Database on Oracle Exadata hardware. It offers high performance and high security for demanding workloads. It has cloud automation, resource scaling, and performance optimization to simplify the management of the service. Oracle Autonomous Database on dedicated Exadata infrastructure provides a fully Autonomous Database on this dedicated infrastructure within AWS. It automates database management tasks, including patching, backups, and tuning, and has built-in AI capabilities for developing AI-powered applications and interacting with data using natural language. Oracle Database@AWS integrates those core database services with various AWS services for a comprehensive, unified experience. AWS provides the ability to have cloud-based object storage, and that would be Amazon S3. You also have other services, such as Amazon CloudWatch. It monitors the database metrics, as well as performance. You also have Amazon Bedrock. It provides a development environment for generative AI applications. And last but not least, amongst the many other services, you also have SageMaker. 
This is a cloud-based platform for the development of machine learning models, a wonderful integration with our AI application development needs. 03:54 Lois: How has the work involved in setting up and managing databases changed over time? Susan: When we take a look at the evolution of how things have changed through the years in our systems, we realize that responsibility has steadily shifted from the customer, or human interaction, to the services. As database technology evolved from the traditional on-premises system, to the Exadata engineered system, and finally to the Autonomous Database, certain tasks previously requiring significant manual intervention have become increasingly automated and optimized. 04:34 Lois: How so? Susan: When we take a look at the more traditional database environment, it requires manual configuration of the hardware, the operating system, and the database software, along with the initial database creation. As we evolve into the Exadata environment, the Exadata Database, specifically the Exadata cloud service, simplifies provisioning through a web-based wizard, making it faster and easier to deploy Oracle Database on optimized hardware. And when we move to an Autonomous environment, it automates the entire provisioning process, allowing users to rapidly deploy mission-critical databases without manual intervention or DBA involvement. So as customers move toward Autonomous Database through Exadata, we have fewer components that the customer needs to manage in the database stack, which gives them more time to focus on the more important parts of the business. With the Exadata Database, it provides co-management of backup, restore, patches and upgrades, monitoring, and tuning. And it allows the administrator the ability to customize the configuration to meet their very specific business needs. With Autonomous Database, it's now fully automated, and greater responsibility is shifted toward the service. 
With Autonomous Database on dedicated infrastructure, Oracle performs that fine-grained tuning for you. 06:15 Nikita: If we narrow it down just to Oracle and AWS for a moment, which parts of the infrastructure or day-to-day ops are handled by each company behind the scenes? Susan: When we take a look at Oracle Database@AWS, it operates under a shared responsibility model, dividing the service responsibilities among AWS, Oracle, and you, the customer. AWS has the data center. Remember, this is where everything is running. With Oracle Database@AWS, the Oracle Database infrastructure may be managed by Oracle and run in OCI, but it is physically located within the AWS regions, the availability zones, and the AWS data centers. The AWS infrastructure, in this case, is AWS's responsibility to secure, including the physical security of the data center, the network infrastructure, and the foundational services like compute, storage, and networking, all within AWS. The next party in the shared responsibility model is Oracle, and that would be the hardware. We provide the hardware. While the hardware may physically reside in the AWS data center, Oracle Cloud Infrastructure's operations team will be the one managing this infrastructure, including software patching, infrastructure updates, and other operations, through a connection to OCI. This means Oracle handles the provisioning, as well as the maintenance, of any of the underlying Exadata infrastructure hardware. Besides the Exadata infrastructure, Oracle is also responsible for managing the hardware environment through the database control plane. So Oracle manages the administration and operations for the Oracle Database@AWS service, which resides in OCI. 
So this includes the capabilities for management, upgrade, and operational features. 08:37 Nikita: And what are the key things that still remain on the customer’s plate? Susan: Whether you are in an Exadata environment or in an Autonomous environment, it is you, the customer, who is responsible for most of the database administration operations, as well as managing the users and the privileges those users have to access the database. No one knows the database, and who should be accessing the data, better than you. You will be responsible for securing the applications and the data of the database, which means you define who has access to it, control the data encryption, and secure the applications that interact with Oracle Database@AWS. 09:29 Lois: Susan, we’ve talked about both Autonomous Database and Exadata Database Service being available on Oracle Database@AWS, but what’s different about how each works in this environment, and why might someone pick one over the other? Susan: Both databases, even though they run on the same Exadata Cloud Infrastructure, can be deployed in both the public cloud as well as the customer data center, which is Oracle Cloud@Customer. The Autonomous Database is a fully managed, completely automated environment. And this provides the capability of having a fully Autonomous Database service running on dedicated Oracle Exadata Infrastructure within your AWS data center. The Exadata is a service that is provided and managed by Oracle and is physically running in the AWS data center, but is designed for mission-critical workloads and includes a RAC environment, Real Application Clusters, offering high performance, availability, and full-featured capability similar to other Exadata environments, such as those running in our customers' data centers. There is a primary difference between the two services. When you take a look at the Exadata, the customer only pays for the compute resources that are used. 
Autoscaling can be used for variable workloads, to automatically scale the compute resources up or down when required. The Autonomous Database also has automatic optimization for data warehousing, transaction processing, as well as JSON workloads. With the Exadata service, the customer, again, also pays for the compute resources that they allocate. But that's the key thing: the customer can initiate the scaling because it's very specific to the workload that is needed. So when you take a look at the two database services, one gives you the ability to let Oracle fully manage it, including the scaling capability. The other, the Exadata, provides you the capability of having the infrastructure that the environment runs on be managed by Oracle while, as a database administrator, you may wish to have a little bit more granular control of not only how you want the database to scale, but also how you wish to customize how the database will be running. 12:10 Nikita: Focusing on Autonomous Database for a moment, what should teams know about how it actually runs within AWS? Susan: The Autonomous Database on Oracle Database@AWS brings the power of Oracle's self-managing, self-securing, and self-repairing database into your AWS environment. It automates many of the traditional, complex, and time-consuming database management tasks, such as provisioning of the database, patching, backing up, scaling, and performance tuning, reducing the need for any manual intervention by the database administrator. Running the Autonomous Database in your AWS region enables low-latency access for your AWS applications and services that are deployed within AWS, thus improving performance and response time. With the Autonomous Database, many of those traditional tasks are now automatically done by Oracle. 
It also supports integration with various AWS services, such as IAM, CloudFormation, CloudWatch for monitoring, and S3 for storage. You can easily migrate existing Exadata workloads, including those running on Oracle RAC, to AWS with minimal or no changes to any of your databases or applications. In addition, there's a really powerful capability and feature of the database called Zero-ETL, and that's zero extract, transform, and load. It's an integration capability with services like Amazon Redshift, enabling near real-time analytics and machine learning on the transactional data that is stored within the Autonomous Database in your AWS environment. So with the Autonomous Database, it checks off many of the boxes for automatic capability: securing, tuning, as well as scaling the database. With the Autonomous Database on the Dedicated Exadata Infrastructure, the Exadata Cloud Infrastructure resource represents the physical system, which can be expanded with storage as well as compute hosts. This now provides the ability to have an isolated zone for the highest protection from other tenants. The data is stored on a dedicated server only for one customer. That would be you. 14:56 Lois: Could you explain the role of the Autonomous VM? What are its primary benefits? Susan: The virtual machines, or as we refer to them, the cluster, include the grid infrastructure and provide private network isolation. This provides you the capability of having custom memory, core, and storage allocation. The Oracle Grid Infrastructure includes the Oracle Clusterware, which manages the cluster as well as the servers, and ensures that the database can fail over to another server in case of any failure. 15:34 Be a part of something big by joining the Oracle University Learning Community! Connect with over 3 million members, including Oracle experts and fellow learners. 
Engage in topical forums, share your knowledge, and celebrate your achievements together. Discover the community today at mylearn.oracle.com. 15:55 Nikita: Welcome back! Susan, what is the Autonomous Container Database? Susan: You need an Autonomous Container Database if you're going to create an Autonomous Database, and you provision it within your Autonomous Exadata VM Cluster. It serves as a container to hold, or to house, one or more Autonomous Databases. This allows multiple Autonomous Databases to coexist in the same infrastructure while still being logically separated. And this allows for the separation of databases based on their intended use. Think of a database for production. Think of a database for development. Think of a database for testing. You may have different database versions within the same infrastructure. This isolation makes it easier for you to meet your SLAs, your Service Level Agreements, any long-term backup requirements you may have, and very specific encryption key needs, and prevents issues in one database from impacting another. So, you have the ability to keep everything isolated and secure while still grouping it in a manner that will meet your business needs. 17:08 Lois: Looking at Exadata Database Service specifically, what are some standout advantages for customers who deploy it on Oracle Database@AWS? Is there anything in particular they should get excited about in terms of performance or integration with AWS? Susan: The Exadata Database Service is running on dedicated Exadata Infrastructure that's deployed within your AWS data center. It delivers the same Exadata service experience and cloud control plane as Oracle Cloud Infrastructure, allowing you to leverage existing skills and processes across your multi-cloud environment. It addresses data residency. And that's a scenario where many of our customers have a need. 
You have a need, because of your security compliance, to have the data local to you. By having the Exadata Database in your Oracle Database@AWS, it is running in your data center. So, this addresses that very important need, data residency, to have it close to you. It also allows for seamless integration with other AWS services and applications. So now you have the capability of a hybrid cloud architecture, leveraging the benefits of both Oracle Exadata and your AWS system. It has built-in high availability with Real Application Clusters, as well as Data Guard, a capability addressing disaster recovery. This also provides the ability for you to scale your compute, as well as your storage and your I/O resources, independently. So as mentioned, with Exadata you have flexibility in how you want your database to be running individually. So just like the Autonomous, the Exadata Database checks off many of the boxes for running mission-critical workloads with high availability, highly redundant hardware and software features, along with extreme performance, scalability, and reliability. This now allows you to run your AI environment, your online transaction processing, and your analytics workloads at any scale on the Exadata Infrastructure running in the Oracle Cloud. And in this case, running in your data center. 19:45 Nikita: If a business suddenly needs more capacity, how does scaling work with Exadata Database Service versus Autonomous Database on Oracle Database@AWS? Susan: So with Exadata scaling, you can scale to meet expected demands. You know that at a certain point you will need more, and you ask it to scale at that point. As an example, say I assign it three compute cores all the time. But there may be demands, think of your end-of-quarter or end-of-year processing, when you may need more. So, you are enabling the compute cores to scale at the time you need it. 
And what's cool is that when it's no longer needed, it will scale back down to the original three cores that you assigned. So, you only pay for the enabled cores. But what's very cool about the Autonomous is that it is real-time scaling. So, with Autonomous, you now have that capability, because since it is self-tuning and self-monitoring, the Autonomous Database actually monitors the workload requirement and scales to match the workload demand. Once the minimum level of the compute is defined and enabled, the automatic scaling is set. Autonomous Database will adjust to the consumption when it's needed, and it will scale back down when it's not. So though the Exadata scaling is pretty cool, scaling up and down on workload demand, the Autonomous is even more powerful. It is real-time scaling based on usage at that moment. Built-in automatic increases meet the workload demands when they spike, and it automatically scales back when it's not needed. A very powerful capability with all of our Oracle databases: the ability, even with traditional, to allow you to define what you may need, with Exadata scaling for peak demands, as well as Autonomous scaling for real-time consumption and scaling when needed. When you look at all of our options, one of the key things to bear in mind is a phrase that we use: performance scales as more servers are added. And what this is really saying is that with Oracle's automated scaling ability, the database basically has the ability to maintain or improve its performance under increased workload by automatically adding computational resources when needed. This process is also known as horizontal scaling. It involves adding more servers, compute instances, to a cluster to share the processing load. And it has that capability automatically. 22:53 Nikita: There’s so much more we can discuss about Oracle Database@AWS, but let’s pause here for today! Thank you so much, Susan, for joining us.
Lois: Yeah, it’s...
/episode/index/show/oracleuniversitypodcast/id/40009620
info_outline
What is Oracle Database@AWS?
02/10/2026
What is Oracle Database@AWS?
In this episode, hosts Lois Houston and Nikita Abraham take you inside how Oracle brings its industry-leading database technology directly to AWS customers. Senior Principal OCI Instructor Susan Jang unpacks what the OCI child site is, how Exadata hardware is deployed inside AWS data centers, and how the ODB network enables secure, low-latency connections so your mission-critical workloads can run seamlessly alongside AWS services. Susan also walks through the differences between Exadata Database Service and Autonomous Database, helping teams choose the right level of control and automation for their cloud databases. Oracle Database@AWS Architect Professional: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Communications and Adoption with Customer Success Services. Lois: Hi there! Last week, we talked about multicloud and the partnerships Oracle has with Microsoft Azure, Google Cloud, and Amazon Web Services. If you missed that episode, do listen to it as it sets the foundation for today’s discussion, which is going to be about Oracle Database@AWS. 00:59 Nikita: That’s right. And we’re joined by Susan Jang, a Senior Principal OCI Instructor. Susan, thanks for being here. To start us off, what is Oracle Database@AWS? 
Susan: Oracle Database@AWS is a service that allows Oracle Exadata infrastructure that is managed by Oracle Cloud Infrastructure, or OCI, to run directly inside an AWS data center. 01:25 Lois: Susan, can you go through the key architecture components and networking relationships involved in this? Susan: The AWS Cloud is the Amazon Web Services cloud computing platform. The AWS region is a distinct, isolated geographic location with multiple physically separated data centers, also known as availability zones. The availability zone is really a physically isolated data center with its own independent power, cooling, and network connectivity. When we speak of the AWS data center, it's a highly secured, specialized physical facility that houses the compute servers, the storage servers, and the networking equipment. The VPC, the Virtual Private Cloud, is a logical, isolated virtual network. The AWS ODB network is a private user-created network that connects the virtual private cloud network of Amazon resources with an Oracle Cloud Infrastructure Exadata system. This is all within an AWS data center. The AWS ODB peering is really an established private network connection between the Virtual Private Cloud, the VPC, and the Oracle Database@AWS network. And that would be the ODB. Within the AWS data center, you have something that you see called the child site. Now, an OCI child site is really a physical data center that is managed by Oracle within the AWS data center. It's a seamless extension of the Oracle Cloud Infrastructure. The site is hosting the Exadata infrastructure that's running the Oracle databases. The Oracle Database@AWS service brings the power as well as the performance of an Oracle Exadata infrastructure that is managed by Oracle Cloud Infrastructure to run directly in an AWS data center.
03:57 Nikita: So essentially, Oracle Database@AWS lets you run your mission-critical Oracle workloads close to your AWS applications, while keeping management simple. Susan, what advantages does Oracle Database@AWS bring to the table? Susan: Oracle Database@AWS offers a powerful and flexible solution for running Oracle workloads natively within AWS. Oracle Database@AWS streamlines the process of moving your existing Oracle Database to AWS, making migration faster as well as easier. You get direct, low-latency connectivity between your applications and Oracle databases, ensuring high performance for your mission-critical workloads. Billing, resource management, and operational tasks are unified, allowing you to manage everything through similar tools with reduced complexity. And finally, Oracle Database@AWS is designed to integrate smoothly with your AWS environment's workloads, making it so much easier to build, deploy, and scale your solutions. 05:15 Lois: You mentioned the OCI child site earlier. What part does it play in how Oracle Database@AWS works? Susan: The OCI child site really gives you the capability to combine the physical proximity and resources of AWS with the logical management and the capability of Oracle Cloud Infrastructure. This integrated approach enables you to run and manage your Oracle databases seamlessly in your AWS environment while still leveraging the power of OCI, our Oracle Cloud Infrastructure. 06:03 Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure for subscribers? Whether you’re interested in multicloud, databases, networking, security, AI, or machine learning, there’s something for everyone. So, what are you waiting for? Pick your topic and get started by visiting mylearn.oracle.com. 06:29 Nikita: Welcome back! Susan, I’m curious about the Exadata infrastructure inside AWS. What does that setup look like?
Susan: The Exadata Infrastructure consists of physical database servers, as well as storage servers. The database and the storage servers are interconnected using a high-speed, low-latency network fabric, ensuring optimal performance and reliable data transfer. Each of the database servers runs one or more Virtual Machines, or VMs, as we refer to them, providing flexible compute resources for different workloads. You can create, as well as manage, your VM clusters in this infrastructure using various methods: your AWS console, the Command-Line Interface, CLI, or the Application Program Interface, your API, giving you several options for automating, as well as integrating with, your existing tools. When you're creating your Exadata Infrastructure, there are a few things you need to define and set up. You need to define the total number of your database servers, the total number of your storage servers, the model of your Exadata system, as well as the availability zone where all these resources will be deployed. This architecture delivers high performance, resiliency, and flexible management capabilities for running your Oracle Database on AWS. 08:18 Lois: Susan, can you explain the network architecture for Oracle Database deployments on AWS? Susan: The ODB network is an isolated network within AWS that is designed specifically for Exadata deployments. It includes both the client, as well as the backup, subnet, which are essential for secure and efficient database operations. When you create your Exadata Infrastructure, you need to specify the ODB network, as you need the connectivity. This network is mapped directly to the corresponding network in the OCI child site. This will enable seamless communication between AWS as well as the Oracle Cloud Infrastructure. The ODB network requires two separate CIDR ranges.
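As a minimal sketch of the two CIDR ranges Susan mentions (the address values below are hypothetical examples chosen for illustration, not values from the service), Python's standard ipaddress module can express and sanity-check such a pair:

```python
import ipaddress

# Hypothetical CIDR ranges, for illustration only; the actual ranges are
# whatever you choose when creating the ODB network in your own environment.
client_range = ipaddress.ip_network("10.10.0.0/24")  # client subnet: day-to-day database access
backup_range = ipaddress.ip_network("10.10.1.0/24")  # backup subnet: backup and recovery traffic

# The two ranges back two separate subnets, so they must not overlap.
assert not client_range.overlaps(backup_range)

print(client_range.num_addresses)  # a /24 holds 256 addresses
```

Picking disjoint ranges up front avoids having to re-plan addressing later, since each subnet carries a distinct class of traffic.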
And in addition, the client subnet is used for the Exadata VM cluster, providing connectivity for database operations. Well, you do also have another subnet. And that subnet is the backup subnet. And it's used to manage database backups of those VM clusters, ensuring not only data protection, but also data recovery. Within your AWS region and availability zone, the ODB network contains these dedicated client, as well as backup, subnets. It basically isolates the Exadata traffic for both the day-to-day access, and that would be for the client subnet, as well as the backup operations, and that would be for the backup subnet. This network design supports secure, high-performance connectivity and reliable backup management for the Oracle Database deployments running on AWS. 10:23 Nikita: Since we're on the topic of networking, can you tell us about ODB peering within the Oracle Database architecture? Susan: The ODB peering establishes a secure private connection between your AWS Virtual Private Cloud, your VPC, and the Oracle Database network, the ODB network, that contains your Exadata Infrastructure. This connection makes it possible for application servers that are running in your VPC, such as your Amazon EC2 instances, to access your Oracle databases that are being hosted on Exadata within your ODB network. You specify the ODB network when you set up your infrastructure, specifically the Exadata Infrastructure. This network includes dedicated client, as well as backup, subnets for efficient and secure connectivity. If you wish to enable multiple VPCs to connect to the same ODB network and access the Oracle Database@AWS resources, you can leverage AWS Transit Gateways or even an AWS Cloud WAN for scalable and centralized connectivity. The virtual private cloud contains your application server, and that's securely paired with the Oracle Database network, creating a seamless, high-performance path for your application to interact with your Oracle Database.
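The pairing described here connects two private networks, and a common gotcha with any private peering is overlapping address space. As an illustration only (the CIDR values are hypothetical, and the non-overlap rule shown is the general constraint on private network peering rather than a detail quoted from this episode), a quick overlap check with Python's standard ipaddress module:

```python
import ipaddress

def can_peer(vpc_cidr: str, odb_cidr: str) -> bool:
    """Private peering generally needs address ranges that do not overlap."""
    return not ipaddress.ip_network(vpc_cidr).overlaps(ipaddress.ip_network(odb_cidr))

# Hypothetical ranges: the application VPC (EC2 side) versus the ODB network.
print(can_peer("10.0.0.0/16", "10.1.0.0/16"))  # True: disjoint ranges
print(can_peer("10.0.0.0/16", "10.0.2.0/24"))  # False: the /24 sits inside the /16
```

Running a check like this before requesting the peering connection catches address collisions early, when they are still cheap to fix.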
ODB peering simplifies the connectivity between the AWS application environments and the Oracle Exadata Infrastructure, thus supporting flexible, high-performance, and secure database access. 12:23 Lois: Now, before we close, can you compare two key databases that are available with Oracle Database@AWS: Oracle Exadata Database Service and Oracle Autonomous Database Service? Susan: The Exadata Database Service offers a dedicated infrastructure with operational monitoring that is handled by you, the customer. In contrast, the Autonomous Database is fully managed by Oracle, taking care of all the operational monitoring. Exadata provides very high scalability, though resources, such as disk and compute, must be sized manually, whereas the Autonomous Database offers high scalability through automatic elastic scaling. When we speak of performance, both services deliver strong results. Exadata offers ultra-low latency and Exadata-level performance, while the Autonomous Database delivers optimal performance with automation. Both services provide high migration capability. Exadata offers full compatibility, and the Autonomous Database includes a robust set of migration tools. When it comes to management, Exadata requires manual management and administration. And that's really in a way to provide you the ability to customize it in the manner you desire, making sure it meets your very specific business needs, especially your database needs. In contrast, the Autonomous Database is fully managed by Oracle, including automated administration tasks and optimal self-tuning features to further reduce any management overhead. When we speak of the feature sets, the Exadata delivers a full suite of Oracle features, including Real Application Clusters, or RAC, whereas the Autonomous offers a complete feature set, but one specifically designed for optimized Autonomous operations.
Finally, when we speak of integration, both of these services integrate seamlessly with AWS services, such as your EC2, your network, the VPC, your policies, the Identity and Access Management, your IAM, the monitoring with your CloudWatch, and of course, your storage, your S3, ensuring a consistent experience within your AWS ecosystem. 15:21 Nikita: So, you could say that the Exadata Database Service is better for customers who want dedicated infrastructure with granular control, while the Autonomous Database is built for customers who want a fully automated experience. Thank you, Susan, for taking the time to talk to us about Oracle Database@AWS. Lois: That’s all we have for today. If you want to learn more about the topics we discussed, head over to mylearn.oracle.com and search for the Oracle Database@AWS Architect Professional course. In our next episode, we’ll find out how to get started with the Oracle Database@AWS service. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 16:06 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
/episode/index/show/oracleuniversitypodcast/id/39997670
info_outline
Oracle Multicloud Made Easy
02/03/2026
Oracle Multicloud Made Easy
Multicloud is changing the way modern teams run their workloads: with real choice and real control. In this episode, hosts Lois Houston and Nikita Abraham welcome Senior Principal OCI Instructor Sergio Castro, who explains how Oracle has partnered with Microsoft Azure, Google Cloud, and AWS to bring Oracle Database directly inside their data centers, unlocking sub-millisecond latency and new levels of flexibility. They discuss how organizations can seamlessly migrate from on-premises or between clouds with minimal disruption, take advantage of best-in-class cloud services, and enhance business continuity. Oracle Database@AWS Architect Professional: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Communications and Adoption with Customer Success Services, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! We’re kicking off a new season of the podcast today, this time on Oracle Database@AWS. But before we fully dive into that, we’ve got Sergio Castro with us to introduce multicloud and talk about some of its use cases. Sergio, who you may have heard on the podcast before, is a Senior Principal OCI Instructor with Oracle University. 01:02 Lois: Hi Sergio! Thanks for joining us today. We’ve spoken a lot about multicloud before, but we couldn’t possibly discuss Oracle Database@AWS without another quick intro to multicloud. So, for anyone who doesn’t already know, what is multicloud? 
And could you also talk about what Oracle is doing in this space? Sergio: It is the use of several Cloud providers to deliver an IT service. Basically, a multi-cloud strategy allows organizations to distribute their workloads across multiple Cloud platforms and providers. This helps aid flexibility when picking the right tool for each job. Basically, by selecting the best Cloud Service, IT architects can take advantage of each provider's strengths, including custom hardware, software, and service capabilities. And Oracle is a pioneer in multi-cloud. We have partnerships with Azure, Google Cloud, AWS, and we've been doing multi-cloud since 2019, including Oracle Interconnect for Azure and Oracle Interconnect for Google Cloud. Our multi-cloud products are the Oracle Database service at Azure, at Google Cloud, and at AWS. Here we have our database inside the data centers of these Cloud Service providers. And multi-cloud can be complemented by resources that you have on-premises, providing you with a hybrid Cloud model. And our public Cloud offerings are not limited to the commercial realm. Multi-cloud is beginning to be available also in the government realm. You can now find Oracle Interconnect for Azure in the US government realm. We also have government realm offerings in the UK and in the European Union. And of course, dedicated Cloud. If you're going to be involving on-premises, you can also have all the Oracle Cloud Infrastructure resources behind your firewall, behind your routers, with dedicated Cloud. So the offers from Oracle Cloud Infrastructure are really exceptional. It offers you great flexibility and choice. And the choice is yours. You select the platform for your Oracle Cloud solutions. 03:39 Nikita: You’ve already mentioned a few of them, but could you talk about the various benefits of multicloud? Sergio: A solid multi-cloud approach enables organizations to leverage the unique strengths and offerings of various Cloud service providers.
By not being limited to a single vendor's capabilities or policies, businesses can adapt quickly to changing environments, deploy workloads where they fit best, and rapidly integrate new solutions as market demands evolve. Relying on a single Cloud vendor can make it challenging and costly to migrate workloads or switch providers if business needs change. Multi-cloud strategies mitigate this risk by distributing applications and data across multiple platforms, making technology transitions smoother and giving organizations greater bargaining power. Now, diminishing single points of failure at the Cloud service provider level is great, because distributing systems and data across multiple clouds can definitely reduce dependence on a single provider or region. This increased geographic diversity improves resilience and provides more robust backup and recovery options, helping to ensure business continuity in the event of a disaster or even an outage. With access to a range of pricing models and service levels from different providers, organizations can allocate workloads based on cost effectiveness. This best-fit approach encourages cost savings by enabling the selection of the most economical provider for each workload. And this facilitates continuous cost optimization efforts. For example, OCI provides significantly lower data egress charges, this in comparison to our competitors. Multicloud management empowers organizations to place their workloads in the environments where they perform the best. By distributing workloads based on latency, processing power, or data proximity, businesses can realize performance improvements and achieve higher availability for their critical services. Now regarding best of breed, each Cloud provider brings unique innovations and specialized services to the market. With a multi-cloud approach, organizations can tailor solutions to meet specific business needs.
Operating across multiple Cloud platforms means access to a wider array of data centers worldwide. This extended reach supports expansion into new markets, improves local performance for users, and helps satisfy data sovereignty requirements in diverse jurisdictions. And speaking about jurisdictions, this flexibility helps meet industry standards and regional data protection laws more effectively. 06:50 Nikita: You mentioned that Oracle’s multicloud journey started in 2019 with Azure. What was that early phase like? Sergio: The Oracle Cloud Infrastructure multi-cloud offering started with the Oracle Interconnect for Microsoft Azure, where we connect FastConnect, our virtual circuit, to the equivalent ExpressRoute, the virtual circuit of Microsoft Azure. Basically, FastConnect is typically used for extending the OCI services into on-premises. In this case, it is extending these services into another Cloud Service provider, Microsoft Azure, for various applications. 07:29 Lois: And then we moved on to Oracle Database Service for Azure, right? Sergio: It's very similar to what we have right now, the Oracle Database service at Azure, except that back then, the interface was on OCI. Basically, on OCI, we had a console that resembled Azure, but all the services were still running on OCI. Now, the difference with the Oracle Database service at Azure is that we extended Oracle Cloud Infrastructure into the Azure data centers. So Oracle Database at Azure is a child site in the Microsoft Azure data centers. Basically, we are placing our hardware in Azure data centers. And this gives us very good latency, sub-one-millisecond latency. 08:24 Lois: What about Oracle’s multicloud services with Google and Amazon Web Services? Sergio: Oracle Interconnect for Google Cloud and Oracle Database at Google Cloud are both available. We have a service called Oracle Interconnect for Google Cloud, similar to the Azure one.
And we also have the Oracle Database inside the Google Cloud data centers operating as a child site. And back in 2024, during Oracle CloudWorld, we announced Oracle Database@AWS. This product is now available in two AWS regions. In a similar way, we have the Oracle Database inside the AWS data center with sub-one-millisecond latency. We are currently in two data centers, but we have broad plans for being available in over 20 planned regions between Oracle Cloud and Amazon Web Services. 09:32 Nikita: Sergio, how do the capabilities of Oracle Database multicloud help enterprises modernize? Sergio: Oracle Database multi-cloud capabilities help enterprises modernize, adopting a Gen AI strategy, obviously, using the Oracle database to bring Oracle's powerful AI to business data. When you move to multi-cloud environments, you have a playground to test and run your workloads and then go into production with your choice of services on the Oracle Exadata. And reducing risk: it's very easy to move to cloud and gain Oracle maximum availability architecture benefits. And by moving into a multi-cloud environment, you guarantee that you're going to be lowering your cost, because you're going to be selecting the best of breed of the services that the Cloud Service provider can offer. Now, with the Oracle Database on multi-cloud environments, you're able to port your Oracle Database knowledge from on-premises or a single cloud provider to a multi-cloud environment. It is the same solution, the same Oracle Database capabilities available everywhere-- on-premises, on your private cloud, on a single cloud provider, or on a multi-cloud environment. Having the same capabilities makes it very easy to migrate from on-premises or to migrate from one cloud service provider to the other. Oracle Database multi-cloud solutions really offer the best of both worlds: a choice of services directly from the hyperscaler marketplace and the vendor's cloud portal.
11:21 Lois: And when you say “hyperscalers,” who exactly are you referring to? Sergio: These hyperscalers, we're talking about OCI, we're talking about Azure, we're talking about Google Cloud, we're talking about AWS. Having the Oracle Database inside the Cloud data centers, regardless of who the hyperscaler provider is, guarantees low latency from your application into your database. But Oracle Database is not the only product. We also offer Oracle Interconnect for Azure and GCP. So if you want to go beyond Oracle Database@Cloud Service provider, or if you're looking to go into a region where the service is not available yet, you can leverage the Oracle Interconnect for Azure or Google Cloud Platform. Basically, this service interconnects the Cloud Service providers. We have partnerships in selected regions where we interconnect with either Azure or Google Cloud Platform. 12:25 Are you working toward an Oracle Certification? Join one of our live certification prep events! Get insider tips from seasoned experts and connect with others on the same path. Visit mylearn.oracle.com and kick off your certification journey today! 12:45 Nikita: Welcome back! Sergio, could you tell us about some key Oracle Database multicloud use cases? Sergio: Move to cloud. Lift and shift from on-premises to Cloud. Lift and shift from one Cloud Service provider to the other, and consolidate your databases on Exadata. This will guarantee all the tools that you need for building innovative applications, bringing artificial intelligence to your business data on the Oracle powerful AI suite, and combining Database AI with hyperscaler services and frameworks. Remember, the best of breed from the Cloud Service provider of your choice. And this will allow you to reduce complexity and cost. Now, porting your knowledge is not the only thing.
You can also lift and shift without refactoring your data, reducing migration times, complexity, and costs with the Oracle Database, Exadata, and maximum availability architecture. 13:47 Nikita: What are the key differentiators and benefits of moving Oracle Database workloads to the cloud? Sergio: Extreme performance. Accelerate your database workloads with scalability, scaling infrastructure and consumption, and extreme cost optimization. But that's not all. You also get extreme availability with the Oracle maximum availability architecture; extreme resiliency, making sure that you're always running with high availability and disaster recovery protection; and extreme simplicity, so you can use all your Oracle Database and Exadata capabilities. Build innovative applications with Cloud-First capabilities. These are Cloud native capabilities that are going to enable you to innovate for all your applications. And having a unified multi-cloud environment reduces complexity and cost, because you can leverage your Exadata infrastructure with shared licenses, low administration with database lifecycle automation, and purchases through your hyperscaler marketplace. So you can have only one vendor handling all billing, even if you're leveraging multi-cloud solutions. And you can leverage your Oracle investments with bring-your-own-license and earn up to 33% towards Oracle tech licenses. Reduce administration by up to 65% with the Autonomous self-driving database. Pay only for actual usage with online scaling, Autonomous Database, elastic pools, and per-second billing. And enjoy advanced features at no added cost, like the built-in AI vector search. 15:31 Lois: Can you give us a real-world example of a company using Oracle Database@AWS? Sergio: Fidelity Investments relies on Oracle Database@AWS.
They were one of the very first customers to leverage the best of both worlds, in this case, the offering from the AWS hyperscale applications and the Oracle Database Exadata Cloud service inside AWS. Specifically, Fidelity uses this integration to make it easier to move some of its database workloads to AWS, combining the reliability and security of AWS with the critical enterprise software provided by Oracle. 16:17 Lois: Thank you, Sergio, for joining us on the podcast! To learn more about what we discussed today, head over to mylearn.oracle.com and search for the Oracle Database@AWS Architect Professional course. Join us next week when we dive deep into what Oracle Database@AWS is all about. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 16:43 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
/episode/index/show/oracleuniversitypodcast/id/39971085
info_outline
From Curiosity to Career Growth: An Oracle AI Certification Journey
01/27/2026
From Curiosity to Career Growth: An Oracle AI Certification Journey
Join us for an inspiring conversation with private equity advisor Jeffrey Malcolm as he shares how Oracle AI certification has transformed his career, family, and approach to business. Discover the real-world impact and opportunities that come from upskilling with Oracle’s leading AI training programs. AI Foundations: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Lois: Welcome to the Oracle University Podcast. I'm your host, Lois Houston, and I'm joined today by Jeffrey Malcolm, Operating Adviser working in the private equity space, to talk about how Oracle AI certifications have impacted his professional and personal life. Hi Jeffrey, thank you so much for taking the time to chat with us today. Our conversation actually stems from a fascinating discussion we had at AI World, Oracle's annual user conference. There you shared your journey to becoming Oracle AI certified... How that process not only shifted your perspective on emerging technologies, but also influenced the way you work, interact with colleagues and clients, and even how you encourage continued learning in your own family. I'm really excited to dive deeper into your story and explore the value and real-world benefits of certification in today's AI-driven landscape. 01:20 Jeffrey: Lois, first of all, thank you for having me. It was fantastic running into your teammates at AI World. It was amazing. You know, for me, as we go through this AI journey with my portfolio companies, I'm constantly looking at what are the new things out there?
How can I get myself enabled? So, excited that we're having this conversation today. 01:42 Lois: That's great. So, let's start at the beginning. Before your certifications, what was your initial reaction when you heard about Oracle's OCI and AI certification programs? Were you immediately interested or was there hesitation? Jeffrey: I was skeptical. You know, I was skeptical about OCI capabilities, as you guys didn't have much market penetration at the time. You know, in my technology career, I built several enterprise applications on AWS, Azure, and GCP. However, OCI Cloud was new, and my wife Kay Malcolm, who you know, kept raving at home over and over about OCI, that the cloud was faster, it was more secure, and cost friendly. All of these things that I'm hearing are appealing to me as a CIO, because that's something that I need to control at the companies that I'm working with. Lois: Right. Jeffrey: So even though I was skeptical, I was like, if all of these things are appealing to her, I'm going to go ahead, I'm going to take the certification, I'm going to confirm all of these allegations that she's making to just make sure that, you know, it's actually true. And I was pleasantly surprised once I pulled the covers back. 02:59 Lois: So, you mentioned that your wife actually encouraged you to sign up for the free OCI Foundations training. Can you tell me a little bit more about that experience and how it influenced your decision to continue learning? Jeffrey: When she took the OCI test, she passed with a 95% score. So, you know, that encouraged me to just, you know, take it as informed as I can. And to be honest, I wanted to beat her score because, you know, we're competitive. Upon passing and seeing the high quality of the content, it was just hard for me to keep this internally. I wanted to share it with my network. I wanted to kind of see if there were others that could benefit from it.
But my initial piece was: how can I beat her? And I was able to beat the score. I scored a 96, beat her, and started sharing it with my network. And what happened, Lois, was amazing. We found a cohort of around 50 individuals who wanted to start taking the same course. We were like, hey, this is something that's amazing. We had individuals who were teachers. We had individuals who worked in corrections facilities. We had plumbers. We had electricians. And they were all skeptical about taking this highly technical course. But we said, "Hey, it's self-paced. It's something that you can do and it can really benefit your career." So at the end, we had 50 people who took it. Of the 50, we had 30 brave souls who went ahead and took the certification. And of the 30, we had 24 people who passed. That's an 80% pass rate. Lois: Yeah. Jeffrey: And it was so successful, we actually had one individual who shared their news. He was able to get a new position as a technical project manager and 3x his salary. So, it was just amazing to watch how people were brave enough to take this content, how OCI did an amazing job of making it self-paced and absorbable, and then people got the certification, we published it on LinkedIn, and people actually got jobs. So, it was quite impressive. 05:24 Lois: That's an incredible story. So, you didn't just become a believer, you actually went and built an application on OCI, right? What was the project, and how did your new skills play a role in making that happen? Jeffrey: That's a funny story. At the time I was doing the OCI training, I was building a mobile native application for a startup that was looking to impact climate change. They were a socially conscious enterprise dedicated to bringing human-centered tools to individuals so they could live a better life and protect our environment.
The main focus was: how can they create an application that had no ads, only information, and provide a tool that would allow people to do joyful actions such as recycling, lowering the power consumption in your home, or moving away from plastics and just not consuming that much plastic? We really wanted to gamify that and build an application that could do it. My training gave me the confidence, as I was architecting the solution, to say I needed to build something scalable and secure. In full transparency, at the time, myself and the rest of my development team were looking at completely AWS-based solutions. From this training, I realized that if we really wanted something secure and scalable, Oracle Database, specifically Autonomous Database, was it. So we switched: we built a multicloud solution across Azure, AWS, and GCP as well as OCI. OCI had our backend, and we built our application to leverage it, specifically because after taking the training I was convinced that the backend needed to run on Oracle Database, specifically Autonomous Database. The application has now been running for three years with no issues from a scalability standpoint, and it's been fantastic for us. 07:34 Lois: Well, that's great. That's a great story about how you leveraged your training into something that actually made a difference in your job. So let's talk a little bit about your AI certification. You've described the AI Foundations training that you took from Oracle University as demystifying. So tell me about some of the biggest takeaways for you. How did it shift your understanding of what AI really is and how it can be used? Jeffrey: That's a great point. In the last two or three years, AI has just been the talk of the town, and specifically in my role as an advisor to private equity companies, I'm constantly being asked: how can AI impact the top line?
How can AI improve the bottom line and help us realize the multiple on our investment thesis to exit our different companies? With my background, whenever I look at a problem, I need to understand the guts of it, and at the time there were all of these myths and, to be honest, confusion and fear around AI. Coming from an engineering background at MIT, one of the things MIT taught me is to look under the covers to truly understand something from a technology standpoint, and to do my due diligence before sharing best practices with the portfolio companies I'm working with. So that made me take on this challenge: I need to understand the difference between machine learning and deep learning. What are the different kinds of neural networks out there? When do you want to use them? The AI Foundations training that Oracle was offering was compelling to me. I'd had great success with the OCI piece, so I thought, let me take this on. That's what really started my journey back in January of 2023, just a few months after the release of ChatGPT, and I really wanted to understand how AI could skyrocket and help our companies drive value. So that's what made me take it on. I wanted to understand the difference between RNNs, recurrent neural networks, and convolutional neural networks, and the best business cases our companies can use them for. What's the best time to use a vector database? Why is it important? Why is it needed for an AI solution? I wanted to be able to articulate the difference between a RAG and an agentic AI workflow to our companies. So that really was the impetus for why I wanted to take on this piece and do the AI Foundations training. 10:08 Lois: And your journey didn't stop with you and Kay, right? Your sons are both Oracle AI-certified as well, as I understand it. So, tell me a little bit about that. What inspired them?
Jeffrey: So, our poor boys. We have two boys, both in college. One, in full transparency, is a computer science major at Georgia Tech. The other is doing a biology major at Kennesaw State. In our household, we believe in technology, and we believe anyone can learn it. So, after getting a better understanding of AI and realizing AI is really going to impact every aspect of our society and our industry, we said we should absolutely have our boys do this. Well, of course, as with any young kids, they were going to be hesitant, so we had to really incentivize them. Our youngest, who had not been exposed to this technology, was starting his new business and wanted to learn; for our oldest at Georgia Tech, in that computer science major, this was going to help him secure a summer internship. So, to incentivize them, we turned off the Wi-Fi, and the Wi-Fi could only be on if they were doing the certification. In full transparency, after, I would say, a weekend and about three days, they were able to complete the course, pass, and truly understand on a foundational level: what's the difference between RAG and agents? What's the difference between RNNs and CNNs? What are neural networks, and what's deep learning versus machine learning? For my oldest, who is in computer science, it helped him secure a summer internship because he was able to talk about AI in a very clear way and show that he understood it. He was also able to show his certification, and that helped him secure an internship with Oracle on the OCI team as a software developer. And to be honest, he's going into his third summer at Oracle, coming up this next summer in 2026. So, it's been beneficial.
I tell people this is something you should absolutely do, and we encourage our friends and tell the story about our boys because it's personal. We show that anyone can do it. 12:35 Lois: That's an awesome story. And the whole family is AI certified. That's great. So, you mentioned that you've been sharing your experience with your friends and your colleagues and neighbors. What are some of the common misconceptions or fears that you encounter when you're talking about AI with people? How do you help them understand what it means for their careers and for their lives? Jeffrey: It's a great question. A lot of people I talk to still think AI is either going to replace them or that it's too technical for them to ever understand. Lois: Right. Jeffrey: And the fear usually comes from not knowing where to start. I tell them that AI is really just a tool, and learning the basics helps you see where it fits into your work life. And once they understand that it's here to help, not to replace them, the conversation shifts and becomes: oh, okay, now how can I become more knowledgeable so I can be less fearful and identify opportunities? So really, I've been having conversations to say, look, it is not something that's here to replace you. It is a tool, and once you understand how you can use it at your job, in your schoolwork, or where you volunteer, it can really drive automation and speed and allow you to do your job much better. Lois: Yeah, that's so true. And that knowledge and understanding is so powerful. It really does change people's perspective from being fearful to being excited about the possibilities with AI. 14:13 Be a part of something big by joining the Oracle University Learning Community! Connect with over 3 million members, including Oracle experts and fellow learners. Engage in topical forums, share your knowledge, and celebrate your achievements together.
Discover the community today at mylearn.oracle.com. 14:34 Lois: So tell me a little bit about your private equity work. I know you interact with a wide variety of clients. How does this knowledge about Oracle's AI technology and having the certification empower you to have conversations and build trust with your clients? Jeffrey: The biggest value I've received from getting the Oracle AI certification is that it gives me clear and practical foundations for talking to people about what AI is and what it's not. Let's be honest, there's a lot of hype out there about AI, and a lot of hype and fear that is unproven. In my work with private equity, clients want to know what's real, what's possible, and what's worth our investment. Is this something that we should really look at? So when I can explain AI concepts like agentic workflows and neural networks and why they're important, which neural networks are better for vision, which are better for audio, and which are just better for text, when I can really get down to those simple terms and connect with them on the operating challenges their companies are facing, then I have tangible case studies I can work through with my companies. That builds credibility, and the hype and fear start to subside and go away. With my private equity companies, I don't want to do something just because it's the hype. We really need to make sure that whatever we're doing can drive growth, and drive EBITDA growth, so that we can realize our investment thesis.
So this certification really helped me ground things so that I could have real conversations with our companies about which activities are going to drive growth, which are going to drive efficiency, and which are going to create value for the company. It's been something that has really been helpful. Another thing I wanted to share is that Kay and I have been working not only on taking it to enterprises, but I also want to take it to universities. We've been working with her mother's alma mater, Alabama State, which is a historically Black college and university, to help them get on Oracle AI and get their foundation going, because we want to take this down to the college level and help drive and offer it there. Through that interaction, they've reached out to the city of Montgomery, and they want to work with the public schools to start getting the school system AI Foundations certified and to understand how AI can evolve in everything that you do. So, we've been working with them. Oracle actually held a quick event here in Atlanta, and they were able to attend and see some of the applications, and we're hoping to just continue this. So it's not something that I'm only talking to my private equity companies about. I also want to bring this into universities and into the schools, because it's a fundamentally different way to solve problems, and anyone can do it. You don't have to have a technical background. We're at a foundationally different level where anyone can start their AI journey. 18:05 Lois: Right. And we're just at the beginning of this transformation of the industry. It's a great way to teach the next generation how to be prepared so they can have great careers leveraging AI.
So, one of the things you've mentioned to me when we've talked in the past is that you boil AI down to two things, data and math, right? Not innovation itself, but a tool. So can you elaborate a little bit on that? Jeffrey: Yeah, it's one of the things I like to say. People who talk to me say, Jeff always boils it down. So, when I look at it, generative AI's foundation is based on the concepts of machine learning and deep learning. Lois: Right. Jeffrey: Both concepts are based on linear algebra, calculus, probability, statistics, and optimization theory. Those are some of the foundations of both. LLMs use these foundations and data to generate content and execute tasks. People's actions in systems generate the data that these LLMs use, and the math follows specific patterns, such as Bayes' theorem or the Pythagorean theorem. Innovation, by contrast, requires thinking out of the box and not just following one of these patterns, which is only accomplished through people and our unique experiences. So when I think of generative AI, that's why I see it as a tool. You're always going to require human interaction to drive an outcome. Lois: Right. Jeffrey: It is a combination of math and data. Our unique experience of how we engage and how we look at a problem brings innovation to a challenge. So that's one of the things I always say: I boil it down to math and data. Innovation comes from people. And the reason it comes from people is that we all have unique experiences and unique backgrounds, so we look at a problem differently in terms of how we solve it, and that difference is what drives the innovation. Lois: Right. And people just leverage AI to do so. Jeffrey: Correct. 20:20 Lois: So you've said that AI levels the playing field.
What does that mean to you in practical terms, and how have you seen it make an impact in your area of expertise? Jeffrey: When I say level the playing field, what I mean is there's a unique opportunity where this technology is impacting every industry that we know. Prior to this, if there was a technology solution, let's look at the big ones, like cloud, with everyone moving to the cloud, it required you to have some knowledge of cloud technology. Big data required you to have some knowledge of data solutions. AI is so transformational that, if we take a look at the vibe coding that's out there, your ability to think about a problem, break it down in a tangible way, build an app, and leverage vibe coding...
/episode/index/show/oracleuniversitypodcast/id/39817015
Driving Business Value with OCI – Part 2
01/20/2026
Driving Business Value with OCI – Part 2
Security, compliance, and resilience are the cornerstones of trust. In this episode, Lois Houston and Nikita Abraham continue their conversation with David Mills and Tijo Thomas, exploring how Oracle Cloud Infrastructure empowers organizations to protect data, stay compliant, and scale with confidence. Real-world examples from Zoom, KDDI, 8x8, and Uber highlight these capabilities. Cloud Business Jumpstart: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Communications and Adoption with Customer Success Services, and with me is Nikita Abraham, Team Lead: Editorial Services with Oracle University. Nikita: Hi everyone! In our last episode, we started the conversation around the real business value of Oracle Cloud Infrastructure and how it helps organizations create impact at scale. Lois: Today, we’re taking a closer look at what keeps the value strong — things like security, compliance, and the technology that helps businesses stay resilient. To walk us through it, we have our experts from Oracle University, David Mills, Senior Principal PaaS Instructor, and Tijo Thomas, Principal OCI Instructor. 01:12 Nikita: Hi David and Tijo! It’s great to have you both here! Tijo, let’s start with you. How does Oracle Cloud Infrastructure help organizations stay secure? Tijo: OCI uses a security-first approach to protect customer workloads. This is done by implementing a Zero Trust model.
A Zero Trust security model uses frequent user authentication and authorization to protect assets while continuously monitoring for potential breaches. It assumes that no users, devices, or applications are universally trusted. Continuous verification is always required. Access is granted only based on the context of the request, the level of trust, and the sensitivity of the asset. There are three strategic pillars that Oracle's security-first approach is built on. The first one is being automated. With automation, the business doesn't have to rely on any manual work to stay secure. Threat detection, patching, and compliance checks all happen automatically, which reduces human error and also saves time. Security in OCI is always turned on. Encryption is automatic. Identity checks are continuous. Security is not an afterthought in OCI. It is incorporated into every single layer. Now, while we talk about Oracle's security-first approach, remember that security is a shared responsibility. What that means is that while Oracle handles the data center, the hardware, and the infrastructure software, customers are responsible for securing their apps, configurations, and data. 03:06 Lois: Tijo, let’s discuss this with an example. Imagine an online store called MuShop. They’re a fast-growing business selling cat products. Can you walk us through how a business like this can enhance its end-to-end security and compliance with OCI? Tijo: First of all, focus on securing the web servers. These servers host the web portal where customers browse, log in, and place their orders. So these web servers are a prime target for attackers. To protect these entry points, MuShop deployed a service called OCI Web Application Firewall. On top of that, MuShop has also used OCI security lists and network security groups to control traffic flow.
As the business grows, new users such as developers, operations, and finance staff all need to be onboarded. OCI identity services are used to assign roles, for example, giving developers access to only the dev instances, while finance can access just the billing dashboards. MuShop also requires MFA, multi-factor authentication, using both a password and a time-based authentication code to verify identities. As for critical customer data like emails, addresses, and payment info, this data is stored in databases and storage. Using OCI Vault, the data is encrypted with customer-managed keys. Oracle Data Safe is another service, used to audit who has access to sensitive tables and to mask real customer data in non-production environments. 04:59 Nikita: Once those systems are in place, how can MuShop use OCI tools to detect and respond to threats quickly? Tijo: For that, MuShop used a service called OCI Cloud Guard. Think of it like a security operations center built right into OCI. It monitors the entire OCI environment continuously, and it can track identity activities, storage settings, network configurations, and much more. If it finds something risky, like a publicly exposed object storage bucket, or maybe a user having overly broad access to the environment, it raises a security finding. And better yet, it can automatically respond. So if someone creates a resource outside of policy, OCI Cloud Guard can disable it. 05:48 Lois: And what about preventing misconfigurations? How does OCI make that easier while keeping operations secure? Tijo: OCI Security Zones is another service, used to enforce security postures in OCI. These zones help you avoid accidental misconfigurations. For example, in a security zone, you can prevent users from creating a storage bucket that is publicly accessible. To stay ahead of vulnerabilities, MuShop runs OCI vulnerability scanning.
They have scheduled weekly scans to catch any outdated libraries or misconfigurations. OCI Security Advisor is another service, used to flag unused open ports and recommend stronger access rules. MuShop needed more than just security. They also had to be compliant. OCI's compliance certifications have helped them meet data privacy and security regulations across different regions and industries. There are additional services like OCI audit logs for traceability that help them pass internal and external audits. 07:11 Oracle University is proud to announce three brand new courses that will help your teams unlock the power of Redwood—the next generation design system. Redwood enhances the user experience, boosts efficiency, and ensures consistency across Oracle Fusion Cloud Applications. Whether you're a functional lead, configuration consultant, administrator, developer, or IT support analyst, these courses will introduce you to the Redwood philosophy and its business impact. They’ll also teach you how to use Visual Builder Studio to personalize and extend your Fusion environment. Get started today by visiting mylearn.oracle.com. 07:52 Nikita: Welcome back! We know that OCI treats security as a continuous design principle: automated, always on, and built right into the platform. David, do you have a real-world example of a company that needed to scale rapidly and was able to do so successfully with OCI? David: In late 2019, Zoom averaged 10 million meeting participants a day. By April 2020, that number surged to over 300 million as video conferencing became essential for schools, businesses, and families around the world due to the global pandemic. To meet that explosive demand, Zoom chose OCI not just for performance, but for the ability to scale fast. In just nine hours, OCI engineers helped Zoom move from deployment to live production, handling hundreds of thousands of concurrent meetings immediately.
Within weeks, they were supporting millions. And Zoom didn't just scale, they sustained it. With OCI's next-gen architecture, Zoom avoided the performance bottlenecks common in legacy clouds. They used OCI Functions and cloud native services to scale workloads flexibly and securely. Today, Zoom transfers more than seven petabytes of data per day through Oracle Cloud. That's enough bandwidth to stream HD video continuously for 93 years. And they do it while maintaining high availability, low latency, and enterprise-grade security. As articulated by their CEO Eric Yuan, Zoom didn't just meet the moment, they redefined it with OCI behind the scenes. 09:45 Nikita: That’s an incredible story about scale and agility. Do you have more examples of companies that turned to OCI to solve complex data or integration challenges? David: Telecom giant KDDI, with over 64 million subscribers, faced a growing data dilemma. Data was everywhere: survey results, system logs, behavioral analytics. But it was scattered across thousands of sources. Different tools for different tasks created silos, delays, and rising costs. KDDI needed a single platform to connect it all, and they chose Oracle. They replaced their legacy data systems with a modern data platform built on OCI and Autonomous Database. Now they can analyze behavior, improve service planning, and make faster, smarter decisions without the data chaos. But KDDI didn't stop there. They built a 300-terabyte data lake and connected all their systems: custom on-prem apps, SaaS providers like Salesforce, and even multi-cloud infrastructure. Thanks to Oracle Integration and pre-built adapters, everything works together in real time, even across clouds. AWS, Azure, and OCI now operate in harmony. The results? Reduced operational costs, faster development cycles, and improved governance and API access across the board. KDDI can now analyze customer behavior to improve services, like deciding where to expand their 5G network.
Next up, 8x8 powers communication for over 55,000 companies in 160 countries, with more than 3 million users depending on its voice, video, and messaging tools every day. To maintain that scale, they needed a cloud that could deliver low latency, global availability, and high performance without blowing up costs. Well, they moved their video meeting services from Amazon to OCI and went live in just four days. The results? A 25% increase in performance per node, an 80% reduction in network egress costs, and a significantly lower overall infrastructure spend. But this wasn't just a lift and shift. 8x8 also replaced legacy tools with Oracle Logging Analytics, giving their teams a single view across apps, infrastructure, and regions. 8x8 scaled up fast. They migrated core voice services, deployed over 300 microservices using OCI Kubernetes, and now run over 1,700 nodes across 26 global OCI regions. In addition, OCI's Ampere-based virtual machines gave them a major boost, sustaining 80% CPU utilization and a more than 30% increase in performance per core, with no degradation. And with OCI's Observability and Management platform, they gained real-time visibility into application health across both on-prem and cloud. Bottom line, 8x8 represents yet another excellent example of a company leveraging OCI for maximum business results. 13:24 Lois: Uber handles more than a million trips per hour, and Oracle Cloud Infrastructure is an integral part of making that possible. Can you walk us through how OCI supports Uber’s needs? David: Uber, the world's largest on-demand mobility platform, handles over 1 million trips every hour. And behind the scenes, OCI is helping to make that possible. In 2023, Uber began migrating thousands of microservices, data platforms, and AI models to OCI. Why? Because OCI provides the automation, flexibility, and infrastructure scale needed to support Uber's explosive growth.
Today, Uber uses OCI Compute to handle massive trip-serving traffic and OCI Object Storage to replace one of the largest Hadoop-based data environments in the industry. They needed global reach and multi-cloud compatibility, and OCI delivered. But it's not just scale, it's intelligence. Uber runs dozens of AI models on OCI to support real-time predictions, up to 14 million per second. From ride pricing to traffic patterns, this AI layer powers every trip behind the scenes. And by shifting stateless workloads to OCI Ampere ARM Compute servers, Uber reduced cost while increasing CPU efficiency. For AI inferencing, Uber uses OCI's AI infrastructure to strike the perfect balance between speed, throughput, and cost. So the next time you use your Uber app to schedule a ride, consider what happens behind the scenes with OCI. 15:18 Lois: That’s so impressive! Thank you, David, for those wonderful stories, and Tijo for all of your insights. Whether you’re in strategy, finance, or transformation, we hope you’re walking away with a clearer view of the business value OCI can bring. Nikita: Yeah, and if you want to learn more about the topics we discussed today, visit mylearn.oracle.com and search for the Cloud Business Jumpstart course. Until next time, this is Nikita Abraham… Lois: And Lois Houston signing off! 15:48 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
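The role-based access described in this episode, such as developers limited to dev instances and finance to billing views, maps onto OCI's IAM policy language. Here is a minimal sketch of that idea; the group and compartment names are hypothetical, invented purely for illustration:

```
Allow group Developers to manage instances in compartment Dev
Allow group Finance to inspect all-resources in compartment Billing
Allow group SecOps to use vaults in compartment Prod
```

Each statement grants one group a verb (inspect, read, use, or manage) over a resource type in a compartment, which is how least-privilege access is expressed in OCI.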
/episode/index/show/oracleuniversitypodcast/id/39763330
Driving Business Value with OCI – Part 1
01/14/2026
Driving Business Value with OCI – Part 1
Understanding cloud costs can be challenging, but it’s essential for maximizing value. In this episode, hosts Lois Houston and Nikita Abraham speak with Oracle Cloud experts David Mills and Tijo Thomas about how Oracle Cloud Infrastructure offers predictable pricing, robust security, and high performance. They also introduce FinOps, a practical approach to tracking and optimizing cloud spending. Cloud Business Jumpstart: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:27 Nikita: Welcome back to another episode of the Oracle University Podcast! I’m Nikita Abraham, Team Lead of Editorial Services with Oracle University, and I’m joined by Lois Houston, Director of Communications and Adoption with Customer Success Services. Lois: Hi everyone! Last week, we talked about how Oracle Cloud Infrastructure brings together developer tools, automation, and AI on a single platform. In today’s episode, we’re highlighting the real-world impact OCI can have on business outcomes. 00:58 Nikita: And to tell us about this, we have our experts David Mills and Tijo Thomas back with us. David is a Senior Principal PaaS Instructor and Tijo is a Principal OCI Instructor, and they’re both from Oracle University. David, let’s start with you. What makes Oracle Cloud Infrastructure the trusted choice for organizations across industries like banking, healthcare, retail, and government? David: It all comes down to one thing. OCI was built for real businesses, not side projects, not hobby apps, not test servers, but mission-critical systems at scale. 
Most clouds brag about their speed, but OCI is consistently fast, even under pressure. And that's because Oracle built OCI on a non-blocking network and bare metal infrastructure, with dedicated resources and no noisy neighbors. So, whether you're running one application or 1,000, you get predictable, low-latency performance every time. And OCI doesn't force you into any specific mold. You want full control? Spin up a virtual machine and configure everything. You need to move fast? Use a managed service like Autonomous Database or Kubernetes. Prefer to build your own containers, functions, and APIs, or develop with low-code or even no-code tools? OCI supports all of it. And it plays nicely with your existing stack—on-prem or in another cloud. OCI adapts to how you already work instead of making you start over. 02:39 Lois: And when it comes to pricing, how does OCI help customers manage costs more effectively? David: OCI is priced for real business use, not just the flashy low entry number. You only pay for what you use. No overprovisioning, no lock-in. Virtual machines can scale up and down automatically. Object storage automatically shifts to a lower cost tier based on frequency of access. Autonomous services don't need babysitting or patching. And unlike some providers, OCI doesn't charge you to get your own data back. It's enterprise-grade cloud without enterprise-grade sticker shock. 03:26 Lois: Security and flexibility are top priorities for many organizations. How does OCI address those challenges? David: OCI treats security as a starting point, not an upsell. From the moment you create an account, every tenant is isolated. All data is encrypted. Admin activity is logged, and security tools like Cloud Guard are ready to go. And if you need to prove compliance for GDPR, FedRAMP, HIPAA, or more, you're covered. OCI is trusted by the world's most regulated industries. Most companies don't live in one cloud.
They've got legacy systems, other cloud providers, and different teams doing different things. OCI is designed to work in hybrid and multi-cloud environments. Connect to your on-prem apps with VPN or FastConnect. Run Oracle workloads in your data center with Cloud@Customer. Interconnect with Azure and Google Cloud or integrate with Amazon. OCI isn't trying to lock you in. It's seeking to meet you where you are and help you modernize without breaking what works. 04:40 Nikita: Can you share an example of a business that’s seen measurable results with OCI? David: A national health care provider was stuck on aging hardware with slow batch processing and manual upgrades. They migrated core patient systems to OCI and used Oracle Autonomous Database for faster, self-managed workloads. They leveraged Oracle Integration to connect legacy electronic health records and OCI FastConnect to keep data in their on-prem systems in real-time sync. They went from 12-hour downtime windows to zero, from three weeks to launch a feature to three days, and they cut infrastructure cost by 38%. And that's what choosing OCI looks like. 05:37 Are you looking to boost your expertise in enterprise AI? Check out the Oracle AI Agent Studio for Fusion Applications Developers course and professional certification—now available through Oracle University. This course helps you build, customize, and deploy AI Agents for Fusion HCM, SCM, and CX, with hands-on labs and real-world case studies. Ready to set yourself apart with in-demand skills and a professional credential? Learn more and get started today! Visit mylearn.oracle.com for more details. 06:12 Nikita: Welcome back! Tijo, controlling costs while driving innovation is a tough balancing act for many organizations. What are the biggest challenges organizations face when trying to manage and optimize their cloud spending? Tijo: The first one is unexpected cloud cost. Let's be honest. Cloud bills can be shocking. 
You think you've got things under control, then the invoice shows up and you realize it is way over budget. Without real-time visibility, it is quite hard to catch these surprises before they happen. The next one is waste of resources and inefficiencies. It is quite common to find resources that are just sitting idle, such as unused storage, underutilized CPU, or overprovisioned memory. It may not seem like much wastage at first, but over time it all adds up. Then there is no clear ownership of cloud spend, which is one of the big problems in cost management. If costs are not clearly tagged to a team or a project, nobody feels responsible, and that makes it really tough to manage or reduce cloud spend. There are also misaligned priorities across teams. Finance may want to cut costs, while engineering wants to move faster and operations wants everything to be up and running. Every team is doing its best, but without a common approach to cost, it becomes challenging to prioritize. Slow and reactive decision making is another challenge. Most cost issues get identified after the bill is invoiced, and by then the budget has already been spent. Without timely data, it becomes difficult to make real-time changes. And then there is the complexity of multi-cloud and multi-region footprints. As businesses grow across regions and adopt multi-cloud deployment models, tracking where the budget is going gets really tricky. More services mean more teams and more complexity. Now, all of these challenges have one thing in common. They need a better way to manage cloud cost together. And this is where FinOps comes in. 08:42 Lois: And what exactly is FinOps? How does it address these cloud cost challenges? Tijo: FinOps stands for financial operations. 
It is a framework that brings teams like engineering, operations, finance, and beyond together so that cloud spending becomes smarter, more visible, and better aligned with business goals. And so FinOps is not just a tool, it is a way of working. According to the FinOps Foundation, the FinOps lifecycle happens in three phases: inform, optimize, and operate. The inform phase is about visibility and allocation, which means you gather cost, usage, and efficiency data in order to forecast and budget. The optimize phase is about rates and usage, and this is where you take action to optimize and bring efficiencies. And then in operate, you turn those into continuous improvements through policies, training, and automation. 09:51 Nikita: Let’s unpack FinOps a bit more. Why is understanding your cloud subscription model so fundamental in the Inform phase? Tijo: Because cost visibility is very important while managing your Oracle Cloud subscription. There are two ways to purchase OCI services. The first one, which we refer to as the pay-as-you-go model, means you pay for what you use. The second one is called the universal credit annual commitment model, where you purchase a prepaid amount of universal credits, and the prepaid amount is drawn down based on actual usage. OCI provides a portal called FinOps Hub, where you can easily track how your usage has changed month by month over the past year. Through the Hub, you can monitor whether you have stayed within your credit allocation. You will also see how much of your committed credits have been used, how much is left, and when your commitment is set to expire. The next step is to gain visibility, or to understand the cost. In Oracle Cloud Infrastructure, this starts with a service called Cost Analysis. OCI Cost Analysis helps you filter, group, and visualize your cloud costs in a way that makes sense for your business. You can compare costs over time. 
You can drill down into costs by service and track spending by specific teams or projects. And then finally, you can export detailed reports for finance or leadership reviews. OCI Cost Analysis gives you an interactive, near real-time view of your cloud spending. So you're not just seeing the numbers, you are understanding what is driving them. The next one is about setting up spending limits, and this is done through OCI Budgets. For example, an organization can set up a monthly budget for the development team. If their cloud usage exceeds 80% of that limit, an alert is triggered to notify the team. This means you can configure a threshold, send alerts, or even take actions automatically. 12:16 Lois: Tijo, what happens during the Optimize and Operate phases of the FinOps framework? Tijo: The inform stage was more about awareness. In the optimize phase, you take the data you've collected and use it to optimize resources and improve efficiency. In OCI, we'll start with Cloud Advisor. OCI Cloud Advisor finds potential inefficiencies in your tenancy and offers guided solutions that explain how to address them. The recommendations help you maximize cost savings. For example, it gives you personalized recommendations like deleting idle resources or resizing compute instances. Secondly, you can identify steps for performance improvements. And finally, you can enhance high availability and security through suggested configurations for your cloud resources. The third phase, operate, is about making optimization a routine of continuous improvement, and this is done by incorporating FinOps into your organization. OCI provides cost and usage reports that can be generated automatically every day. These reports show detailed usage data for every OCI service that you're using. You can export cost reports in FOCUS format. FOCUS is an industry standard, and it stands for FinOps Open Cost and Usage Specification. 
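The 80% budget alert Tijo describes can be sketched as a few lines of plain Python. This is only an illustrative model of the threshold logic, not the OCI Budgets service itself; the team names and dollar figures are made up for the example.

```python
# Illustrative sketch of budget-alert logic: flag any team whose
# month-to-date spend has crossed a percentage threshold of its
# monthly budget. In OCI, the Budgets service evaluates this for
# you; here the figures and team names are hypothetical.

def budget_alerts(spend_by_team, budget_by_team, threshold=0.80):
    """Return {team: percent_used} for teams at or over the threshold."""
    alerts = {}
    for team, budget in budget_by_team.items():
        spend = spend_by_team.get(team, 0.0)
        pct = spend / budget
        if pct >= threshold:
            alerts[team] = round(pct * 100, 1)
    return alerts

if __name__ == "__main__":
    budgets = {"dev": 10_000, "analytics": 5_000}
    spend = {"dev": 8_200, "analytics": 1_900}
    # dev has used 82% of its budget, so only dev is flagged
    print(budget_alerts(spend, budgets))
```

In the real service, crossing the threshold would trigger an email or automated action rather than a printed dictionary, but the evaluation is the same idea.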
13:52 Nikita: And what makes the FOCUS format important for organizations? Tijo: The format makes the cost data consistent. It is well structured and ready to use with other FinOps tools or dashboards. These reports can also be ingested into Business Intelligence or analytics tools for better visualizations. Organizing your resources the right way is the key to more accurate and simpler data. Without a clear structure, your cost data will be too complex. In OCI, this structure starts with your tenancy. The tenancy is your top-level OCI account, and it represents your organization's entire cloud presence. Next, you have compartments. Compartments help you break down your cloud environment into logical groups, for example, by department, business unit, or project. Then there are tags, and this is where cost visibility gets more meaningful. Tags allow you to assign custom labels to each resource, things like environment type, cost center, or owner name. 15:06 Lois: Some people think cost visibility is a concern mainly for finance teams. What’s your perspective on this? Tijo: Cost visibility should be a shared responsibility, which means it shouldn't just sit with finance. Engineers, architects, and project owners all need access to the cost data that is relevant to them. Because when teams have visibility, they take ownership, and that leads to decisions which are faster, smarter, and more aligned with business goals. 15:42 Nikita: Thank you, David and Tijo, for joining us and sharing your insights. Lois: If you’d like to learn more, visit mylearn.oracle.com and look for the Cloud Business Jumpstart course. Next week, we’ll explore security and compliance in OCI. Until next time, this is Lois Houston… Nikita: And Nikita Abraham signing off! 16:03 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. 
We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Getting to Know Oracle Cloud Infrastructure
01/06/2026
Every system depends on reliable infrastructure behind the scenes. Oracle Cloud Infrastructure (OCI) delivers that reliability with speed, flexibility, and built-in security. Join Lois Houston and Nikita Abraham as they speak with Oracle Cloud experts David Mills and Tijo Thomas about what makes OCI different and how it drives real results for businesses of every size. Cloud Business Jumpstart: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ----------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Communications and Adoption with Customer Success Services, and with me is Nikita Abraham, Team Lead: Editorial Services with Oracle University. Nikita: Hi everyone, and welcome to a brand-new season of the podcast! We’re really excited about this one because we’ll be diving into how Oracle Cloud Infrastructure is transforming the way businesses innovate, stay secure, and drive results. 00:55 Lois: And to help us with this, we’ve got two experts who know this space inside out—David Mills, Senior Principal PaaS Instructor, and Tijo Thomas, Principal OCI Instructor, both from Oracle University. Hi David! For those who might not be familiar, could you explain what Oracle Cloud Infrastructure is? David: OCI, as we call it, is Oracle's enterprise-grade cloud platform, built from the ground up to run the systems that matter most to business. It provides the infrastructure and platform services businesses need to build, run, and scale applications securely, globally, and cost-effectively. 
To provide more context, all of Oracle's SaaS applications, such as NetSuite, Customer Experience, Human Capital Management, Supply Chain Management, as well as Enterprise Resource Planning and Enterprise Performance Management, all run on OCI. But OCI isn't just for Oracle's own apps. It's a full-featured cloud platform used by thousands of customers to run their own applications, data, and services. OCI includes platform services such as databases, integration, analytics, and many others, and of course, the infrastructure services, such as compute, networking, and storage, which comprise the core of OCI. Bottom line, if something is running on Oracle Cloud, OCI is behind it. OCI includes over 100 services across numerous categories like compute, storage, networking, database, containers, AI, developer tools, integration, security, observability, and much more. So, whether you're lifting and shifting legacy workloads or building new apps in the cloud, OCI has the building blocks. 03:02 Lois: David, who was OCI designed for? David: OCI was built from scratch to address the limitations of first-generation clouds. No patchwork of legacy acquisitions, just a clean, modern, high-performance foundation designed for real enterprise workloads. OCI was designed for businesses that can't compromise: financial services, health care, retail, governments; customers with strict regulations, global scale, and mission-critical systems. These are the companies choosing OCI not just because it works, but because it works under pressure. 03:42 Nikita: What else makes OCI different from other cloud platforms? David: Oracle's network and storage architecture delivers low-latency results consistently. Then there's pricing—simple, predictable, and often much lower than our competitors. OCI was designed with governance and security in every layer. 
OCI supports all types of cloud strategies: public cloud, hybrid deployments, multi-cloud environments, and even a dedicated cloud we can install inside your own data center. We call all that distributed cloud, and that's where OCI really shines. OCI gives you everything you need to modernize your technology stack, run securely at scale, and build for the future without giving up control or blowing your budget. 04:37 Lois: Now, Tijo, we’ve covered what OCI is, who it’s for, and what makes it unique. Let’s switch gears a bit and talk about cloud regions. For anyone who doesn’t know, a cloud region is just a specific geographic location where Oracle, or any cloud provider, runs its own data centers. Why does the choice of region matter for businesses, and what should they think about when picking one? Tijo: Many businesses are required by law to keep their data within national borders. Whether it is GDPR in Europe or local privacy laws in Australia or Singapore, choosing the right region helps you stay compliant. The closer your applications are to your users, the faster they perform. Running in a nearby region means lower latency, faster response times, and a better customer experience. Then there is disaster recovery and high availability. Regions are the building blocks for setting up failover strategies. By deploying workloads in multiple regions, businesses can protect themselves from outages and keep their systems running. Some businesses also need to meet industry-specific compliance requirements. Think of sectors like health care, government, or finance. They often require that the infrastructure and the data stay within national or regional boundaries. If your business is growing into new markets, regions allow you to deploy apps and services closer to your customers without needing to build new data centers. 
Regions also enable local integrations and partnerships, whether it is connecting with ISPs, local service providers, or complying with in-country partner requirements. Having a region nearby makes those integrations and operations smoother. Regions are not just about geography. They are a critical part of how businesses stay compliant, resilient, and responsive across the globe. Oracle runs a fast-growing global network of cloud regions, and each OCI region is fully independent and fully isolated. You choose your regions, and your data stays there. 07:06 Nikita: And are there different types of cloud regions? Tijo: There are several: commercial regions, sovereign regions, government regions, and multi-cloud regions. Even with a wide range of cloud regions, some organizations cannot move their workloads and their data to the public cloud. Those workloads may need to stay in their own on-premises data center, but at the same time, they still want to leverage the benefits of OCI. 07:42 Take your cloud skills to the next level with the new Oracle Database@AWS course. Master provisioning, migration, security, and high availability for Oracle Database on AWS. Then validate your experience with an industry-recognized certification. Stand out in the multicloud space and accelerate your career. Visit mylearn.oracle.com for more information. 08:09 Nikita: Welcome back! We were talking about workloads and how some companies may have to keep their workloads on-premises. Why would they need to do that, Tijo? Tijo: First, data sovereignty. There may not be a public cloud region in the location the organization is looking for, or the business may need to set up a disaster recovery strategy within that specific location. Then there is security and control. Some industries have very strict regulations, and they require physical access and oversight of their infrastructure. And finally, there are latency-sensitive workloads. 
These are applications that cannot afford the delay of going back and forth to a remote cloud region. They need cloud services right next to their physical data center. 08:59 Nikita: So, how does Oracle help with that? Tijo: To address these requirements, Oracle offers a set of solutions. The first one is called Dedicated Region, and the second one is called Cloud@Customer services. Through both of these offerings, you get OCI services right in your data center, all behind your firewall, while achieving the benefits of flexibility and automation. 09:24 Nikita: So, what’s a dedicated region? Tijo: Dedicated Region is a completely managed cloud region that brings all the OCI services and Oracle Fusion SaaS applications into your data centers. Along with deploying the full OCI stack, you receive support for Oracle Fusion SaaS applications and gain a consistent experience with the same SLAs, APIs, and tools available in Oracle Cloud. 09:53 Lois: Ok and what about Cloud@Customer? Tijo: While Dedicated Region is ideal for large-scale enterprise needs, with full-stack OCI and SaaS, some organizations just require a lighter footprint. And that's where Cloud@Customer comes in. To begin with, we'll talk about Compute Cloud@Customer. It is a fully managed, rack-scale infrastructure that allows you to use the core OCI services, like the OCI compute, storage, and networking services, on-premises. With Compute Cloud@Customer, you can run applications and middleware systems to provide a consistent user experience and simplify IT administration across your distributed cloud architecture. You can run the same application stack everywhere and centrally manage it without needing experts in every location. 10:52 Nikita: Is there a way to make running your Oracle databases easier and more cost-effective? Tijo: That's why Oracle offers Oracle Exadata Cloud@Customer. 
Oracle Exadata Cloud@Customer combines the performance of Oracle Exadata with the simplicity, flexibility, and affordability of a managed database service delivered in customer data centers. It is the simplest way to move your current Oracle databases to the cloud, because it provides full compatibility with existing Exadata systems and the Exadata Database Service in Oracle Cloud Infrastructure. You can also run the fully managed Oracle Autonomous Database on Exadata Cloud@Customer, which combines all the benefits of Exadata with the simplicity of an autonomous cloud service. And when Compute Cloud@Customer is combined with Exadata Cloud@Customer, you can run full-stack applications completely in your own data center. Applications use the same high-performance OCI compute and database services you get in the cloud, so you don't have to change the way you architect or deploy them. 12:09 Nikita: So, what you’re saying is that Oracle Dedicated Region and Cloud@Customer bring OCI services into your data center. Tijo: Yes. They enable you to run applications faster using the same high-performance capabilities and autonomous operations. You get all of this while maintaining complete control of your data, so that you can address data residency, security, and connectivity concerns. 12:35 Lois: Ok. We've talked about where OCI runs. Now David, let’s get into what it actually does. David: OCI Compute lets you run business applications on demand without buying or managing physical servers. You choose the type and size of the virtual machine you want, and OCI handles the rest. Need more power for peak traffic? OCI can automatically add capacity and scale it back down after. In addition to virtual machines, bare metal servers are also available for ultra-high-performance jobs like simulations, AI, or high-speed trading. Every business stores data, but not all data needs the same kind of storage. 
OCI gives you options: fast block storage for your compute servers, which works just like a hard drive in your home computer; shared file storage for applications and microservices; large-scale object storage for backups, videos, or other data; and low-cost, long-term storage for object archives. The system even moves rarely used data to cheaper storage automatically. 13:51 Lois: Given Oracle’s expertise in databases, what are some of the database options businesses can access with OCI? David: Oracle Autonomous Database automatically patches, tunes, and scales itself. Need raw power? Use Oracle Exadata, or go open source with MySQL HeatWave, which can be used for real-time analytics. With these and many other database options, you get high performance, automation, and reliability, all on demand. 14:24 Nikita: With so many database options, how is everything kept connected and running smoothly on OCI? David: Every cloud service relies on a fast, secure network. OCI's Virtual Cloud Network acts like your own private data highway. You control how traffic flows between your apps, your people, and your regions. Need private, direct connections to your data center or office? Use OCI FastConnect to bypass the public internet. OCI networking provides high-speed performance with enterprise-grade security designed for global business. 15:05 Lois: And what security services does Oracle provide? David: OCI doesn't treat security as an optional add-on. When you sign up for OCI, your environment is isolated, your data is encrypted, and admin actions are logged. And there are so many security services: Identity and Access Management for handling users and permissions, Cloud Guard to detect threats and misconfigurations, OCI Vault for managing your encryption keys, Data Safe to monitor sensitive data access, as well as many others you can leverage to meet any government or business compliance requirement. All of these are included in OCI, no need to stitch together third-party tools. 
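The automatic storage tiering David mentions (rarely used data moving to cheaper storage) can be made concrete with a toy cost model in Python. The tier rates and the 30-day idle cutoff below are invented for illustration; OCI's actual auto-tiering rules and prices differ.

```python
# Toy model of storage auto-tiering: objects idle for at least a
# cutoff number of days are billed at a cheaper "infrequent" rate.
# Rates and the 30-day cutoff are hypothetical, for illustration only.

STANDARD_RATE = 0.025    # $ per GB-month (made-up figure)
INFREQUENT_RATE = 0.010  # $ per GB-month (made-up figure)
CUTOFF_DAYS = 30

def monthly_cost(objects):
    """objects: list of (size_gb, days_since_last_access) tuples."""
    total = 0.0
    for size_gb, idle_days in objects:
        rate = INFREQUENT_RATE if idle_days >= CUTOFF_DAYS else STANDARD_RATE
        total += size_gb * rate
    return round(total, 2)

if __name__ == "__main__":
    data = [(100, 2), (400, 90)]  # 100 GB hot, 400 GB cold
    # With tiering, the cold 400 GB is billed at the cheaper rate
    print(monthly_cost(data))
```

The point of the sketch is the savings mechanism: without tiering, all 500 GB would be billed at the standard rate; with it, the cold data costs a fraction of that, and the move happens without anyone acting on it.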
15:55 Lois: What if I want to see what’s going on in my environment? David: OCI has monitoring services for metrics, logging services for real-time insights, tracing for distributed applications, and alarms to notify you when things go sideways. All of these services are integrated, so you can see what matters when you need it, without all the noise. 16:23 Nikita: David, let’s say someone wants to build and deploy an app. What services does OCI offer them? David: OCI provides numerous developer services for your teams to build apps or digital tools. OCI DevOps supports automated builds and deployments. OCI Container Engine for Kubernetes helps run microservices. OCI Functions supports serverless code that runs on demand. All of this works with familiar languages and frameworks. In short, OCI gives developers what they need to build, test, and deliver quickly without having to manage infrastructure. 17:03 Nikita: How does OCI make it easier for companies to bring their apps together and use AI, even if they don’t have a dedicated AI team? David: Modern businesses run dozens of apps, and OCI helps you connect them with Oracle Integration Cloud. With OIC, you can integrate SaaS applications as well as on-premises apps and systems, automate business processes and workflows, route and transform messages, and even expose key services as APIs so partners and systems can interact securely. OCI integration is the glue that holds modern IT together. OCI also helps you turn data into decisions without needing an AI team. Use ready-made AI tools for language translation, image recognition, document understanding, speech transcription, and more. Or build your own models with the Data Science and Data Flow services. It's all designed to bring machine learning into reach for every business. 18:10 Lois: Thank you, David and Tijo, for joining us on this episode of the Oracle University Podcast. 
If you want to learn more about OCI, visit mylearn.oracle.com and search for the Cloud Business Jumpstart course. Nikita: Next week, we'll look at why businesses choose OCI and how they’re using OCI services to create real outcomes. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 18:38 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Best of 2025: Unlocking the Power of Oracle APEX and AI
12/23/2025
Lois Houston and Nikita Abraham explore how Oracle APEX integrates with AI to build smarter low-code applications. They are joined by Chaitanya Koratamaddi, Director of Product Management at Oracle, who explains the basics of Oracle APEX, its global adoption, and the challenges it addresses for businesses managing and integrating data. They also explore real-world use cases of AI within the Oracle APEX ecosystem. Oracle APEX: Empowering Low Code Apps with AI: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Communications and Adoption with Customer Success Services, and with me is Nikita Abraham, Team Lead: Editorial Services with Oracle University. Nikita: Hi everyone! We hope you’ve been enjoying these last few weeks as we’ve been revisiting our most popular episodes of the year. Today’s episode is the last one in this series and is a throwback to a conversation on APEX with Chaitanya Koratamaddi, Director of Product Management for Oracle APEX. 00:57 Lois: We began by asking Chaitanya what Oracle APEX is and why it’s so widely used. So, let’s jump right in! Chaitanya: Oracle APEX is the world's most popular enterprise low-code application platform. APEX enables you to build secure and scalable enterprise-scale applications with world-class features that can be deployed anywhere, cloud or on-premises. And with APEX, you can build applications 20 times faster with 100 times less code. 
APEX delivers the most productive way to develop and deploy mobile and web applications everywhere. 01:40 Lois: That’s impressive. So, what’s the adoption rate like for Oracle APEX? Chaitanya: As of today, there are 19 million-plus APEX applications created globally. 5,000-plus APEX applications are created on a daily basis, and there are 800,000-plus APEX developers worldwide. There are 60,000-plus customers in 150 countries across various industry verticals, and 75% of Fortune 500 companies use Oracle APEX. 02:19 Nikita: Wow, the numbers really speak for themselves, right? But Chaitanya, why are organizations adopting Oracle APEX at this scale? Or to put it differently, what’s the core business challenge that Oracle APEX is addressing? Chaitanya: From databases to all data, you know that the world is more connected and automated than ever. To drive new business value, organizations need to explore and exploit new sources of data that are generated from this connected world. That can be sounds, feeds, sensors, videos, images, and more. Businesses need to be able to work with all types of data and also make sure that it is available to be used together. Typically, businesses need to work on all data at a massive scale. For example, supply chains are no longer dependent just on inventory, demand, and order management signals. A manufacturer should be able to understand data describing global weather patterns and how it impacts their supply chains. Businesses need to pull in data from as many social sources as possible to understand how customer sentiment impacts product sales and corporate brands. Our customers need a data platform that ensures all this data works together seamlessly and easily. 04:00 Lois: So, you’re saying Oracle APEX is the platform that helps businesses manage and integrate data seamlessly. But data is just one part of the equation, right? Then there’s AI. How are the two related? 
Chaitanya: Before we start talking about Oracle AI, let's first talk about what customers are looking for and where they are struggling in their AI innovation. It all starts with data. For decades, working with data has largely involved dealing with structured data, whether it is your customer records in your CRM application or orders from your ERP database. Data was organized into databases and tables, and when you needed to find some insights in your data, all you needed to do was use stored procedures and SQL queries to deliver the answers. But today, the expectations are higher. You want to use AI to construct sophisticated predictions, find anomalies, make decisions, and even take actions autonomously. And the data is far more complicated. It is in an endless variety of formats, scattered all over your business. You need tools to find this data, consume it, and easily make sense of it all. And now capabilities like natural language processing, computer vision, and anomaly detection are becoming essential, just as SQL queries used to be. You need to use AI to analyze phone call transcripts, support tickets, or email complaints so you can understand what customers need and how they feel about your products, customer service, and brand. You may want to use a data source as noisy and unstructured as social media data to detect trends and identify issues in real time. Today, AI capabilities are essential to accelerate innovation, assess what's happening in your business, and most importantly, exceed the expectations of your customers. So, connecting your applications, data, and infrastructure allows everyone in your business to benefit from data. 06:54 Oracle University is proud to announce three brand new courses that will help your teams unlock the power of Redwood—the next-generation design system. Redwood enhances the user experience, boosts efficiency, and ensures consistency across Oracle Fusion Cloud Applications. 
Whether you're a functional lead, configuration consultant, administrator, developer, or IT support analyst, these courses will introduce you to the Redwood philosophy and its business impact. They’ll also teach you how to use Visual Builder Studio to personalize and extend your Fusion environment. Get started today by visiting mylearn.oracle.com. 07:35 Nikita: Welcome back! So, let’s focus on AI across the Oracle Cloud ecosystem. How does Oracle bring AI into the mix to connect applications, data, and infrastructure for businesses? Chaitanya: By embedding AI throughout the entire technology stack from the infrastructure that businesses run on through the applications for every line of business, from finance to supply chain and HR, Oracle is helping organizations pragmatically use AI to improve performance while saving time, energy, and resources. Our core cloud infrastructure includes a unique AI infrastructure layer based on our supercluster technology, leveraging the latest and greatest hardware and uniquely able to get the maximum out of the AI infrastructure technology for scenarios such as large language processing. Then there is generative AI and ML for data platforms. On top of the AI infrastructure, our database layer embeds AI in our products such as Autonomous Database. With Autonomous Database, you can leverage large language models to use natural language queries rather than writing SQL when interacting with the database. This enables you to achieve faster AI adoption in your application development. Businesses and their customers can use the Select AI natural language interface combined with Oracle Database AI Vector Search to obtain quicker, more intuitive insights into their own data. Then we have AI services. AI services are a collection of offerings, including generative AI with pre-built machine learning models that make it easier for developers to apply AI to applications and business operations. 
The models can be custom-trained for more accurate business results. 09:47 Nikita: And what specific AI services do we have at Oracle, Chaitanya? Chaitanya: We have Oracle Digital Assistant, Speech, Language, Vision, and Document Understanding. Then we have Oracle AI for Applications. Oracle delivers AI built for business, helping you make better decisions faster and empowering your workforce to work more effectively. By embedding classic and generative AI into its applications, Oracle lets Fusion Apps customers instantly access AI outcomes wherever they are needed without leaving the software environment they use every day to power their business. 10:32 Lois: Let’s talk specifically about APEX. How does APEX use the Gen AI and machine learning models in the stack to empower developers? How does it help them boost productivity? Chaitanya: Starting with APEX 24.1, you can choose your preferred large language models and leverage the native generative AI capabilities of APEX for AI assistants, prompt-based application creation, and more. You can also leverage native platform capabilities from OCI, like AI infrastructure and object storage. Oracle APEX running on autonomous infrastructure in Oracle Cloud leverages its unique native generative AI capabilities tuned specifically on your data. These language models are schema aware, data aware, and take into account the shape of information, enabling your applications to take advantage of large language models pre-trained on your unique data. You can give your users greater insights by leveraging native capabilities, including vector-based similarity search, content summary, and predictions. You can also incorporate powerful AI features to deliver personalized experiences and recommendations, process natural language prompts, and more by integrating directly with a suite of OCI AI services. 12:08 Nikita: Can you give us some examples of this? 
Chaitanya: You can leverage OCI Vision to interpret visual and text inputs, including image recognition and classification. Or you can use OCI Speech to transcribe and understand spoken language, making both image and audio content accessible and actionable. You can work with disparate data sources like JSON, spatial, graphs, vectors, and build AI capabilities around your own business data. So, low-code application development with APEX along with AI is a very powerful combination. 12:51 Nikita: What are some use cases of AI-powered Oracle APEX applications? Chaitanya: You can build APEX applications to include conversational chatbots. Your APEX applications can include image and object detection capability. Your APEX applications can include speech transcription capability. And in your applications, you can include code generation, that is, natural language to SQL conversion capability. Your applications can be powered by semantic search capability. Your APEX applications can include text generation capability. 13:30 Lois: So, there’s really a lot we can do! Thank you, Chaitanya, for joining us today. With that, we’re wrapping up this episode. We covered Oracle APEX, the key challenges businesses face when it comes to AI innovation, and how APEX and AI work together to give businesses an AI edge. Nikita: Yeah, and if you want to know more about Oracle APEX, visit mylearn.oracle.com and search for the Oracle APEX: Empowering Low Code Apps with AI course. Lois: We hope you’ve enjoyed revisiting some of our most popular episodes of the year. We always appreciate your feedback and suggestions, so do write to us at ou-podcast_ww@oracle.com. That’s ou-podcast_ww@oracle.com. We’re taking a break next week and will be back with a brand-new season of the Oracle University Podcast in January. Happy holidays, everybody! Nikita: Happy holidays! Until next time, this is Nikita Abraham... Lois: And Lois Houston, signing off! 
14:34 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
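Earlier in the episode, Chaitanya mentions the Select AI natural language interface of Autonomous Database. As a rough sketch of what that looks like in practice (the profile name and the question are hypothetical, and an AI profile is assumed to already exist via DBMS_CLOUD_AI.CREATE_PROFILE):

```sql
-- Point the session at an existing AI profile (the name here is hypothetical).
-- EXEC is SQL*Plus/SQLcl shorthand for an anonymous PL/SQL block.
EXEC DBMS_CLOUD_AI.SET_PROFILE('GENAI_PROFILE');

-- Ask a question in natural language; Select AI generates and runs the SQL.
SELECT AI how many orders were placed last month;

-- Or preview the generated SQL without executing it.
SELECT AI showsql how many orders were placed last month;
```

The exact keywords and profile setup are described in the Autonomous Database Select AI documentation; the point is that a natural language prompt stands in for a hand-written query.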
Best of 2025: Oracle Fusion Cloud Applications Foundations Training & Certifications
12/16/2025
In this episode of the Oracle University Podcast, hosts Lois Houston and Nikita Abraham dive into Oracle Fusion Cloud Applications and the new courses and certifications on offer. They are joined by Oracle Fusion Apps experts Patrick McBride and Bill Lawson who introduce the concept of Oracle Modern Best Practice (OMBP), explaining how it helps organizations maximize results by mapping Fusion Application features to daily business processes. They also discuss how the new courses educate learners on OMBP and its role in improving Fusion Cloud Apps implementations. OMBP: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ----------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Hello and welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Communications and Adoption with Customer Success Services. Lois: Hi everyone! Thanks for joining us for this Best of 2025 series, where we’re playing you four of our most popular episodes of the year. Nikita: Today’s episode is #3 of 4 and is a throwback to a conversation with our friends and Oracle Fusion Apps experts Patrick McBride and Bill Lawson. We chatted with them about the latest courses and certifications available for Oracle Fusion Cloud Applications, featuring Oracle Modern Best Practice and the Oracle Cloud Success Navigator. 01:08 Lois: We kicked things off by asking Patrick to help us understand what Oracle Modern Best Practice is, and the reasons behind its creation. 
Patrick: So, modern best practices are more than just a business process. They’re really about translating features and technology into actionable capabilities in our product. So, we've created these by curating industry-leading best practices we've collected from our customers over the years, and we ensure that the most modern technologies that we've built into the Fusion Application stack are represented inside of those business processes. Our goal is really to help you as customers improve your business operations by easily finding and applying those technologies to what you do every day. 01:53 Nikita: So, by understanding this modern best practice and the technology that enables it, you’re really unlocking the full potential of Fusion Apps. Patrick: Absolutely. So, the goal is that modern best practices make it really easy for customers, implementers, partners, to see the opportunity and take action. 02:13 Lois: That’s great. OK, so, let’s talk about implementations, Patrick. How does Oracle Modern Best Practice support customers throughout the lifecycle of an Oracle Fusion Cloud implementation? Patrick: What we found during many implementers’ journey with taking our solution and trying to apply it with customers is that customers come in with a long list of capabilities that they're asking us to replicate: what they've always done in the past. And what modern best practice is trying to do is help customers to reimagine the art of the possible…what's possible with Fusion by taking advantage of innovative features like AI, like IoT, like, you know, all of the other solutions that we built in to help you automate your processes to help you get the most out of the solution using the latest and greatest technology. So, if you're an implementer, there's a number of ways a modern best practice can help during an implementation. First is that reimagine exercise where you can help the customer see what's possible. And how we can do it in a better way. 
I think more importantly though, as you go through your implementation, many customers aren't able to get everything done by the time they have to go live. They have a list of things they’ve deferred, and modern best practice really establishes itself as a road map for success, so you can go back to it at completion and see what's left for the opportunity to take advantage of, and you can use it to track the continuous innovation that Oracle delivers with every release and see what's changed with that business process and how you can get the most out of it. 03:43 Nikita: Thanks, Patrick. That’s a great primer on OMBP that I’m sure everyone will find very helpful. Patrick: Thanks, Niki. We want our customers to understand the value of modern best practices so they can really maximize their investment in Oracle technology today and in the future as we continue to innovate. 03:59 Lois: Right. And the way we’re doing that is through new training and certifications that are closely aligned with OMBP. Bill, what can you tell us about this? Bill: Yes, sure. So, the new Oracle Fusion Applications Foundations training program is designed to help partners and customers understand Oracle Modern Best Practice and how they improve the entire implementation journey with Fusion Cloud Applications. As a learner, you will understand how to adhere to these practices and how they promise a greater level of success and customer satisfaction. So, whether you’re designing, or implementing, or going live, you’ll be able to get it right on day one. So, like Patrick was saying, these OMBPs are reimagined, industry-standard business processes built into Fusion Applications. So, you'll also discover how technologies like AI, Mobile, and Analytics help you automate tasks and make smarter decisions. You’ll see how data flows between processes and get tips for successful go-lives. 
So, the training we’re offering includes product demonstrations, key metrics, and design considerations to give you a solid understanding of modern best practice. It also introduces you to Oracle Cloud Success Navigator and how it can be leveraged and relied upon as a trusted source to guide you through every step of your cloud journey, so from planning, designing, and implementation, to user acceptance testing and post-go-live innovations with each quarterly new release of Fusion Applications and those new features. And then, the training also prepares you for Oracle Cloud Applications Foundations certifications. 05:31 Nikita: Which applications does the training focus on, Bill? Bill: Sure, so the training focuses on four key pillars of Fusion Apps and the associated OMBP with them. For Human Capital Management, we cover Human Resources and Talent Management. For Enterprise Resource Planning, it’s all about Financials, Project Management, and Risk Management. In Supply Chain Management, you’ll look at Supply Chain, Manufacturing, Inventory, Procurement, and more. And for Customer Experience, we’ll focus on Marketing, Sales, and Service. 05:59 Lois: That’s great, Bill. Now, who is the training and certification for? Bill: That’s a great question. So, it’s really for anyone who wants to get the most out of Oracle Fusion Cloud Applications. It doesn’t matter if you’re an experienced professional or someone new to Fusion Apps, this is a great place to start. It’s even recommended for professionals with experience in implementing other applications, like on-premise products. So, the goal is to give you a solid foundation in Oracle Modern Best Practice and show you how to use them to improve your implementation approach. We want to make it easy for anyone, whether you’re an implementer, a global process owner, or an IT team employee, to identify every way Fusion Applications can improve your organization. 
So, if you’re new to Fusion Apps, you’ll get a comprehensive overview of Oracle Fusion Applications and how to use OMBP to improve business operations. If you're already certified in Oracle Cloud Applications and have years of experience, you'll still benefit from learning how OMBP fits into your work. If you’re an experienced Fusion consultant who is new to Oracle Modern Best Practice processes, this is a good place to begin and learn how to apply them and the latest technology enablers during implementations. And, lastly, if you’re an on-premise or non-Fusion consultant looking to upskill to Fusion, this is a great way to begin acquiring the knowledge and skills needed to transition to Fusion and migrate your existing expertise. 07:29 Have you mastered the basics of AI? Are you ready to take your skills to the next level? Unlock the potential of advanced AI with our OCI Generative AI Professional course and certification that covers topics like Large Language Models, the OCI Generative AI Service, and building Q&A chatbots for real-world applications. Head over to mylearn.oracle.com and find out more. 07:58 Nikita: Welcome back! Bill, how long is it going to take me to complete this training program? Bill: So, we wanted to make this program detailed enough so our learners find it valuable, obviously. But at the same time, we didn’t want to make it too long. So, each course is approximately 5 hours or more, and provides folks with all the requisite knowledge they need to get started with Oracle Modern Best Practice and Fusion Applications. 08:22 Lois: Bill, is there anything that I need to know before I take this course? Are there any prerequisites? Bill: No, Lois, there are no prerequisites. Like I was saying, whether you’re fresh out of college or a seasoned professional, this is a great place to start your journey into Fusion Apps and Oracle Modern Best Practice. 08:37 Nikita: That’s great, you know, that there are no barriers to starting. 
Now, Bill, what can you tell us about the certification that goes along with this new program? Bill: The best part, Niki, is that it’s free. In fact, the training is also free. We have four courses and corresponding Foundation Associate–level certifications for Human Capital Management, Enterprise Resource Planning, Supply Chain Management, and Customer Experience. So, completing the training prepares you for an hour-long exam with 25 questions. It’s a pretty straightforward way to validate your expertise in Oracle Modern Best Practice and Fusion Apps implementation considerations. 09:11 Nikita: Ok. Say I take this course and certification. What can I do next? Where should my learning journey take me? Bill: So, you’re building knowledge and expertise with Fusion Applications, correct? So, once you take this training and certification, I recommend that you identify a product area you want to specialize in. So, if you take the Foundations training for HCM, you can dive deeper into specialized paths focused on implementing Human Resources, Workforce Management, Talent Management, or Payroll applications, for example. The same goes for other product areas. If you finish the certification for Foundations in ERP, you may choose to specialize in Finance or Project Management and get your professional certifications there as your next step. So, once you have this foundational knowledge, moving on to advanced learning in these areas becomes much easier. We offer various learning paths with associated professional-level certifications to deepen your knowledge and expertise in Oracle Fusion Cloud Applications. So, you can learn more about these courses by visiting oracle.com/education/training/ to find out more of what Oracle University has to offer. 10:14 Lois: Right. I love that we have a clear path from foundational-level training to more advanced levels. So, as your skills grow, we’ve got the resources to help you move forward. Nikita: That’s right, Lois. 
Thanks for walking us through all this, Patrick and Bill. We really appreciate you taking the time to join us on the podcast. Bill: Yeah, it’s always a pleasure to join you on the podcast. Thank you very much. Patrick: Oh, thanks for having me, Lois. Happy to be here. Nikita: We hope you enjoyed that conversation. Join us next week for another throwback episode. Until then, this is Nikita Abraham... Lois: And Lois Houston, signing off! 10:47 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Best of 2025: What is Multicloud?
12/09/2025
This week, hosts Lois Houston and Nikita Abraham are shining a light on multicloud, a game-changing strategy involving the use of multiple cloud service providers. Joined by Senior Manager of CSS OU Cloud Delivery Samvit Mishra, they discuss why multicloud is becoming essential for businesses, offering freedom from vendor lock-in and the ability to cherry-pick the best services. They also talk about Oracle's pioneering role in multicloud and its partnerships with Microsoft Azure, Google Cloud, and Amazon Web Services. Oracle Cloud Infrastructure Multicloud Architect Professional: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode. ------------------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Communications and Adoption with Customer Success Services, and with me is Nikita Abraham, Team Lead: Editorial Services with Oracle University. Nikita: Hi everyone! You’re listening to our Best of 2025 series, where over the next few weeks, we’re revisiting four of our most popular episodes of the year. Lois: Today is #2 of 4, and we’re throwing it back to an episode with Senior Manager of CSS OU Cloud Delivery Samvit Mishra. This episode was all about shining a light on multicloud, a game-changing strategy involving the use of multiple cloud service providers. 01:07 Nikita: That’s right, Lois. Oracle has been an early adopter of multicloud and a pioneer in multicloud services. So, we began that conversation by asking Samvit to explain what multicloud is and why someone would need more than one cloud provider. 
Samvit: Multicloud is a very simple, basic concept. It is the coordinated use of cloud services from more than one cloud service provider. 01:30 Nikita: But why would someone want to use more than one cloud service provider? Samvit: There are many reasons why a customer might want to leverage two or more cloud service providers. First, it addresses the very real concern of mitigating or avoiding vendor lock-in. By using multiple providers, companies can avoid being tied down to one vendor and maintain their flexibility. 01:53 Lois: That’s like not putting all your eggs in one basket, so to speak. Samvit: Exactly. Another reason is that customers want the best of breed. What that means is basically leveraging or utilizing the best product from one cloud service provider and pairing it with the best product from another cloud service provider. Getting a solution out of the combined products…out of the coordinated use of those services. 02:22 Nikita: So, it sounds like multicloud is becoming the new normal. And as we were saying before, Oracle was a pioneer in this space. But why did we embrace multicloud so wholeheartedly? Samvit: We recognized that our customers were already moving in this direction. An independent study from Flexera found that 89% of its subjects used multicloud. And we conducted our own study and came to similar numbers. Over 90% of our customers use two or more cloud service providers. HashiCorp, the big infrastructure-as-code company, came to similar numbers as well, 94%. They basically asked companies if multicloud helped them advance their business goals. And 94% said yes. And all this is very recent data. 03:13 Lois: Can you give us the backstory of Oracle’s entry into the multicloud space? Samvit: Sure. So back in 2019, Oracle and Microsoft joined forces and announced the interconnect service between Oracle Cloud Infrastructure and Microsoft Azure. 
The interconnect was between Oracle’s FastConnect and Microsoft Azure’s ExpressRoute. This was a big step, as it allowed for a direct connection between the two providers without needing a third party. And now we have several of our data centers interconnected already. So, out of the 48 regions, 12 of them are already interconnected. And more are coming. And you can very easily configure the interconnect. This interconnectivity guarantees low latency, high throughput, and predictable performance. And also, on the OCI side, there are no egress or ingress charges for your data. There's also a product called Oracle Database@Azure, where Oracle and Microsoft deliver Oracle Database services in Microsoft Azure data centers. 04:20 Lois: That’s exciting! And what are the benefits of this product? Samvit: The main advantage is the co-location. Being co-located with the Microsoft Azure data center offers you native integration between Azure and OCI resources. No manual configuration of a private interconnect between the two providers is needed. You're going to get microsecond latency between your applications and the Oracle Database. The OCI-native Exadata Database Service is available on Oracle Database@Azure. This enables you to get the highest level of Oracle Database performance, scalability, security, and availability. And your tech support can be provided either from Microsoft or from Oracle. 05:11 AI is being used in nearly every industry…healthcare, manufacturing, retail, customer service, transportation, agriculture, you name it! And it’s only going to get more prevalent and transformational in the future. It’s no wonder that AI skills are the most sought-after by employers. If you’re ready to dive into AI, check out the OCI AI Foundations training and certification that’s available for free! It’s the perfect starting point to build your AI knowledge. So, get going! Head on over to mylearn.oracle.com to find out more. 05:51 Nikita: Welcome back. 
Samvit, there have been some new multicloud milestones from OCI, right? Can you tell us about them? Samvit: That’s right, Niki. I am thrilled to share the latest news on Oracle’s multicloud partnerships. We now have agreements with Microsoft Azure, Google Cloud, and Amazon Web Services. So, as we were discussing earlier, with Azure, we have the Oracle Interconnect for Azure and Oracle Database@Azure. Now, with Google Cloud, we have the Oracle Interconnect for Google Cloud. And it is very similar to the Oracle Interconnect for Azure. With Google Cloud, we have physically interconnected data centers and they provide a sub-2 millisecond latency private interconnection. So, you can come in and provision virtual circuits going from Oracle FastConnect to Google Cloud Interconnect. And the best thing is that there are no egress or ingress charges for your data. The way it is structured is you have your Oracle Cloud Infrastructure on one side, with your virtual cloud network, your subnets, and your resources. And on the other side, you have your Google Cloud router with your virtual private cloud subnet and your resources interconnecting. You initiate the connectivity on the Google Cloud side, retrieve the service key and provide that service key to Oracle Cloud Infrastructure, and complete the interconnection on the OCI side. So, for example, our US East Ashburn interconnect will match with us-east4 on the Google Cloud side. 07:29 Lois: Now, wasn’t the other major announcement Oracle Database@Google Cloud? Tell us more about that, please. Samvit: With Oracle Database@Google Cloud, you can run your applications on Google Cloud and the database inside the Google Cloud platform. That's the Oracle Cloud Infrastructure database co-located in Google Cloud platform data centers. It allows you to run native integration between GCP and OCI resources with no manual configuration of private interconnect between these two cloud service providers. 
That means no FastConnect, no Interconnect because, again, the database is located in the Google Cloud data center. And you're going to get microsecond latency and the OCI native Exadata Database Service. So, you're going to gain the highest level of Oracle Database performance, scalability, security, and availability. 08:25 Lois: And how is the tech support managed? Samvit: The technical support is a collaboration between Google Cloud and Oracle Cloud Infrastructure. That means you can either have the technical support provided to completion by Google Cloud or by Oracle. One of us will provide you with an end-to-end solution. 08:43 Nikita: During CloudWorld last year, we also announced Oracle Database@AWS, right? Samvit: Yes, Niki. That’s where Oracle and Amazon Web Services deliver the Oracle Database service on Oracle Cloud Infrastructure in your AWS data center. This will provide you with native integration between AWS and OCI resources, with no manual configuration of private interconnect between AWS and OCI. And you're getting microsecond latency with the OCI-native Exadata Database Service. And again, as with Oracle Database@Google Cloud and Oracle Database@Azure, you're gaining the highest level of Oracle Database performance, scalability, security, and availability. And the technical support is provided by either AWS or Oracle all the way to completion. Now, Oracle Database@AWS is currently available in limited preview, with broader availability in the coming months as it expands to new regions to meet the needs of our customers. 09:49 Lois: That’s great. Now, how does Oracle fare when it comes to pricing, especially compared to our major cloud competitors? Samvit: Our pricing is pretty consistent. You’ll see that in all cases across the world, we have the less expensive solution for you and the highest performance as well. 10:06 Nikita: Let’s move on to some use cases, Samvit. How might a company use the multicloud setup? 
Samvit: Let’s start with the split-stack architecture between Oracle Cloud Infrastructure and Microsoft Azure. Like I was saying earlier, this partnership dates back to 2019. And basically, we eliminated the FastConnect partner from the middle. And this will provide you with high throughput, low latency, and very predictable performance, all of this on highly available links. These links are redundant, ensuring business continuity between OCI and Azure. And you can have your database on the OCI side and your application on the Microsoft Azure side, or the other way around. You can have SQL Server on Azure and the application running on Oracle Cloud Infrastructure. And this is very easy to configure. 10:55 Lois: It really sounds like Oracle is at the forefront of the multicloud revolution. Thanks so much, Samvit, for shedding light on this exciting topic. Samvit: It was my pleasure. Nikita: That's a wrap for today. To learn more about what we discussed, head over to mylearn.oracle.com and search for the Oracle Cloud Infrastructure Multicloud Architect Professional course. Lois: We hope you enjoyed that conversation. Join us next week for another throwback episode. Until then, this is Lois Houston... Nikita: And Nikita Abraham, signing off! 11:26 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Best of 2025: Introduction to MySQL
12/02/2025
Join hosts Lois Houston and Nikita Abraham as they explore the world of MySQL 8.4. Together with Perside Foster, a MySQL Principal Solution Engineer, they break down the fundamentals of MySQL, its wide range of applications, and why it’s so popular among developers and database administrators. This episode also covers key topics like licensing options, support services, and the various tools, features, and plugins available in MySQL Enterprise Edition. MySQL 8.4 Essentials: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ----------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Hello and welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Communications and Adoption with Customer Success Services. Lois: Hi there! If you’ve been following along with us, you’ll know we’ve had some really interesting seasons this year. We covered MySQL, Multicloud, APEX, GoldenGate, Artificial Intelligence, and Cloud Tech. Nikita: And we’ve had some pretty awesome special guests too. Do go back and check out those episodes if any of these topics interest you. Lois: As we close out the year, we thought this would be a good time to revisit some of our best episodes. So over the next few weeks, you’ll be able to listen to four of our most popular episodes of the year. 01:12 Nikita: Right, this is the best of the best according to you our listeners. 
Today’s episode is #1 of 4 and is a throwback to a discussion with MySQL Principal Solution Engineer Perside Foster on the Oracle MySQL ecosystem and its various components. We began by asking Perside to explain what MySQL is and why it’s so widely used. So, let’s get to it! Perside: MySQL is a relational database management system that organizes data into structured tables, rows, and columns for efficient programming and data management. MySQL is transactional by nature. When storing and managing data, actions such as selecting, inserting, updating, or deleting are required. MySQL groups these actions into a transaction. The transaction is saved only if every part completes successfully. 02:13 Lois: Now, how does MySQL work under the hood? Perside: MySQL is a high-performance database that uses its default storage engine, known as InnoDB. InnoDB helps MySQL handle complex operations and large data volumes smoothly. 02:33 Nikita: For the unversed, what are some day-to-day applications of MySQL? How is it used in the real world? Perside: MySQL works well with online transaction processing workloads. It handles transactions quickly and manages large volumes of transactions at once. Its low latency and high throughput make MySQL ideal for high-speed OLTP environments like banking or online shopping. MySQL not only stores data but also replicates it from a main server to several replicas. 03:14 Nikita: That's impressive! And what are the benefits of using MySQL? Perside: It improves data availability and load balancing, which is crucial for businesses that need up-to-date information. MySQL replication supports read scale-out by distributing queries across servers, which increases high availability. MySQL is the most popular database on the web. 03:44 Lois: And why is that? What makes it so popular? What sets it apart from the other database management systems? Perside: First, it is a relational database management system that supports SQL. 
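The all-or-nothing behavior Perside describes can be sketched in a few lines of Python. As a stand-in for a MySQL connection, this sketch uses the standard library's sqlite3 module (a real MySQL connector's API differs slightly); the table and values are invented for illustration.

```python
import sqlite3

# In-memory database standing in for a MySQL server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

# A transfer is one transaction: every step must succeed, or none is saved.
try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
        cur = conn.execute("UPDATE accounts SET balance = balance + 50 WHERE name = 'zoe'")
        if cur.rowcount == 0:  # no such account: abort the whole transaction
            raise ValueError("transfer target not found")
except ValueError:
    pass  # the failed transfer was rolled back

# Alice's debit was undone along with the failed step.
balance = conn.execute("SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0]
print(balance)  # 100
```

Because the second UPDATE matched no row, the first UPDATE was rolled back too, which is exactly the "saved only if every part completes successfully" guarantee.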
It also works as a document store, enabling the creation of both SQL and NoSQL applications without the need for separate NoSQL databases. Additionally, MySQL offers advanced security features to protect data integrity and privacy. It also uses tablespaces for better disk space management. This gives database administrators total control over their data storage. MySQL is simple, solid in its reliability, and secure by design. It is easy to use and ideal for both beginners and professionals. MySQL is proven at scale by efficiently handling large data volumes and high transaction rates. MySQL is also open source. This means anyone can download and use it for free. Users can modify the MySQL software to meet their needs. However, it is governed by the GNU General Public License, or GPL. GPL outlines specific rules for its use. MySQL offers two major editions. For developers and small teams, the Community Edition is available for free and includes all of the core features needed. For large enterprises, the Commercial Edition provides advanced features, management tools, and dedicated technical support. 05:42 Nikita: Ok. Let’s shift focus to licensing. Who is it useful for? Perside: MySQL licensing is essential for independent software vendors. They're called ISVs. And original manufacturers, they're called OEMs. This is because these companies often incorporate MySQL code into their software products or hardware systems to boost the functionality and performance of their products. MySQL licensing is equally important for value-added resellers. We call those VARs. And also, it's important for other distributors. These groups bundle MySQL with other commercially licensed software to sell as part of their product offering. The GPL v2 license might suit Open Source projects that distribute their products under that license. 06:46 Lois: But what if some independent software vendors, original manufacturers, or value-added resellers don’t want to create Open Source products? 
They don’t want their source to be publicly available and they want to keep it private. What happens then? Perside: This is why Oracle provides a commercial licensing option. This license allows businesses to use MySQL in their products without having to disclose their source code as required by GPL v2. 07:17 Nikita: I want to bring up the robust support services that are available for MySQL Enterprise. What can we expect in terms of support, Perside? Perside: MySQL Enterprise Support provides direct access to the MySQL Support team. This team consists of experienced MySQL developers, who are experts in databases. They understand the issues and challenges their customers face because they, too, have personally tackled these issues and challenges. This support service operates globally and is available in 29 languages. So no matter where customers are located, Oracle Support provides assistance, most likely in their preferred language. MySQL Enterprise Support offers regular updates and hot fixes to ensure that MySQL customer systems stay current with the latest improvements and security patches. MySQL Support is available 24 hours a day, 7 days a week. This ensures that whenever there is an issue, Oracle Support can provide the needed help without any delay. There are no restrictions on how many times customers can receive help from the team because MySQL Enterprise Support allows for unlimited incidents. MySQL Enterprise Support goes beyond simply fixing issues. It also offers guidance and advice. Whether customers require assistance with performance tuning or troubleshooting, the team is there to support them every step of the way. 09:11 Lois: Perside, can you walk us through the various tools and advanced features that are available within MySQL? Maybe we could start with MySQL Shell. Perside: MySQL Shell is an integrated client tool used for all MySQL database operations and administrative functions. 
It's a top choice among MySQL users for its versatility and powerful features. MySQL Shell offers multi-language support for JavaScript, Python, and SQL. These naturally scriptable languages make coding flexible and efficient. They also allow developers to use their preferred programming language for everything, from automating database tasks to writing complex queries. MySQL Shell supports both document and relational models. Whether your project needs the flexibility of NoSQL’s document-oriented structures or the structured relationships of traditional SQL tables, MySQL Shell manages these different data types without any problems. Another key feature of MySQL Shell is its full access to both development and administrative APIs. This ability makes it easy to automate complex database operations and do custom development directly from MySQL Shell. MySQL Shell excels at DBA operations. It has extensive tools for database configuration, maintenance, and monitoring. These tools not only improve the efficiency of managing databases, but they also reduce the possibility of human error, making MySQL databases more reliable and easier to manage. 11:21 Nikita: What about the MySQL Server tool? I know that it is the core of the MySQL ecosystem and is available in both the community and commercial editions. But how does it enhance the MySQL experience? Perside: It connects with various devices, applications, and third-party tools to enhance its functionality. The server manages both SQL for structured data and NoSQL for schemaless applications. It has many key components. The parser, which interprets SQL commands. The optimizer, which ensures efficient query execution. And then the query cache and buffer pools, which reduce disk usage and speed up access. InnoDB, the default storage engine, maintains data integrity and supports robust transaction and recovery mechanisms. MySQL is designed for scalability and reliability. 
With features like replication and clustering, it distributes data, manages more users, and ensures consistent uptime. 12:44 Nikita: What role does MySQL Enterprise Edition play in MySQL server’s capabilities? Perside: MySQL Enterprise Edition improves MySQL server by adding a suite of commercial extensions. These exclusive tools and services are designed for enterprise-level deployments and challenging environments. These tools and services include secure online backup. It keeps your data safe with efficient backup solutions. Real-time monitoring provides insight into database performance and health. The seamless integration connects easily with existing infrastructure, improving data flow and operations. Then you have the 24/7 expert support. It offers round-the-clock assistance to optimize and troubleshoot your databases. 13:48 Lois: That's an extensive list of features. Now, can you explain what MySQL Enterprise plugins are? I know they’re specialized extensions that boost the capabilities of MySQL server, tools, and services, but I’d love to know a little more about how they work. Perside: Each plugin serves a specific purpose. The firewall plugin protects against SQL injection by allowing only pre-approved queries. The audit plugin logs database activities, tracking who accesses databases and what they do. The encryption plugin secures data at rest, protecting it from unauthorized access. Then we have the authentication plugin, which integrates with systems like LDAP and Active Directory for access control. Finally, the thread pool plugin optimizes performance in high-load situations by effectively controlling how many execution threads are used and how long they run. These plugins and tools are included in the MySQL Enterprise Edition suite. 15:16 Join the Oracle University Learning Community and tap into a vibrant network of over 1 million members, including Oracle experts and fellow learners. 
This dynamic community is the perfect place to grow your skills, connect with likeminded learners, and celebrate your successes. As a MyLearn subscriber, you have access to engage with your fellow learners and participate in activities in the community. Visit community.oracle.com/ou to check things out today! 15:47 Nikita: Welcome back! We’ve been going through the various MySQL tools, and another important one is MySQL Enterprise Backup, right? Perside: MySQL Enterprise Backup is a powerful tool that offers online, non-blocking backup and recovery. It makes sure databases remain available and perform optimally during the backup process. It also includes advanced features, such as incremental and differential backup. Additionally, MySQL Enterprise Backup supports compression to reduce backup size and encryption to keep data secure. One of the standout capabilities of MySQL Enterprise Backup is its seamless integration with media management software, or MMS. This integration simplifies the process of managing and storing backups, ensuring that data is easily accessible and secure. Then we have MySQL Workbench Enterprise. It enhances database development and design with robust tools for creating and managing your diagrams and ensuring proper documentation. It simplifies data migration with powerful tools that make it easy to move databases between platforms. For database administration, MySQL Workbench Enterprise offers efficient tools for monitoring, performance tuning, user management, and backup and recovery. MySQL Enterprise Monitor is another tool. It provides real-time MySQL performance and availability monitoring. It helps track a database's health and performance. It visually finds and fixes problem queries. This is to make it easy to identify and address performance issues. It offers MySQL best-practice advisors to guide users in maintaining optimal performance and security. Lastly, MySQL Enterprise Monitor is proactive and provides forecasting. 
18:24 Lois: Oh that’s really going to help users stay ahead of potential issues. That’s fantastic! What about the Oracle Enterprise Manager Plugin for MySQL? Perside: This one offers availability and performance monitoring to make sure MySQL databases are running smoothly and efficiently. It provides configuration monitoring. This is to help keep track of the database settings and configuration. Finally, it collects all available metrics to provide comprehensive insight into the database operation. 19:03 Lois: Are there any tools designed to handle higher loads and improve security? Perside: MySQL Enterprise Thread Pool improves scalability as concurrent connections grow. It makes sure the database can handle increased loads efficiently. MySQL Enterprise Authentication is another tool. This one integrates MySQL with existing security infrastructures. It provides robust security solutions. It supports Linux PAM, LDAP, Windows, Kerberos, and even FIDO for passwordless authentication. 19:46 Nikita: Do any tools offer benefits like customized logging, data protection, and database security? Perside: MySQL Enterprise Audit provides out-of-the-box logging of connections, logins, and queries in XML or JSON format. It also offers simple to fine-grained policies for filtering and log rotation. This is to ensure comprehensive and customizable logging. MySQL Enterprise Firewall detects and blocks out-of-policy database transactions. This is to protect your data from unauthorized access and activities. We also have MySQL Enterprise Asymmetric Encryption. It uses MySQL encryption libraries for key management, signing, and verifying data. It ensures data stays secure during handling. MySQL Transparent Data Encryption, another tool, provides data-at-rest encryption within the database. The Master Key is stored outside of the database in a KMIP 1.1-compliant Key Vault. That is to improve database security. 
Finally, MySQL Enterprise Masking offers masking capabilities, including string masking and dictionary replacement. This ensures sensitive data is protected by obscuring it. It also provides random data generators, such as range-based, payment card, email, and social security number generators. These tools help create realistic but anonymized data for testing and development. 21:56 Lois: Can you tell us about HeatWave, the MySQL cloud service? We’re going to have a whole episode dedicated to it soon, but just a quick introduction for now would be great. Perside: MySQL HeatWave offers a fully managed MySQL service. It provides deployment, backup and restore, high availability, resizing, and read replicas, all the features you need for efficient database management. This service is a powerful union of Oracle Cloud Infrastructure and MySQL Enterprise Edition 8. It combines robust performance with top-tier infrastructure. With MySQL HeatWave, your systems are always up to date with the latest security fixes, ensuring your data is always protected. Plus, it supports both OLTP and analytics/ML use cases, making it a versatile solution for diverse database needs. 23:05 Nikita: So to wrap up, what are your key takeaways when it comes to MySQL? Perside: When you use MySQL, here is the bottom line. MySQL Enterprise Edition delivers unmatched performance at scale. It provides advanced monitoring and tuning capabilities to ensure efficient database operation, even under heavy loads. Plus, it provides assurance and immediate help when needed, allowing you to depend on expert support whenever an issue arises. Regarding total cost of ownership, TCO, this edition significantly reduces the risk of downtime and enhances productivity. This leads to significant cost savings and improved operational efficiency. On the matter of risk, MySQL Enterprise Edition addresses security and regulatory compliance. This is to make sure your data meets all necessary standards. 
Additionally, it provides direct contact with the MySQL team for expert guidance. In terms of DevOps agility, it supports automated scaling and management, as well as flexible real-time backups, making it ideal for agile development environments. Finally, concerning customer satisfaction, it enhances application performance and uptime, ensuring your customers have a reliable and smooth experience. 25:02 Lois: Thank you so much, Perside. This is really insightful information. To learn more about all the support services that are available, visit support.oracle.com. This is the central hub for all MySQL Enterprise Support resources. Nikita: Yeah, and if you want to know about the key commercial products offered by MySQL, visit mylearn.oracle.com and search for the MySQL 8.4: Essentials course. Lois: We hope you enjoyed that conversation. Join us next week for another throwback episode. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 25:39 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Understanding Security Risks and Threats in the Cloud - Part 1
11/18/2025
This week, Lois Houston and Nikita Abraham are joined by Principal OCI Instructor Orlando Gentil to explore what truly keeps data safe, and what puts it at risk. They discuss the CIA triad, dive into hashing and encryption, and shed light on how cyber threats like malware, phishing, and ransomware try to sneak past defenses. Cloud Tech Jumpstart: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hey everyone! Last week, we discussed how you can keep your data safe with authentication and authorization. Today, we’ll talk about various security risks that could threaten your systems. 00:48 Lois: And to help us understand this better, we have Orlando Gentil, Principal OCI Instructor, back with us. Orlando, welcome back! Let’s start with the big picture—why is security such a crucial part of our digital world today? Orlando: Whether you are dealing with files stored on a server or data flying across the internet, one thing is always true—security matters. In today's digital world, it's critical to ensure that data stays private, accurate, and accessible only to the right people. 01:20 Nikita: And how do we keep data private, secure, and unaltered? Is there a security framework that we can use to make sense of different security practices? Orlando: The CIA triad defines three core goals of information security. CIA stands for confidentiality, integrity, and availability. 
Confidentiality is about keeping data private. Only authorized users should be able to access sensitive information. This is where encryption plays a huge role. Integrity means ensuring that the data hasn't been altered, whether accidentally or maliciously. That's where hashing helps. You can compare a stored hash of data to a new hash to make sure nothing's changed. Availability ensures that data is accessible when it's needed. This includes protections like system redundancy, backups, and anti-DDoS mechanisms. Encryption and hashing directly support confidentiality and integrity. And they indirectly support availability by helping keep systems secure and resilient. 02:31 Lois: Let’s rewind a bit. You spoke about something called hashing. What does that mean? Orlando: Hashing is a one-way transformation. You feed in data and it produces a unique fixed-length string called a hash. The important part is the same input always gives the same output, but you cannot go backward and recover the original data from the hash. It's commonly used for verifying integrity. For example, to check if a file has changed or a message was altered in transit. Hashing is also used in password storage. Systems don't store actual passwords, just their hashes. When you log in, the system hashes what you type and compares it to the stored hash. If they match, you're in. But your actual password was never stored or revealed. So hashing isn't about hiding data, it's about proving it hasn't changed. So, while hashing is all about protecting integrity, encryption is the tool we use to ensure confidentiality. 03:42 Nikita: Right, the C in CIA. And how does it do that? Orlando: Encryption takes readable data, also known as plaintext, and turns it into something unreadable called ciphertext using a key. To get the original data back, you need to decrypt it using the right key. This is especially useful when you are storing sensitive files or sending data across networks. 
If someone intercepts the data, all they will see is gibberish, unless they have the correct key to decrypt it. Unlike hashing, encryption is reversible as long as you have the right key. 04:23 Lois: And are there different types of encryption that serve different purposes? Orlando: Symmetric and asymmetric encryption. With symmetric encryption, the same key is used to both encrypt and decrypt the data. It's fast and great for securing large volumes of data, but the challenge lies in safely sharing the key. Asymmetric encryption solves that problem. It uses a pair of keys: a public key that anyone can use to encrypt data, and a private key that only the recipient holds to decrypt it. This method is more secure for communications, but also slower and more resource-intensive. In practice, systems often use both: asymmetric encryption to exchange a secure symmetric key, and then symmetric encryption for the actual data transfer. 05:21 Nikita: Orlando, where is encryption typically used in day-to-day activities? Orlando: Data can exist in two primary states: at rest and in transit. Data at rest refers to data stored on disk, in databases, backups, or object storage. It needs protection from unauthorized access, especially if a device is stolen or compromised. This is where things like full disk encryption or encrypted storage volumes come in. Data in transit is data being sent from one place to another, like a user logging into a website or an API sending information between services. To protect it from interception, we use protocols like TLS, SSL, VPNs, and encrypted communication channels. Both forms of data need encryption, but the strategies and threats can differ. 06:19 Lois: Can you do a quick comparison between hashing and encryption? Orlando: Hashing is one way. It's used to confirm that data hasn't changed. Once data is hashed, it cannot be reversed. It's perfect for use cases like password storage or checking the integrity of files. 
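Both uses of hashing that Orlando mentions, integrity checks and password storage, can be demonstrated with Python's standard hashlib module:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """One-way hash: a fixed-length fingerprint of the input."""
    return hashlib.sha256(data).hexdigest()

# Same input always produces the same hash, and the length is fixed
# regardless of input size.
assert sha256_hex(b"hello") == sha256_hex(b"hello")
print(len(sha256_hex(b"hello")))  # 64 hex characters

# Even a tiny change produces a completely different hash, which is
# how integrity checks detect that a file or message was altered.
assert sha256_hex(b"hello") != sha256_hex(b"hello!")

# Password check as described above: the system keeps only the hash,
# then hashes what the user types and compares.
stored_hash = sha256_hex(b"s3cret")           # what is kept on disk
login_ok = sha256_hex(b"s3cret") == stored_hash
print(login_ok)  # True
```

Note that real systems use salted, deliberately slow password hashes (for example, hashlib.pbkdf2_hmac) rather than a single unsalted SHA-256; the plain SHA-256 here just illustrates the one-way, deterministic behavior.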
Encryption, on the other hand, is two-way. It's designed to protect data from unauthorized access. You encrypt the data so only someone with the right key can decrypt and read it. That's what makes it ideal for keeping files, messages, or network traffic confidential. Both are essential for different reasons. Hashing for trust and encryption for privacy. 07:11 Adopting a multicloud strategy is a big step towards future-proofing your business and we’re here to help you navigate this complex landscape. With our suite of courses, you'll gain insights into network connectivity, security protocols, and the considerations of working across different cloud platforms. Start your journey to multicloud today by visiting mylearn.oracle.com. 07:39 Nikita: Welcome back! When we talk about cybersecurity, we hear a lot about threats and vulnerabilities. But what do those terms really mean? Orlando: In cybersecurity, a threat is a potential danger and a vulnerability is a weakness an asset possesses that a threat can exploit. When a threat and a vulnerability align, it creates a risk of harm. A threat actor then performs an exploit to leverage that vulnerability, leading to undesirable impact, such as data loss or downtime. After an impact, the focus shifts to response and recovery to mitigate damage and restore operations. 08:23 Lois: Ok, let’s zero in on vulnerabilities. What counts as a vulnerability, and what categories do attackers usually target first? Orlando: Software and hardware bugs are simply unintended flaws in a system's core programming or design. Misconfigurations arise when systems aren't set up securely, leaving gaps. Weak passwords and authentication provide easy entry points for attackers. A lack of encryption means sensitive data is openly exposed. Human error involves mistakes made by people that unintentionally create security risks. 
Understanding these common vulnerability types is the first step in building more resilient and secure systems, as they represent the critical entry points attackers leverage to compromise systems and data. By addressing these, we can significantly reduce our attack surface and enhance overall security. 09:28 Nikita: Can we get more specific here? What are the most common cybersecurity threats that go after vulnerabilities in our systems and data? Orlando: Malware is a broad category, including viruses, worms, Trojans, and spyware. Its goal is to disrupt or damage systems. Ransomware has been on the rise, targeting everything from hospitals to government agencies. It locks your files and demands a ransom, usually in cryptocurrency. Phishing relies on deception. Attackers impersonate legitimate contacts to trick users into clicking malicious links or giving up credentials. Insider threats are particularly dangerous because they come from within: employees, contractors, or even former staff with lingering access. Lastly, DDoS attacks aim to make online services unavailable by overwhelming them with traffic, often using a botnet—a network of compromised devices. 10:34 Lois: Orlando, can you walk us through how each of these common cybersecurity threats works? Orlando: Malware, short for malicious software, is one of the oldest and most pervasive types of threats. It comes in many forms, each with unique methods and objectives. A virus typically attaches itself to executable files and documents and spreads when those are shared or opened. Worms are even more dangerous in networked environments as they self-replicate and spread without any user action. Trojans deceive users by posing as harmless or helpful applications. Once inside, they can steal data or open backdoors for remote access. Spyware runs silently in the background, collecting sensitive information like keystrokes or login credentials. 
Adware might seem like just an annoyance, but it can also track your activity and compromise privacy. Finally, rootkits are among the most dangerous because they operate at a low system level, often evading detection tools and allowing attackers long-term access. In practice, malware can be a combination of these types. Attackers often bundle different techniques to maximize damage. 12:03 Nikita: And what about ransomware? Why is it such a serious threat? Orlando: Ransomware has become one of the most disruptive and costly types of cyber attacks in recent years. Its goal is simple but devastating: to encrypt your data and demand payment in exchange for access. It usually enters through phishing emails, insecure remote desktop protocol ports, or known vulnerabilities. Once inside, it often spreads laterally across the network before activating, ensuring maximum impact. There are two main forms. Crypto ransomware encrypts user files, making them inaccessible. Locker ransomware goes a step further, locking the entire system interface, preventing any use at all. Victims are then presented with a ransom note, typically requesting cryptocurrency payment in exchange for the decryption key. What makes ransomware so dangerous is not just the encryption itself, but the pressure it creates. Healthcare institutions, for instance, can't afford the downtime, making them prime targets. 13:18 Lois: Wow. Thanks, Orlando, for joining us today. Nikita: Yeah, thanks Orlando. We’ll be back next week with more on how you use security models to tackle these threats head-on. And if you want to learn about the topics we covered today, go to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. Until next time, this is Nikita Abraham… Lois: And Lois Houston, signing off! 13:42 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. 
We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Networking & Security Essentials
11/11/2025
How do all your devices connect and stay safe in the cloud? In this episode, Lois Houston and Nikita Abraham talk with OCI instructors Sergio Castro and Orlando Gentil about the basics of how networks work and the simple steps that help protect them. You’ll learn how information gets from one place to another, why tools like switches, routers, and firewalls are important, and what goes into keeping access secure. The discussion also covers how organizations decide who can enter their systems and how they keep track of activity. Cloud Tech Jumpstart: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! In the last episode, we spoke about local area networks and domain name systems. Today, we’ll continue our conversation on the fundamentals of networking, covering a variety of important topics. 00:50 Lois: That’s right, Niki. And before we close, we’ll also touch on the basics of security. Joining us today are two OCI instructors from Oracle University: Sergio Castro and Orlando Gentil. So glad to have you both with us, guys. Sergio, with so many users and devices connecting to the internet, how do we make sure everyone can get online? Can you break down what Network Address Translation, or NAT, does to help with this? Sergio: The world population is bigger than 4.3 billion people, and 4.3 billion is roughly the number of IPv4 addresses available. 
That means that if we were to interconnect every single human into the internet, we would not have enough addresses. And not all of us are connected to the internet, but those of us who are, you know that we have more than one device at our disposal. We might have a computer, a laptop, mobile phones, you name it. And all of them need IP addresses. So that's why Network Address Translation exists: it translates your communication from a private IP to a public IP address. That's the main purpose: translate. 02:05 Nikita: Okay, so with NAT handling the IP translation, how do we ensure that the right data reaches the right device within a network? Or to put it differently, what directs external traffic to specific devices inside a network? Sergio: Port forwarding works in a reverse way to Network Address Translation. So, let's assume that this PC here, you want to turn it into a web server. So, people from the outside, customers from the outside of your local area network, will access your PC web server. Let's say that it's an online store. Now all of these devices are using the same public IP address. So how would the traffic be routed specifically to this PC and not to the camera or to the laptop, which is not a web server, or to your IP TV? So, this is where port forwarding comes into play. Basically, whenever the router detects a request coming in on that port, it will forward that request to your PC. It will allow any external device that wants to access this particular web server to establish a session. So, it's a permission that you're allowing to this PC and only to this PC. The other devices will still be isolated from that traffic. That's what port forwarding is. 03:36 Lois: Sergio, let’s talk about networking devices. What are some of the key ones, and what role do they play in connecting everything together? Sergio: There's plenty of devices for interconnectivity. 
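The port-forwarding lookup Sergio describes, one public IP with each inbound port mapped to a single private device, can be sketched as a small table in Python. The addresses and ports below are invented for illustration:

```python
# Hypothetical port-forwarding table for a router with one public IP.
# Inbound destination port -> private device that should receive the traffic.
forwarding_table = {
    80: "192.168.1.10",   # the PC acting as a web server (HTTP)
    443: "192.168.1.10",  # the same PC, HTTPS
}

def route_inbound(port):
    """Return the private host for an inbound request, or None to drop it.

    Only ports with an explicit rule are reachable from outside; every
    other device on the LAN stays isolated, as Sergio describes.
    """
    return forwarding_table.get(port)

print(route_inbound(80))   # 192.168.1.10, forwarded to the web server PC
print(route_inbound(554))  # None, the camera is not exposed, so traffic is dropped
```

A real router also rewrites the destination address and tracks the session state; this sketch only shows the lookup that decides which internal device a given port belongs to.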
These are devices that are different from the actual compute instances, virtual machines, cameras, and IPTV. These are for interconnecting networks. And they have several functionalities. 03:59 Nikita: Yeah, I often hear about a default gateway. Could you explain what that is and why it’s essential for a network to function smoothly? Sergio: A gateway is basically where a web browser goes and asks for a service from a web server. We have a gateway in the middle that will take us to that web server. So that's basically the router. A gateway doesn't necessarily have to be a router. It depends on what device you're addressing in a particular configuration. So, a gateway is a connectivity device that connects two different networks. That's basically the functionality. 04:34 Lois: Ok. And when does one use a default gateway? Sergio: When you do not have a specific route that is targeting a specific router. You might have more than one router in your network, connecting to different other local area networks. You might have a route that will take you to local area network B. And then you might have another router that is connecting you to the internet. So, if you don't have a specific route that will take you to local area network B, then it's going to be utilizing the default gateway. It directs data packets to other networks when no specific route is known. In general terms, the default gateway, again, doesn't have to be a router. It can be any device. 05:22 Nikita: Could you give us a real-world example, maybe comparing a few of these devices in action, so we can see how they work together in a typical network? Sergio: For example, we have the hub. And the hub operates at the physical layer or layer 1. And then we have the switch. And the switch operates at layer 2. And we also have the router. And the router operates at layer 3. So, what's the big difference between these devices and the layers that they operate in?
So, hubs work in the physical layer of the OSI model. And basically, a hub is for connecting multiple devices and making them act as a single network segment. Now, the switch operates at the data link layer and is used for filtering content by reading the addresses of the source and destination. And these are the MAC addresses that I'm talking about. So, it reads where the packet is coming from and where it is going to at the local area network level. It connects multiple network segments. And each port is connected to a different segment. And the router is used for routing outside of your local area network. It performs traffic directing functions on the internet: a data packet is typically forwarded from one router to another through different networks until it reaches its destination node. The router takes data from one router to another, and it works at the TCP/IP network layer or internet layer. 07:22 Lois: Sergio, what kind of devices help secure a network from external threats? Sergio: The network firewall is used as a security device that acts as a barrier between a trusted internal network and an untrusted external network, such as the internet. The network firewall is the first line of defense for traffic that passes in and out of your network. The firewall examines traffic to ensure that it meets the security requirements set by your organization, allowing or blocking traffic based on set criteria. And the main benefit is that it improves security for access management and network visibility. 08:10 Are you keen to stay ahead in today's fast-paced world? We’ve got your back! Each quarter, Oracle rolls out game-changing updates to its Fusion Cloud Applications.
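Sergio's rule that the default gateway is used "when no specific route is known" is exactly longest-prefix matching in a routing table. Here is a minimal sketch using Python's standard ipaddress module; the networks and next hops are invented:

```python
import ipaddress

# Toy routing table: (destination network, next hop). Entries are invented.
routes = [
    (ipaddress.ip_network("192.168.1.0/24"), "local"),      # our own LAN
    (ipaddress.ip_network("10.20.0.0/16"), "192.168.1.2"),  # router to LAN B
    (ipaddress.ip_network("0.0.0.0/0"), "192.168.1.1"),     # default gateway
]

def next_hop(destination):
    """Longest-prefix match: the most specific matching route wins; the
    0.0.0.0/0 default gateway only matches when nothing else does."""
    candidates = [(net, hop) for net, hop in routes
                  if ipaddress.ip_address(destination) in net]
    best = max(candidates, key=lambda item: item[0].prefixlen)
    return best[1]

print(next_hop("192.168.1.55"))  # stays on the local network
print(next_hop("10.20.3.4"))     # specific route to LAN B's router
print(next_hop("8.8.8.8"))       # no specific route -> default gateway
```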
And to make sure you’re always in the know, we offer New Features courses that give you an insider’s look at all of the latest advancements. Don't miss out! Head over to mylearn.oracle.com to get started. 08:36 Nikita: Welcome back! Sergio, how do networks manage who can and can’t enter based on certain permissions and criteria? Sergio: The access control list is like the gatekeeper into your local area network. Think about the access control list as the visa on your passport, assuming that the country is your local area network. Now, when you have a passport, you might get a visa that allows you to go into a certain country. So the access control list is a list of rules that defines which users, groups, or systems have permission to access specific resources on your network. It is a gatekeeper that is going to specify who's allowed and who's denied. If you don't have a visa to go into a specific country, then you are denied. Similarly here, if you are not part of the rules, if the service that you're trying to access is not part of the rules, then you cannot get in. 09:37 Lois: That’s a great analogy, Sergio. Now, let’s turn our attention to one of the core elements of network security: authentication and authorization. Orlando, can you explain why authentication and authorization are such crucial aspects of a secure cloud network? Orlando: Security is one of the most critical pillars in modern IT systems. Whether you are running a small web app or managing global infrastructure, every secure system starts by answering two key questions: Who are you, and what are you allowed to do? This is the essence of authentication and authorization. Authentication is the first step in access control. It's how a system verifies that you are who you claim to be. Think of it like showing your driver's license at a security checkpoint. The guard checks your photo and personal details to confirm your identity.
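Sergio's visa analogy maps onto how an ACL is typically evaluated: rules are checked in order, the first match wins, and anyone who matches no rule is denied. A toy sketch with made-up rules and addresses:

```python
import ipaddress

# Toy access control list: ordered rules, first match wins, default deny.
# Rule fields, networks, and ports are illustrative.
acl = [
    {"action": "allow", "source": "10.0.0.0/8",     "port": 443},
    {"action": "allow", "source": "192.168.1.0/24", "port": 22},
    {"action": "deny",  "source": "0.0.0.0/0",      "port": None},  # catch-all
]

def check(source_ip, port):
    """Walk the rules in order, like a gatekeeper checking the visa list."""
    for rule in acl:
        in_net = ipaddress.ip_address(source_ip) in ipaddress.ip_network(rule["source"])
        port_ok = rule["port"] is None or rule["port"] == port
        if in_net and port_ok:
            return rule["action"]
    return "deny"  # implicit deny if no rule matches

print(check("10.1.2.3", 443))    # allowed: matches the first rule
print(check("203.0.113.9", 22))  # denied: no visa for this visitor
```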
In IT systems, the same process happens using one or more of these factors. It will ask you for something you know, like a password. It will ask you for something that you have, like a security token. Or it will ask you for something that you are, like a fingerprint. An identity does not refer to just a person. It's any actor, human or not, that interacts with your systems. Users are straightforward; think employees logging into a dashboard. But services and machines are equally important. A backend API may need to read data from a database, or a virtual machine may need to download updates. Treating these non-human identities with the same rigor as human ones helps prevent unauthorized access and improves visibility and security. Only after confirming your identity can the system move on to deciding what you're allowed to access. That's where authorization comes in. Once authentication confirms who you are, authorization determines what you are allowed to do. Sticking with the driver's license analogy, you've shown your license and proven your identity, but that doesn't mean that you can drive anything anywhere. Your license class might let you drive a car, but not a motorcycle or a truck. It might be valid in your country, but not in others. Similarly, in IT systems, authorization defines what actions you can take and on which resources. This is usually controlled by policies and roles assigned to your identity. It ensures that users or services only get access to the things they are explicitly allowed to interact with. 12:34 Nikita: How can organizations ensure secure access across their systems, especially when managing multiple users and resources? Orlando: Identity and Access Management governs who can do what in our systems. Individually, authentication verifies identity and authorization grants access. However, managing these processes at scale across countless users and resources becomes a complex challenge.
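Orlando's two questions, who are you and what are you allowed to do, translate into a strict two-step check: never consult permissions until identity is verified. A toy sketch follows; the users, secrets, and permission names are invented, and a real system would store hashed credentials, never plaintext.

```python
# Toy two-step access control: authentication first, then authorization.
# Names, secrets, and permissions are invented for illustration.
users = {
    "ana": {"password": "s3cret", "permissions": {"read:report"}},
    # Non-human identity: a backend service, treated with the same rigor.
    "api-backend": {"password": "token-123", "permissions": {"read:database"}},
}

def authenticate(name, secret):
    """Who are you? Verify the claimed identity (human or service)."""
    return name in users and users[name]["password"] == secret

def authorize(name, action):
    """What are you allowed to do? Check the identity's permissions."""
    return action in users[name]["permissions"]

def access(name, secret, action):
    if not authenticate(name, secret):
        return "authentication failed"   # identity never proven
    if not authorize(name, action):
        return "forbidden"               # valid identity, no permission
    return "ok"

print(access("ana", "s3cret", "read:report"))    # both checks pass
print(access("ana", "wrong", "read:report"))     # stopped at step one
print(access("ana", "s3cret", "read:database"))  # stopped at step two
```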
That's where Identity and Access Management, or IAM, comes in. IAM is an overarching framework that centralizes and orchestrates both authentication and authorization, along with other critical functions, to ensure secure and efficient access to resources. 13:23 Lois: And what are the key components and methods that make up a robust IAM system? Orlando: User management, a core component of IAM, provides a centralized Identity Management system for all user accounts and their attributes, ensuring consistency across applications. Key functions include user provisioning and deprovisioning, automating account creation for new users, and timely removal upon departure or role changes. It also covers the full user account lifecycle management, including password policies and account recovery. Lastly, user management often involves directory services integration to unify user information. Access management is about defining access permissions, specifically what actions users can perform and which resources they can access. A common approach is role-based access control, or RBAC, where permissions are assigned to roles and users inherit those permissions by being assigned to roles. For more granular control, policy-based access control allows for rules based on specific attributes. Crucially, access management enforces the principle of least privilege, granting only the minimum necessary access, and supports segregation of duties to prevent conflicts of interest. For authentication, IAM systems support various methods. Single-factor authentication, relying on just one piece of evidence like a password, offers basic security. However, multi-factor authentication significantly boosts security by requiring two or more distinct verification types, such as a password plus a one-time code. We also have biometric authentication, using unique physical traits, and token-based authentication, common for APIs and web services.
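The role-based access control Orlando describes, where permissions attach to roles and users inherit them by assignment, can be sketched in a few lines. Role and permission names here are invented:

```python
# Toy RBAC: permissions belong to roles, users inherit via role membership.
# Role and permission names are illustrative.
role_permissions = {
    "viewer": {"report:read"},
    "editor": {"report:read", "report:write"},
    "admin":  {"report:read", "report:write", "user:manage"},
}

user_roles = {"ana": {"viewer"}, "raj": {"editor"}}

def allowed(user, permission):
    """Least privilege: a user has only what their roles explicitly grant."""
    return any(permission in role_permissions[role]
               for role in user_roles.get(user, set()))

def grant_role(user, role):
    """Provisioning: changing access means changing role membership,
    not editing permissions user by user."""
    user_roles.setdefault(user, set()).add(role)

print(allowed("ana", "report:write"))  # False: viewers cannot write
grant_role("ana", "editor")
print(allowed("ana", "report:write"))  # True after the role change
```

Granting a role in one place instead of editing per-user permissions is what makes provisioning and deprovisioning manageable at scale.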
15:33 Lois: Orlando, when it comes to security, it's not just about who can access what, but also about keeping track of it all. How does auditing and reporting maintain compliance? Orlando: Auditing and reporting are essential for security and compliance. This involves tracking user activities, logging all access attempts and permission changes. It's vital for meeting compliance and regulatory requirements, allowing you to generate reports for audits. Auditing also aids in security incident detection by identifying unusual activities and providing data for forensic analysis after an incident. Lastly, it offers performance and usage analytics to help optimize your IAM system. 16:22 Nikita: That was an incredibly informative conversation. Thank you, Sergio and Orlando, for sharing your expertise with us. If you’d like to dive deeper into these concepts, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. Lois: I agree! This was such a great conversation! Don’t miss next week’s episode, where we’ll continue exploring key security concepts to help organizations operate in a scalable, secure, and auditable way. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 16:56 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Inside Cloud Networking
11/04/2025
In this episode, hosts Lois Houston and Nikita Abraham team up with Senior Principal OCI Instructor Sergio Castro to unpack the basics of cloud networking and the Domain Name System (DNS). You’ll learn how local and virtual networks connect devices, and how DNS seamlessly translates familiar names like oracle.com into addresses computers understand. Cloud Tech Jumpstart: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! For the last few weeks, we’ve been talking about different aspects of cloud data centers. Today, we’re focusing on something that’s absolutely key to how everything works in the cloud: networking and domain name systems. 00:52 Lois: And to guide us through it, we’ve got Sergio Castro, Senior Principal OCI Instructor at Oracle University. We’ll start by trying to understand why networking is so crucial and how it connects everything behind the scenes. Sergio, could you explain what networking means in simple terms, especially for folks new to cloud tech? Sergio: Networking is the backbone of cloud computing. It is a fundamental service because it provides the infrastructure for connecting users, applications, and resources within a cloud environment. It basically enables data transfers. It facilitates remote access. And ensures that cloud services are accessible to users. 
That is, provided that these users have the correct credentials. 01:38 Nikita: Ok, can you walk us through how a typical network operates? Sergio: Networking typically starts with the local area network. Basically, networking is a crucial component for any IT service because it's the foundation for the architecture framework of any of the services that we consume today. So, a network is two or more computers interconnected to each other. And it doesn't necessarily need to be a computer. It can be another device such as a printer or an IP TV or an IP phone or an IP camera. Many devices can be part of a local area network. And a local area network can be very small. Like I mentioned before, two or more computers, or it could grow into a very robust and complicated set of interconnected networks. And if that happens, then it can become very expensive as well. Cloud networking is the Achilles heel for many database administrators, programmers, and quality assurance engineers: any IT professional other than a network administrator. When the network starts to grow, managing access and permissions and implementing robust security measures, coupled with the critical importance of reliable and secure performance, can create significant hurdles. 03:09 Nikita: What are the different types of networks we have? Sergio: A local area network is basically in one building. It covers… it can be maybe two buildings that are in close proximity in a small campus, but typically it's very small by definition, and they're all interconnected to each other via one router, typically. A metropolitan area network is a typical network that spans into a city or a metro area, hence the name metropolitan area network. So, one building can be on one edge of the city and the other building can be at the other edge of the city, and they are interconnected by a digital circuit typically. It's more than one building, and the separation of those buildings is considerable.
It can span several miles. And a wide area network is a network that spans multiple cities, states, and countries; it can even be international. 04:10 Lois: I think we’ll focus on the local area network for today’s conversation. Could you give us a real-world example, maybe what a home office network setup looks like? Sergio: You might be accessing this session from your home office, or even from your corporate office. In a home office or a home network, typically, you have a router that is provided to you by the internet vendor—the internet service provider. And then you have your laptop or your computer, your PC connected to that router. And then you might have other devices either connected via cable—ethernet cable—or Wi-Fi. And the interconnectivity within that small building is what makes a local area network. And it looks very similar once you move on into a corporate office. Again, it's two or more computers interconnected. That's what makes a local area network. In a corporate office, the difference with a home office or your home is that you have many more computers. And because you have many more computers, that local area network might be divided into subnets. And for that, you need a switch. So, you have additional devices like a switch and a firewall and the router. And then you might have a server as well. So that's the local area network. Two or more computers. And local area networks are capable of high speeds because they are in close proximity to each other. 05:47 Nikita: Ok… so obviously a local area network has several different components. Let’s break them down. What’s a client, what’s a server, and how do they interact? Sergio: A client basically is a requester of a service. Like when you hop into your browser and then you want to go to a website, for example, oracle.com, you type www.oracle.com, you are requesting a service from a server.
And that server typically resides in a data center. Oracle.com, under the Oracle domain, is a big data center with many interconnected servers, interconnected so they can concurrently serve multiple millions of requests coming into www.oracle.com at the same time. So, servers provide services to client computers. So basically, that's the relation. A client requests a service and the server provides that service. 06:50 Lois: And what does that client-server setup actually look like? Sergio: So, let's continue with our example of a web browser requesting a service from a web server. So, in this case, the physical computer is the server. And then it has software running on it. And that makes it a web server. So, once you type www.oracle.com, it sends the request and the request is received. And provided that everything's configured correctly and that there are no typos, then it will provide a response and basically give the view of the website. And that's obviously in the local area network, maybe when quality assurance was testing this before going live. But when it goes live, then you have the internet in the middle. And the internet in the middle then has many routers, hubs, and switches. 07:51 Transform the way you work with Oracle Database 23ai! This cutting-edge technology brings the power of AI directly to your data, making it easier to build powerful applications and manage critical workloads. Want to learn more about Database 23ai? Visit mylearn.oracle.com to pick from our range of courses and enroll today! 08:16 Nikita: Welcome back! Sergio, would this client-server model also apply to my devices at home? Sergio: In your own local area network, you have client-server even without noticing. For example, let's go back to our home office example. What happens if we add another laptop into the scenario? Then all of these devices need a way to communicate. And for that, they have an IP address. And who provides that IP address?
The minute that you add the other device, it is going to send a request to the router. The router, we call it a router, but just like the handheld device we call a smartphone has many functions, such as camera, calendar, and many other functionalities, the router has multiple functions too. One additional functionality is called the Dynamic Host Configuration Protocol, or DHCP, server. So basically, the laptop requests, hey, give me an IP address, and then the router, or the DHCP server, replies, here's your IP address. And it's going to be a different one. So, they don't overlap. So that's an example of client server. 09:32 Lois: And where do virtual networks fit into all this? Sergio: A virtual network is basically a software version of the physical network. It looks and feels exactly as a physical network does. In the physical network, you have a communication path: either Wi-Fi or an ethernet cable. And then you add your workstations or devices on top of that. And then you might create subnets. So, in a software-defined network, or a virtual network, you have software-defined connectivity: the physical cable and all of that, everything is software-defined. And it looks exactly the same, except that everything is software. In a software-defined, or virtual, network, you can communicate with a physical network as if that virtual network was another physical network. Again, this is a software network, a software-defined network, a virtual network, no longer a physical network. 10:42 Lois: Let’s switch gears a little and talk about Domain Name Systems. Sergio, can you explain what DNS is, and why it’s important when we browse the web? Sergio: DNS is the global database for internet addressing. The DNS plays a very important role on the internet. And many internet services are closely related to DNS. The main functionality of DNS is to translate easy-to-remember names into IP addresses.
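The DHCP exchange Sergio walks through, a new device asking "give me an IP address" and the router replying with one that doesn't overlap, can be modeled as a small lease table. The pool range and MAC addresses are illustrative:

```python
# Toy DHCP server: a device asks for an address and the router hands out
# the next free one from its pool, never reusing a leased address.
# The pool range and MAC addresses are made up for illustration.

class ToyDhcpServer:
    def __init__(self):
        # A pool like a home router's: 192.168.1.100 .. 192.168.1.199
        self.pool = [f"192.168.1.{n}" for n in range(100, 200)]
        self.leases = {}  # MAC address -> leased IP

    def request(self, mac):
        """'Hey, give me an IP address' -> 'Here's your IP address.'"""
        if mac in self.leases:      # same device asking again: same lease
            return self.leases[mac]
        ip = self.pool.pop(0)       # next free address, so no overlaps
        self.leases[mac] = ip
        return ip

router = ToyDhcpServer()
laptop = router.request("aa:bb:cc:00:00:01")
phone = router.request("aa:bb:cc:00:00:02")
print(laptop, phone)        # two different addresses from the pool
print(laptop != phone)      # leases never overlap
```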
Some IP addresses might be very easy to remember. However, if you have many of them, then it's easier to remember oracle.com or ucla.edu or navy.mil for the military or eds.org for an organization or gobierno.mx for Mexico. So that's the main feature of the DNS. It's very similar to the contacts application in your mobile phone, because the contacts application maps names to phone numbers. It's easier to remember Bob's phone than 555-123-4567. So, it's easier to remember the names of the people in your contacts list, like it is easier to remember, as previously mentioned, oracle.com than 138.1.33.162. Again, 138.1.33.162 might be easy for you to remember if that's the only one that you need to remember. But if you have 20, 40, 50, like we do with phone numbers, it's easier to remember oracle.com or ucla.edu. And this mapping is essential, again, because we work with names, which are easier for us to remember. However, the fact is that computers still need to use IP addresses. And remember that this is the decimal representation of the binary number. It's a lot harder for us to remember the 32 bits or each one of the octets in binary. So that's the main purpose of DNS. Now the big difference is that the contact list in a cell phone is unique to that individual phone. However, DNS is global. It applies to everybody in the world. Anybody typing oracle.com will translate that into 138.1.33.162. Now this is an actual IP address of oracle.com. Oracle.com has many IP addresses. If you ping oracle.com, chances are that this is one of the many addresses that maps to oracle.com. 13:35 Nikita: You mentioned that a domain name like oracle.com can have many IP addresses. So how does DNS help my computer find the right one? Sergio: So, let's say that you want to look for www.example.com, how do you do that? So, you type in your computer instance or in your terminal, in your laptop, in your computer, you type in your browser "www.example.com."
If the browser doesn't have that information in cache, then it's going to first ask your DNS server, the one that you have assigned and indicated in your browser's configuration. And if the DNS server knows it, it will reply that the address is 96.7.128.198. This address is real, and your browser will go to this address once you type www.example.com. 14:34 Nikita: But what happens if the browser doesn’t know the address? Sergio: This is where it gets interesting. Your browser wants to go to www.example.com. And it's going to go and look within its cache. If it doesn't have it, then the first step is to go ahead to your DNS server and ask them, hey, if you don't know this address, go ahead and find out. So, it goes to the root server. All the root servers are administered by IANA. And it's going to ask, hey, what's the IP address for www.example.com? And if the root server doesn't know it, it's going to let you know, hey, ask the top-level domain name server, in this case, the .com. It's a top-level domain name server. So, you go ahead and ask this top-level domain name server to do that for you. In this case, again, the .com, and you ask, hey, what's the IP address for example.com? And if the top-level domain name server doesn't know, it's going to tell you, hey, ask example.com. And example.com is actually within the customer's domain. And then based on these instructions you ask, what is the IP address for www.example.com? So, it will provide you with the IP address. And once your DNS server has the IP address, then it's going to relay it to your web browser. And this is where your web browser actually reaches 96.7.128.198. Very interesting, isn't it?
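The delegation chain Sergio describes, from root server to the .com top-level server to example.com's own name server, can be simulated with three small lookup tables standing in for the real distributed database. The server names are invented; 96.7.128.198 is the address quoted in the episode:

```python
# Simulated DNS delegation: root -> TLD -> authoritative server.
# The zone data and server names are made-up stand-ins for the real
# distributed database; 96.7.128.198 is the address from the episode.

root = {"com": "tld-com"}  # root: "ask the .com server"
tlds = {"tld-com": {"example.com": "ns.example.com"}}  # .com: "ask example.com's server"
authoritative = {"ns.example.com": {"www.example.com": "96.7.128.198"}}

def resolve(name):
    """Walk the hierarchy the way a resolver does when its cache is empty."""
    tld_label = name.rsplit(".", 1)[-1]      # "www.example.com" -> "com"
    domain = ".".join(name.split(".")[-2:])  # -> "example.com"
    tld_server = root[tld_label]             # step 1: ask the root
    ns = tlds[tld_server][domain]            # step 2: ask the TLD server
    return authoritative[ns][name]           # step 3: authoritative answer

print(resolve("www.example.com"))  # 96.7.128.198
```

A real resolver would also cache each answer, which is why the full chain is only walked when, as Sergio says, the information isn't already in cache.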
So, .com is a top-level domain. And the purpose of the top-level domain is to recognize certain elements of a website. This top-level domain indicates that this is a commercial site. Now, .edu, for example, is a top-level domain name for higher education. We also have .org for nonprofit organizations, .net for network service providers. And we also have country-specific ones: .ca for Canadian websites, .it for Italian websites. Now .it, a lot of companies that are in the information technology business utilize it to indicate that they're in information technology. There's also .us for US companies; most of the time this is optional. With .com, .org, and .net, it's understood that they are from the US. Now if .com is the top-level domain name, what is that oracle in cloud.oracle.com? So, oracle is the second-level domain name. And in this case, cloud is the third-level domain name. And lately you've been seeing a lot more top-level domain names. These are the classic ones. But now you get .AI, .media, .comedy, .people, and so on and so forth. There are many, many; even companies now have the option of registering their company name as a top-level domain name. 18:24 Nikita: Thank you, Sergio, for this deep dive into local area networks and domain name systems. If you want to learn about the topics we covered today, go to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. Lois: And don’t forget to join us next week for another episode on networking essentials. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 18:46 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Cloud Data Centers: Core Concepts - Part 4
10/28/2025
In this episode, hosts Lois Houston and Nikita Abraham, along with Principal OCI Instructor Orlando Gentil, break down the differences between Infrastructure-as-a-Service, Platform-as-a-Service, and Software-as-a-Service. The conversation explores how each framework influences control, cost efficiency, expansion, reliability, and contingency planning. Cloud Tech Jumpstart: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ----------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hey there! Last week, we spoke about how hypervisors, virtual machines, and containers have transformed data centers. Today, we’re moving on to something just as important—the main cloud models that drive modern cloud computing. Nikita: Orlando Gentil, Principal OCI Instructor at Oracle University, joins us once again for part four of our discussion on cloud data centers. 01:01 Lois: Hi Orlando! Glad to have you with us today. Can you walk us through the different types of cloud models? Orlando: These are commonly categorized into three main service models: Infrastructure-as-a-Service, Platform-as-a-Service, and Software-as-a-Service. Let's use the idea of getting around town to understand cloud service models. IaaS is like renting a car. You don't own the car, but you control where it goes, how fast, and when to stop. 
In cloud terms, the provider gives you the infrastructure—virtual machines, storage, and networking—but you manage everything on top—the OS, middleware, runtime, and application. PaaS is like using a shuttle service. You bring your bags (your code) and pick your destination (your app requirements), but someone else drives and maintains the vehicle. You don't worry about the engine, fuel, or route planning. That's the platform's job. Your focus stays on development and deployment, not on servers or patching. SaaS is like ordering a taxi. You say where you want to go and everything else is handled for you. It's the full-service experience. In the cloud, SaaS is software accessed over the web—email, CRM, project management. No infrastructure, no updates, just productivity. 02:32 Nikita: Ok. How do the trade-offs between control and convenience differ across SaaS, PaaS, and IaaS? Orlando: With IaaS, much like renting a car, you gain high control. You are managing components like the operating system, runtime, your applications, and your data. In return, the provider expertly handles the underlying virtual machines, storage, and networking. This model gives you immense flexibility. Moving to PaaS, our shuttle service, you shift to a medium level of control but gain significantly higher convenience. Your primary focus remains on your application code and data. The provider now takes on the heavy lifting of managing the runtime environment, the operating system, the servers themselves, and even the scaling. Finally, SaaS, our taxi service, offers the highest convenience with the lowest control level. Here, your responsibility is essentially just using the application and managing your specific configurations or data within it. The cloud provider manages absolutely everything else—the entire infrastructure, the platform, and the application itself. 03:52 Nikita: One of the top concerns for cloud users is cost optimization. How can we manage this?
Orlando: Each cloud service model offers distinct strategies to help you manage and reduce your spending effectively, as well as different factors that drive those costs. For Infrastructure-as-a-Service, where you have more control, optimization largely revolves around smart resource management. This means rightsizing your VMs, ensuring they are not overprovisioned, and actively turning off idle resources when not in use. Leveraging preemptible or spot instances for flexible workloads can also significantly cut costs. Your charges here are directly tied to your compute, storage, and network usage, so efficiency is key. Moving to Platform-as-a-Service, where the platform is managed for you, optimization shifts slightly. Strategies include choosing scalable platforms that can efficiently handle fluctuating demand, opting for consumption-based pricing where available, and diligently optimizing your runtime usage to minimize processing time. Costs in PaaS are typically based on your application usage, runtime hours, and storage consumed. Finally, for Software-as-a-Service, where you consume a ready-to-use application, cost optimization centers on licensing and usage. This involves consolidating tools to avoid redundant subscriptions, selecting usage-based plans if they align better with your needs, and crucially, eliminating any unused licenses. SaaS costs are generally based on subscription or per-user fees. Understanding these nuances is essential for effective cloud financial management. 05:52 Lois: Ok. And what about scalability? How does each model handle the ability to grow and shrink with demand, without needing manual hardware changes? Orlando: How you achieve and manage that scalability varies significantly across our three service models. For Infrastructure-as-a-Service, you have the most direct control over scaling.
You can implement manual or auto scaling by adding or removing virtual machines as needed, often leveraging load balancers to distribute traffic. In this model, you configure the scaling policies and parameters based on your specific workload. Moving to Platform-as-a-Service, the scaling becomes more automated and elastic. The platform automatically adjusts resources based on your application's demand, allowing it to seamlessly handle traffic spikes or dips. Here, the provider manages the underlying scaling behavior, freeing you from that operational burden. Finally, with Software-as-a-Service, scalability is largely abstracted and invisible to the user. The application scales automatically in the background, with the entire process fully managed by the provider. As a user, you simply benefit from the application's ability to handle millions of users without ever needing to worry about the infrastructure. Understanding these scaling differences is crucial for selecting the right model for your application's needs. 07:34 Join the Oracle University Learning Community and tap into a vibrant network of over 1 million members, including Oracle experts and fellow learners. This dynamic community is the perfect place to grow your skills, connect with likeminded learners, and celebrate your successes. As a MyLearn subscriber, you have access to engage with your fellow learners and participate in activities in the community. Visit community.oracle.com/ou to check things out today! 08:05 Nikita: Welcome back! We’ve talked about cost optimization and scalability in cloud environments. But what about ensuring availability? How does that work? Orlando: Availability refers to the ability of a system or service to remain accessible and operational, even in the face of failures or extremely high demand. The approach to achieving and managing availability, and crucially, your role versus the provider's, differs greatly across each model. 
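The IaaS-style scaling policies described above are typically simple threshold rules evaluated periodically. A sketch of one evaluation cycle (the thresholds and instance limits are illustrative, not a real provider's defaults):

```python
# Threshold-based auto scaling sketch, the kind of policy you configure
# yourself under IaaS. Thresholds and instance limits are illustrative.
def scale_decision(instances, avg_cpu, scale_out_at=0.75, scale_in_at=0.25,
                   minimum=2, maximum=10):
    """Return the new instance count for one evaluation period."""
    if avg_cpu > scale_out_at and instances < maximum:
        return instances + 1      # add a VM behind the load balancer
    if avg_cpu < scale_in_at and instances > minimum:
        return instances - 1      # remove an idle VM
    return instances              # steady state

print(scale_decision(3, 0.90))  # traffic spike: 3 -> 4 instances
```

Under PaaS and SaaS this same loop still runs, but on the provider's side; the model difference is who writes and operates the policy, not whether one exists.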
With Infrastructure-as-a-Service, you have the most direct control over your availability strategy. You will be responsible for designing an architecture that includes redundant VMs, deploying load balancers, and potentially even multi-region setups for disaster recovery. Your specific role involves designing this architecture and managing your failover process and data backups. The provider’s role, in turn, is to deliver the underlying infrastructure with defined service level agreements, SLAs, and health monitoring. For Platform-as-a-Service, the platform itself offers a higher degree of built-in high availability and automated failover. While the provider maintains the runtime platform’s availability, your role shifts. You need to ensure your application's logic is designed to gracefully handle retries and potential transient failures that might occur. Finally, with Software-as-a-Service, availability is almost entirely handled for you. The provider ensures fully abstracted redundancy and failover behind the scenes. Your role becomes largely minimal, often just involving a specific application’s configurations. The provider is entirely responsible for the full application uptime and the underlying high availability infrastructure. Understanding these distinct roles in ensuring availability is essential for setting expectations and designing your cloud strategy efficiently. 10:19 Lois: Building on availability, let’s talk Disaster Recovery. Orlando: DR is about ensuring your systems and data can be recovered and brought back online in the event of a significant failure, whether it's a hardware crash, a natural disaster, or even human error. Just like the other aspects, the strategy and responsibilities for DR vary significantly across the cloud service models. For Infrastructure-as-a-Service, you have the most direct involvement in your DR strategy. You need to design and execute custom DR plans. 
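The value of the redundant-VM architectures described for IaaS availability comes down to basic probability: with n independent instances, each available with probability a, at least one is up with probability 1 - (1 - a)**n. A quick check:

```python
# Availability sketch: the redundancy math behind architectures you design
# under IaaS. Assumes instance failures are independent, which is the
# idealized case; correlated failures reduce the benefit.
def combined_availability(per_instance, redundant_copies):
    """Probability that at least one redundant instance is up."""
    return 1 - (1 - per_instance) ** redundant_copies

# Two independent instances at 99% each give roughly 99.99% combined.
print(round(combined_availability(0.99, 2), 4))
```

This is why a load balancer in front of two modest VMs can beat one very reliable VM, and why multi-region setups add another independent layer on top.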
This involves leveraging capabilities like multi-region backups, taking VM snapshots, and setting up failover clusters. A real-world example might be using Oracle Cloud compute to replicate your VMs to a secondary region with block volume backups to ensure business continuity. Essentially, you manage your entire DR process here. Moving to Platform-as-a-Service, disaster recovery becomes a shared responsibility. The platform itself offers built-in redundancy and provides APIs for backup and restore. Your role will be to configure the application-level recovery and ensure your data is backed up appropriately, while the provider handles the underlying infrastructure's DR capability. An example could be Azure App Service or Oracle APEX applications, where your apps are redeployed from source control like Git after an incident. Finally, with Software-as-a-Service, disaster recovery is almost entirely vendor managed. The provider takes full responsibility, offering features like auto replication and continuous backup, often backed by specific Recovery Point Objective (RPO) and Recovery Time Objective (RTO) SLAs. A common example is how Microsoft 365 or Salesforce manage user data backups and restoration. It's all handled seamlessly by the provider without your direct intervention. Understanding these different approaches to DR is crucial for defining your own business continuity plans in the cloud. 12:46 Lois: Thank you, Orlando, for this insightful discussion. To recap, we spoke about the three main cloud models: IaaS, PaaS, and SaaS, and how each one offers a different mix of control and convenience, impacting cost, scalability, availability, and recovery. Nikita: Yeah, hopefully this helps you pick the right cloud solution for your needs. If you want to learn more about the topics we discussed today, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. In our next episode, we’ll take a close look at the essentials of networking. 
Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 13:26 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Cloud Data Centers: Core Concepts - Part 3
10/21/2025
Have you ever considered how a single server can support countless applications and workloads at once? In this episode, hosts Lois Houston and Nikita Abraham, together with Principal OCI Instructor Orlando Gentil, explore the sophisticated technologies that make this possible in modern cloud data centers. They discuss the roles of hypervisors, virtual machines, and containers, explaining how these innovations enable efficient resource sharing, robust security, and greater flexibility for organizations. Cloud Tech Jumpstart: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! For the last two weeks, we’ve been talking about different aspects of cloud data centers. In this episode, Orlando Gentil, Principal OCI Instructor at Oracle University, joins us once again to discuss how virtualization, through hypervisors, virtual machines, and containers, has transformed data centers. 00:58 Lois: That’s right, Niki. We’ll begin with a quick look at the history of virtualization and why it became so widely adopted. Orlando, what can you tell us about that? Orlando: To truly grasp the power of virtualization, it's helpful to understand its journey from its humble beginnings with mainframes to its pivotal role in today's cloud computing landscape. It might surprise you, but virtualization isn't a new concept. 
Its roots go back to the 1960s with mainframes. In those early days, the primary goal was to isolate workloads on a single powerful mainframe, allowing different applications to run without interfering with each other. As we moved into the 1990s, the challenge shifted to underutilized physical servers. Organizations often had numerous dedicated servers, each running a single application, leading to significant waste of computing resources. This led to the emergence of virtualization as we know it today, primarily from the 1990s to the 2000s. The core idea here was to run multiple isolated operating systems on a single physical server. This innovation dramatically improved the resource utilization and laid the technical foundation for cloud computing, enabling the scalable and flexible environments we rely on today. 02:26 Nikita: Interesting. So, from an economic standpoint, what pushed traditional data centers to change and opened the door to virtualization? Orlando: In the past, running applications often meant running them on dedicated physical servers. This led to a few significant challenges. First, more hardware purchases. Every new application, every new project often required its own dedicated server. This meant constantly buying new physical hardware, which quickly escalated capital expenditure. Secondly, and hand-in-hand with more servers came higher power and cooling costs. Each physical server consumed power and generated heat, necessitating significant investment in electricity and cooling infrastructure. The more servers, the higher these operational expenses became. And finally, a major problem was unused capacity. Despite investing heavily in these physical servers, it was common for them to run well below their full capacity. Applications typically didn't need 100% of server's resources all the time. 
This meant we were wasting valuable compute power, memory, and storage, effectively wasting resources and diminishing the return on investment from those expensive hardware purchases. These economic pressures became a powerful incentive to find more efficient ways to utilize data center resources, setting the stage for technologies like virtualization. 04:05 Lois: I guess we can assume virtualization emerged as a financial game-changer. So, what kind of economic efficiencies did virtualization bring to the table? Orlando: From a CapEx or capital expenditure perspective, companies spent less on servers and data center expansion. From an OpEx or operational expenditure perspective, fewer machines meant lower electricity, cooling, and maintenance costs. It also sped up provisioning. Spinning up a new VM took minutes, not days or weeks. That improved agility and reduced the operational workload on IT teams. It also created a more scalable, cost-efficient foundation, which made virtualization not just a technical improvement, but a financial turning point for data centers. This economic efficiency is exactly what cloud providers like Oracle Cloud Infrastructure are built on, using virtualization to deliver scalable, pay-as-you-go infrastructure. 05:09 Nikita: Ok, Orlando. Let’s get into the core components of virtualization. To start, what exactly is a hypervisor? Orlando: A hypervisor is a piece of software, firmware, or hardware that creates and runs virtual machines, also known as VMs. Its core function is to allow multiple virtual machines to run concurrently on a single physical host server. It acts as a virtualization layer, abstracting the physical hardware resources like CPU, memory, and storage, and allocating them to each virtual machine as needed, ensuring they can operate independently and securely. 05:49 Lois: And are there types of hypervisors? Orlando: There are two primary types of hypervisors. 
Type 1 hypervisors, often called bare-metal hypervisors, run directly on the host server's hardware. This means they interact directly with the physical resources, offering high performance and security. Examples include VMware ESXi, Oracle VM Server, and KVM on Linux. They are commonly used in enterprise data centers and cloud environments. In contrast, type 2 hypervisors, also known as hosted hypervisors, run on top of an existing operating system like Windows or macOS. They act as an application within that operating system. Popular examples include VirtualBox, VMware Workstation, and Parallels. These are typically used for personal computing or development purposes, where you might run multiple operating systems on your laptop or desktop. 06:55 Nikita: We’ve spoken about the foundation provided by hypervisors. So, can we now talk about the virtual entities they manage: virtual machines? What exactly is a virtual machine and what are its fundamental characteristics? Orlando: A virtual machine is essentially a software-based virtual computer system that runs on a physical host computer. The magic happens with the hypervisor. The hypervisor's job is to create and manage these virtual environments, abstracting the physical hardware so that multiple VMs can share the same underlying resources without interfering with each other. Each VM operates like a completely independent computer with its own operating system and applications. 07:40 Lois: What are the benefits of this? Orlando: Each VM is isolated from the others. If one VM crashes or encounters an issue, it doesn't affect the other VMs running on the same physical host. This greatly enhances stability and security. A powerful feature is the ability to run different operating systems side-by-side on the very same physical host. You could have a Windows VM, a Linux VM, and even other specialized OSes, all operating simultaneously. Consolidating workloads directly addresses the unused capacity problem. 
Instead of one application per physical server, you can now run multiple workloads, each in its own VM on a single powerful physical server. This dramatically improves hardware utilization, reducing the need for constant new hardware purchases and lowering power and cooling costs. And by consolidating workloads, virtualization makes it possible for cloud providers to dynamically create and manage vast pools of computing resources. This allows users to quickly provision and scale virtual servers on demand, tapping into these shared pools of CPU, memory, and storage as needed, rather than being tied to a single physical machine. 09:10 Oracle University’s Race to Certification 2025 is your ticket to free training and certification in today’s hottest technology. Whether you’re starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That’s education.oracle.com/race-to-certification-2025. 09:54 Nikita: Welcome back! Orlando, let’s move on to containers. Many see them as a lighter, more agile way to build and run applications. What’s your take? Orlando: A container packages an application and all its dependencies, like libraries and other binaries, into a single, lightweight executable unit. Unlike a VM, a container shares the host operating system's kernel, running on top of the container runtime process. This architectural difference provides several key advantages. Containers are incredibly portable. They can be taken virtually anywhere, from a developer's laptop to a cloud environment, and run consistently, eliminating "it works on my machine" issues. Because containers share the host OS kernel, they don't need to bundle a full operating system themselves. This results in significantly smaller footprints and less administration overhead compared to VMs. 
They are faster to start. Without the need to boot a full operating system, containers can start up in seconds, or even milliseconds, providing rapid deployment and scaling capabilities. 11:12 Nikita: Ok. Throughout our conversation, you’ve spoken about the various advantages of virtualization but let’s consolidate them now. Orlando: From a security standpoint, virtualization offers several crucial benefits. Each VM operates in its own isolated sandbox. This means if one VM experiences a security breach, the impact is generally contained to that single virtual machine, significantly limiting the spread of potential threats across your infrastructure. Containers also provide some isolation. Virtualization allows for rapid recovery. This is invaluable for disaster recovery or undoing changes after a security incident. You can implement separate firewalls, access rules, and network configuration for each VM. This granular control reduces the overall exposure and attack surface across your virtualized environments, making it harder for malicious actors to move laterally. Beyond security, virtualization also brings significant advantages in terms of operational and agility benefits for IT management. Virtualization dramatically improves operational efficiency and agility. Things are faster. With virtualization, you can provision new servers or containers in minutes rather than days or weeks. This speed allows for quicker deployment of applications and services. It becomes much simpler to deploy consistent environment using templates and preconfigured VM images or containers. This reduces errors and ensures uniformity across your infrastructure. It's more scalable. Virtualization makes your infrastructure far more scalable. You can reshape VMs and containers to meet changing demands, ensuring your resources align precisely with your needs. 
These operational benefits directly contribute to the power of cloud computing, especially when we consider virtualization's role in enabling cloud scalability. Virtualization is the very backbone of modern cloud computing, fundamentally enabling its scalability. It allows multiple virtual machines to run on a single physical server, maximizing hardware utilization, which is essential for cloud providers. This capability is at the core of Infrastructure-as-a-Service offerings, where users can provision virtualized compute resources on demand. Virtualization makes services globally scalable. Resources can be easily deployed and managed across different geographic regions to meet worldwide demand. Finally, it provides elasticity, meaning resources can be automatically scaled up or down in response to fluctuating workloads, ensuring optimal performance and cost efficiency. 14:21 Lois: That’s amazing. Thank you, Orlando, for joining us once again. Nikita: Yeah, and remember, if you want to learn more about the topics we covered today, go to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. Lois: Well, that’s all we have for today. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 14:40 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Cloud Data Centers: Core Concepts - Part 2
10/14/2025
Have you ever wondered where all your digital memories, work projects, or favorite photos actually live in the cloud? In this episode, Lois Houston and Nikita Abraham are joined by Principal OCI Instructor Orlando Gentil to discuss cloud storage. They explore how data is carefully organized, the different ways it can be stored, and what keeps it safe and easy to find. Cloud Tech Jumpstart: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hey there! Last week, we spoke about the differences between traditional and cloud data centers, and covered components like CPU, RAM, and operating systems. If you haven’t listened to the episode yet, I’d suggest going back and listening to it before you dive into this one. Nikita: Joining us again is Orlando Gentil, Principal OCI Instructor at Oracle University, and we’re going to ask him about another fundamental concept: storage. 01:04 Lois: That’s right, Niki. Hi Orlando! Thanks for being with us again today. You introduced cloud data centers last week, but tell us, how is data stored and accessed in these centers? Orlando: At a fundamental level, storage is where your data resides persistently. Data stored on a storage device is accessed by the CPU and, for specialized tasks, the GPU. 
The RAM acts as a high-speed intermediary, temporarily holding data that the CPU and the GPU are actively working on. This cyclical flow ensures that applications can effectively retrieve, process, and store information, forming the backbone for our computing operations in the data center. 01:52 Nikita: But how is data organized and controlled on disks? Orlando: To effectively store and manage data on physical disks, a structured approach is required, which is defined by file systems and permissions. The process begins with disks. These are the raw physical storage devices. Before data can be written to them, disks are typically divided into partitions. A partition is a logical division of a physical disk that acts as if it were a separate physical disk. This allows you to organize your storage space and even install multiple operating systems on a single drive. Once partitions are created, they are formatted with a file system. 02:40 Nikita: Ok, sorry but I have to stop you there. Can you explain what a file system is? And how is data organized using a file system? Orlando: The file system is the method and the data structure that an operating system uses to organize and manage files on storage devices. It dictates how data is named, stored, retrieved, and managed on the disk, essentially providing the roadmap for data. Common file systems include NTFS for Windows and ext4 or XFS for Linux. Within this file system, data is organized hierarchically into directories, also known as folders. These containers help to logically group related files, which are the individual units of data, whether they are documents, images, videos, or applications. Finally, overseeing this entire organization are permissions. 03:42 Lois: And what are permissions? Orlando: Permissions define who can access specific files and directories and what actions they are allowed to perform, for example, read, write, or execute. 
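The read, write, and execute permissions Orlando describes are tracked separately for user, group, and other, and are commonly written either symbolically, like rwxr-x---, or numerically, like 750, the form used with chmod. A small converter between the two:

```python
# Convert a symbolic permission string (user/group/other triads of r, w, x)
# into the octal form used by chmod: r=4, w=2, x=1 within each triad.
def to_octal(symbolic):
    """'rwxr-x---' -> '750'."""
    assert len(symbolic) == 9, "expects three rwx triads"
    digits = []
    for i in range(0, 9, 3):
        triad = symbolic[i:i + 3]
        value = (4 if triad[0] == "r" else 0) \
              + (2 if triad[1] == "w" else 0) \
              + (1 if triad[2] == "x" else 0)
        digits.append(str(value))
    return "".join(digits)

print(to_octal("rwxr-x---"))  # 750: owner full access, group read/execute, others none
```

Reading a mode like 644 then becomes mechanical: owner read and write, group and others read-only.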
This access control, often managed by user, group, and other permissions, is fundamental for security, data integrity, and multi-user environments within a data center. 04:09 Lois: Ok, now that we have a good understanding of how data is organized logically, can we talk about how data is stored locally within a server? Orlando: Local storage refers to storage devices directly attached to a server or computer. The three common types are hard disk drives, solid state drives, and NVMe drives. Hard disk drives are traditional storage devices using spinning platters to store data. They offer large capacity at a lower cost per gigabyte, making them suitable for bulk data storage when high performance isn't the top priority. Unlike hard disks, solid state drives use flash memory to store data, similar to USB drives but on a larger scale. They provide significantly faster read and write speeds, better durability, and lower power consumption than hard disks, making them ideal for operating systems, applications, and frequently accessed data. Non-Volatile Memory Express is a communication interface specifically designed for solid state drives that connects directly to the PCI Express bus. NVMe offers even faster performance than traditional SATA-based solid state drives by reducing latency and increasing bandwidth, making it the top choice for demanding workloads that require extreme speed, such as high-performance databases and AI applications. Each type serves different performance and cost requirements within a data center. While local storage is essential for immediate access, data centers also rely heavily on storage that isn't directly attached to a single server. 05:59 Lois: I’m guessing you’re hinting at remote storage. Can you tell us more about that, Orlando? Orlando: Remote storage refers to data storage solutions that are not physically connected to the server or client accessing them. Instead, they are accessed over the network. 
This setup allows multiple clients or servers to share access to the same storage resources, centralizing data management and improving data availability. This architecture is fundamental to cloud computing, enabling vast pools of shared storage that can be dynamically provisioned to various users and applications. 06:35 Lois: Let’s talk about the common forms of remote storage. Can you run us through them? Orlando: One of the most common and accessible forms of remote storage is Network Attached Storage or NAS. NAS is a dedicated file storage device connected to a network that allows multiple users and client devices to retrieve data from a centralized disk capacity. It's essentially a server dedicated to serving files. A client connects to the NAS over the network, and the NAS then provides access to files and folders. NAS devices are ideal for scenarios requiring shared file access, such as document collaboration, centralized backups, or serving media files, making them very popular in both home and enterprise environments. While NAS provides file-level access over a network, some applications, especially those requiring high performance and direct block-level access to storage, need a different approach. 07:38 Nikita: And what might this approach be? Orlando: Internet Small Computer System Interface, which provides block-level storage over an IP network. iSCSI or Internet Small Computer System Interface is a standard that allows the SCSI protocol, traditionally used for local storage, to be sent over IP networks. Essentially, it enables servers to access storage devices as if they were directly attached, even though they are located remotely on the network. This means it can leverage standard Ethernet infrastructure, making it a cost-effective solution for creating high performance, centralized storage accessible over an existing network. It's particularly useful for server virtualization and database environments where block-level access is preferred. 
While iSCSI provides block-level access over standard IP, for environments demanding even higher performance, lower latency, and greater dedicated throughput, a specialized network is often deployed. 08:47 Nikita: And what’s this specialized network called? Orlando: Storage Area Network or SAN. A Storage Area Network or SAN is a high-speed network specifically designed to provide block-level access to consolidated shared storage. Unlike NAS, which provides file-level access, a SAN presents storage volumes to servers as if they were local disks, allowing for very high performance for applications like databases and virtualized environments. While iSCSI SANs use Ethernet, many high-performance SANs utilize Fibre Channel for even faster and more reliable data transfer, making them a cornerstone of enterprise data centers where performance and availability are paramount. 09:42 Oracle University’s Race to Certification 2025 is your ticket to free training and certification in today’s hottest technology. Whether you’re starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That’s education.oracle.com/race-to-certification-2025. 10:26 Nikita: Welcome back! Orlando, are there any other popular storage paradigms we should know about? Orlando: Beyond file-level and block-level storage, cloud environments have popularized another flexible and highly scalable storage paradigm, object storage. Object storage is a modern approach to storing data, treating each piece of data as a distinct, self-contained unit called an object. Unlike file systems that organize data in a hierarchy or block storage that breaks data into fixed-size blocks, object storage manages data as flat, unstructured objects. 
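The flat, identifier-plus-metadata model of object storage can be sketched as a tiny in-memory store; the class and method names here are illustrative, not a real SDK:

```python
# Minimal in-memory sketch of the object storage model: a flat namespace of
# objects, each with a unique identifier, a payload, and rich metadata.
import uuid

class ObjectStore:
    def __init__(self):
        self._objects = {}  # flat namespace: no directory hierarchy

    def put(self, data, **metadata):
        """Store an object and return its unique identifier."""
        object_id = str(uuid.uuid4())
        self._objects[object_id] = {"data": data, "metadata": metadata}
        return object_id

    def get(self, object_id):
        return self._objects[object_id]["data"]

    def find(self, **criteria):
        """Look up object ids by metadata, e.g. tier='archive'."""
        return [oid for oid, obj in self._objects.items()
                if all(obj["metadata"].get(k) == v for k, v in criteria.items())]

store = ObjectStore()
oid = store.put(b"backup bytes", content_type="application/octet-stream", tier="archive")
print(store.get(oid) == b"backup bytes")  # True
```

Because lookups go by identifier and metadata rather than by path, the namespace can be spread across many machines, which is what gives real object stores their scalability and durability.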
Each object is stored with unique identifiers and rich metadata, making it highly scalable and flexible for massive amounts of data. This service handles the complexity of storage, providing access to vast repositories of data. Object storage is ideal for use cases like cloud-native applications, big data analytics, content distribution, and large-scale backups thanks to its immense scalability, durability, and cost effectiveness. While object storage is excellent for frequently accessed data in rapidly growing data sets, sometimes data needs to be retained for very long periods but is accessed infrequently. For these scenarios, a specialized low-cost storage tier, known as archive storage, comes into play. 12:02 Lois: And what’s that exactly? Orlando: Archive storage is specifically designed for long-term backup and retention of data that you rarely, if ever, access. This includes critical information, like old records, compliance data that needs to be kept for regulatory reasons, or disaster recovery backups. The key characteristic of archive storage is an extremely low cost per gigabyte, achieved by optimizing for infrequent access rather than speed. Historically, tape backup systems were the common solution for archiving, where data from a data center is moved to tape. In modern cloud environments, this has evolved into cloud backup solutions. Cloud-based archiving leverages highly cost-effective, durable cloud storage tiers that are purpose-built for long-term retention, providing a scalable and often more reliable alternative to physical tapes. 13:05 Lois: Thank you, Orlando, for taking the time to talk to us about the hardware and software layers of cloud data centers. This information will surely help our listeners to make informed decisions about cloud infrastructure to meet their workload needs in terms of performance, scalability, cost, and management. Nikita: That’s right, Lois. 
And if you want to learn more about what we discussed today, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. Lois: In our next episode, we’ll take a look at more of the fundamental concepts within modern cloud environments, such as Hypervisors, Virtualization, and more. I can’t wait to learn more about it. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 13:47 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
/episode/index/show/oracleuniversitypodcast/id/38575155
Cloud Data Centers: Core Concepts - Part 1
10/07/2025
Cloud Data Centers: Core Concepts - Part 1
Curious about what really goes on inside a cloud data center? In this episode, Lois Houston and Nikita Abraham chat with Principal OCI Instructor Orlando Gentil about how cloud data centers are transforming the way organizations manage technology. They explore the differences between traditional and cloud data centers, the roles of CPUs, GPUs, and RAM, and why operating systems and remote access matter more than ever. Cloud Tech Jumpstart: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Today, we’re covering the fundamentals you need to be successful in a cloud environment. If you’re new to cloud, coming from a SaaS environment, or planning to move from on-premises to the cloud, you won’t want to miss this. With us today is Orlando Gentil, Principal OCI Instructor at Oracle University. Hi Orlando! Thanks for joining us. 01:01 Lois: So Orlando, we know that Oracle has been a pioneer of cloud technologies and has been pivotal in shaping modern cloud data centers, which are different from traditional data centers. For our listeners who might be new to this, could you tell us what a traditional data center is? Orlando: A traditional data center is a physical facility that houses an organization's mission critical IT infrastructure, including servers, storage systems, and networking equipment, all managed on site. 
01:32 Nikita: So why would anyone want to use a cloud data center? Orlando: The traditional model requires significant upfront investment in physical hardware, which you are then responsible for maintaining along with the underlying infrastructure like physical security, HVAC, backup power, and communication links. In contrast, cloud data centers offer a more agile approach. You essentially rent the infrastructure you need, paying only for what you use. In the traditional data center, scaling resources up and down can be a slow and complex process. In cloud data centers, scaling is automated and elastic, allowing resources to adjust dynamically based on demand. This shift allows businesses to move their focus from the constant upkeep of infrastructure to innovation and growth. The move represents a shift from maintenance to momentum, enabling optimized costs and efficient scaling. This fundamental shift in how IT infrastructure is managed and consumed is precisely what we mean by moving to the cloud. 02:39 Lois: So, when we talk about moving to the cloud, what does it really mean for businesses today? Orlando: Moving to the cloud represents the strategic transition from managing your own on-premise hardware and software to leveraging internet-based computing services provided by a third party. This involves migrating your applications, data, and IT operations to a cloud environment. This transition typically aims to reduce operational overhead, increase flexibility, and enhance scalability, allowing organizations to focus more on their core business functions. 03:17 Nikita: Orlando, what’s the “brain” behind all this technology? Orlando: A CPU or Central Processing Unit is the primary component that performs most of the processing inside the computer or server. It performs calculations, handling the complex mathematics and logic that drive all applications and software. 
It processes instructions, running tasks and operations in the background that are essential for any application. A CPU is critical for performance, as it directly impacts the overall speed and efficiency of the data center. It also manages system activities, coordinating user input, various application tasks, and the flow of data throughout the system. Ultimately, the CPU drives data center workloads from basic server operations to powering cutting edge AI applications. 04:10 Lois: To better understand how a CPU achieves these functions and processes information so efficiently, I think it’s important for us to grasp its fundamental architecture. Can you briefly explain the fundamental architecture of a CPU, Orlando? Orlando: When discussing CPUs, you will often hear about sockets, cores, and threads. A socket refers to the physical connection on the motherboard where a CPU chip is installed. A single server motherboard can have one or more sockets, each holding a CPU. A core is an independent processing unit within a CPU. Modern CPUs often have multiple cores, enabling them to handle several instructions simultaneously, thus increasing processing power. Think of it as having multiple mini CPUs on a single chip. Threads are virtual components that allow a single CPU core to handle multiple sequences of instructions, or threads, concurrently. This technology, often called hyperthreading, makes a single core appear as two logical processors to the operating system, further enhancing efficiency. 05:27 Lois: Ok. And how do CPUs process commands? Orlando: Beyond these internal components, CPUs are also designed based on different instruction set architectures, which dictate how they process commands. CPU architectures are primarily categorized into two designs-- Complex Instruction Set Computer or CISC and Reduced Instruction Set Computer or RISC. 
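Stepping back to the socket, core, and thread hierarchy for a moment: the logical processor count an operating system sees is simply the product of those three numbers. A minimal sketch (the server shape below is an invented example, not a figure from the episode):

```python
# Toy model of the socket/core/thread hierarchy described above.
# The counts used here are illustrative assumptions, not real server specs.

def logical_processors(sockets: int, cores_per_socket: int, threads_per_core: int) -> int:
    """Total logical processors the operating system would see."""
    return sockets * cores_per_socket * threads_per_core

# A dual-socket server with 8 cores per CPU and hyperthreading (2 threads per core):
print(logical_processors(sockets=2, cores_per_socket=8, threads_per_core=2))  # 32
```

With hyperthreading enabled, each physical core shows up as two logical processors, which is why the hypothetical dual-socket, 8-core-per-CPU server above presents 32 of them.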
CISC processors are designed to execute complex instructions in a single step, which can reduce the number of instructions needed for a task, but often leads to higher power consumption. These are commonly found in traditional Intel and AMD CPUs. In contrast, RISC processors use a simpler, more streamlined set of instructions. While this might require more steps for a complex task, each step is faster and more energy efficient. This architecture is prevalent in ARM-based CPUs. 06:34 Are you looking to boost your expertise in enterprise AI? Check out the Oracle AI Agent Studio for Fusion Applications Developers course and professional certification—now available through Oracle University. This course helps you build, customize, and deploy AI Agents for Fusion HCM, SCM, and CX, with hands-on labs and real-world case studies. Ready to set yourself apart with in-demand skills and a professional credential? Learn more and get started today! Visit mylearn.oracle.com for more details. 07:09 Nikita: Welcome back! We were discussing CISC and RISC processors. So Orlando, where are they typically deployed? Are there any specific computing environments and use cases where they excel? Orlando: On the CISC side, you will find them powering enterprise virtualization and server workloads, such as bare metal hypervisors and large databases where complex instructions can be efficiently processed. High performance computing that includes demanding simulations, intricate analysis, and many traditional machine learning systems. Enterprise software suites and business applications like ERP, CRM, and other complex enterprise systems that benefit from fewer steps per instruction. Conversely, RISC architectures are often preferred for cloud-native workloads such as Kubernetes clusters, where simpler, faster instructions and energy efficiency are paramount for distributed computing. 
Mobile device management and edge computing, including cell phones and IoT devices where power efficiency and compact design are critical. Cost optimized cloud hosting supporting distributed workloads where the cumulative energy savings and simpler design lead to more economical operations. The choice between CISC and RISC depends heavily on the specific workload and performance requirements. While CPUs are versatile generalists, handling a broad range of tasks, modern data centers also heavily rely on another crucial processing unit for specialized workloads. 08:54 Lois: We’ve spoken a lot about CPUs, but our conversation would be incomplete without understanding what a Graphics Processing Unit is and why it’s important. What can you tell us about GPUs, Orlando? Orlando: A GPU or Graphics Processing Unit is distinct from a CPU. While the CPU is a generalist excelling at sequential processing and managing a wide variety of tasks, the GPU is a specialist. It is designed specifically for parallel compute heavy tasks. This means it can perform many calculations simultaneously, making it incredibly efficient for workloads like rendering graphics, scientific simulations, and especially in areas like machine learning and artificial intelligence, where massive parallel computation is required. In the modern data center, GPUs are increasingly vital for accelerating these specialized, data intensive workloads. 09:58 Nikita: Besides the CPU and GPU, there’s another key component that collaborates with these processors to facilitate efficient data access. What role does Random Access Memory play in all of this? Orlando: The core function of RAM is to provide faster access to information in use. Imagine your computer or server needing to retrieve data from a long-term storage device, like a hard drive. This process can be relatively slow. RAM acts as a temporary high-speed buffer. When your CPU or GPU needs data, it first checks RAM. 
If the data is there, it can be accessed almost instantaneously, significantly speeding up operations. This rapid access to frequently used data and programming instructions is what allows applications to run smoothly and systems to respond quickly, making RAM a critical factor in overall data center performance. While RAM provides quick access to active data, it's volatile, meaning data is lost when power is off. That's where persistent data storage comes in: information that needs to remain available even after a system shuts down. 11:14 Nikita: Let’s now talk about operating systems in cloud data centers and how they help everything run smoothly. Orlando, can you give us a quick refresher on what an operating system is, and why it is important for computing devices? Orlando: At its core, an operating system, or OS, is the fundamental software that manages all the hardware and software resources on a computer. Think of it as a central nervous system that allows everything else to function. It performs several critical tasks, including managing memory, deciding which programs get access to memory and when, managing processes, allocating CPU time to different tasks and applications, managing files, organizing data on storage devices, handling input and output, facilitating communication between the computer and its peripherals, like keyboards, mice, and displays. And perhaps, most importantly, it provides the user interface that allows us to interact with the computer. 12:19 Lois: Can you give us a few examples of common operating systems? Orlando: Common operating system examples you are likely familiar with include Microsoft Windows and MacOS for personal computers, iOS and Android for mobile devices, and various distributions of Linux, which are incredibly prevalent in servers and increasingly in cloud environments. 12:41 Lois: And how are these operating systems specifically utilized within the demanding environment of cloud data centers? 
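Before the conversation turns to operating systems, RAM's role as a high-speed buffer has a close software analogy in memoization: the first access pays the slow-storage cost, and repeats are served from memory. A loose sketch, where the sleep is a stand-in for a slow disk read rather than a real timing figure:

```python
import functools
import time

# Memoization as a rough analogy for RAM as a fast buffer in front of
# slow storage. The delay below is an arbitrary stand-in for a disk read.

@functools.lru_cache(maxsize=128)
def read_record(key: str) -> str:
    time.sleep(0.01)  # simulated slow storage access
    return f"data-for-{key}"

read_record("invoice-42")             # slow path: goes to "storage"
read_record("invoice-42")             # fast path: served from the in-memory cache
print(read_record.cache_info().hits)  # 1
```

The analogy is deliberately loose: hardware RAM is managed by the OS and memory controller, not by application code, but the access pattern is the same idea.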
Orlando: The two dominant operating systems in data centers are Linux and Windows. Linux is further categorized into enterprise distributions, such as Oracle Linux or SUSE Linux Enterprise Server, which offer commercial support and stability, and community distributions, like Ubuntu and CentOS, which are developed and maintained by communities and are often free to use. On the other side, we have Windows, primarily represented by Windows Server, which is Microsoft's server operating system known for its robust features and integration with other Microsoft products. While both Linux and Windows are powerful operating systems, their licensing models can differ significantly, which is a crucial factor to consider when deploying them in a data center environment. 13:43 Nikita: In what way do the licensing models differ? Orlando: When we talk about licensing, the differences between Linux and Windows become quite apparent. For Linux, Enterprise Distributions come with associated support fees, which can be bundled into the initial cost or priced separately. These fees provide access to professional support and updates. On the other hand, Community Distributions are typically free of charge, with some providers offering basic community-driven support. Windows Server, in contrast, is a commercial product. Its license cost is generally included in the instance cost when using cloud providers or purchased directly for on-premise deployments. It's also worth noting that some cloud providers offer a bring your own license, or BYOL program, allowing organizations to use their existing Windows licenses in the cloud, which can sometimes provide cost efficiencies. 14:46 Nikita: Beyond choosing an operating system, are there any other important aspects of data center management? Orlando: Another critical aspect of data center management is how you remotely access and interact with your servers. 
Remote access is fundamental for managing servers in a data center, as you are rarely physically sitting in front of them. The two primary methods that we use are SSH, or Secure Shell, and RDP, or Remote Desktop Protocol. Secure Shell is widely used for secure command line access to Linux servers. It provides an encrypted connection, allowing you to execute commands, transfer files, and manage your servers securely from a remote location. The Remote Desktop Protocol is predominantly used for graphical remote access to Windows servers. RDP allows you to see and interact with the server's desktop interface, just as if you were sitting directly in front of it, making it ideal for tasks that require a graphical user interface. 15:54 Lois: Thank you so much, Orlando, for shedding light on this topic. Nikita: Yeah, that's a wrap for today! To learn more about what we discussed, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. In our next episode, we’ll take a close look at how data is stored and managed. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 16:16 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
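As a footnote to the remote access discussion in this episode: SSH listens on port 22 by default and RDP on port 3389. A small sketch of how an admin tool might assemble connection targets (the host names and helper functions are invented for illustration; nothing is executed remotely):

```python
# Sketch of the two remote-access paths discussed above. The host names
# below are placeholders, and these helpers only build strings/argument
# lists; they do not open any connection.

def ssh_command(user: str, host: str, port: int = 22) -> list[str]:
    """Argument list for an OpenSSH client invocation (default port 22)."""
    return ["ssh", "-p", str(port), f"{user}@{host}"]

def rdp_target(host: str, port: int = 3389) -> str:
    """Host:port string as typically entered into an RDP client (default port 3389)."""
    return f"{host}:{port}"

print(ssh_command("opc", "linux-server.example.com"))
print(rdp_target("win-server.example.com"))
```

Passing the argument list to something like `subprocess.run` would launch the actual SSH session; the RDP string would be pasted into a graphical client instead.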
/episode/index/show/oracleuniversitypodcast/id/38488075
Oracle AI for Fusion Apps
09/23/2025
Oracle AI for Fusion Apps
Want to make AI work for your business? In today’s episode, Lois Houston and Nikita Abraham continue their discussion of AI in Oracle Fusion Applications by focusing on three key AI capabilities: predictive, generative, and agentic. Joining them is Principal Instructor Yunus Mohammed, who explains how predictive, generative, and agentic AI can optimize efficiency, support decision-making, and automate tasks—all without requiring technical expertise. AI for You: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi there! In our last episode, we explored the essential components of the Oracle AI stack and spoke about Oracle’s suite of AI services. Nikita: Yeah, and in today’s episode, we’re going to go down a similar path and take a closer look at the AI functionalities within Oracle Fusion Applications. 00:53 Lois: With us today is Principal Instructor Yunus Mohammed. Hi Yunus! It’s lovely to have you back with us. For anyone who doesn’t already know, what are Oracle Fusion Cloud Applications? Yunus: Oracle Fusion Applications are a suite of cloud-based enterprise applications designed to run your business across finance, HR, supply chain, sales, services and more, all on a unified platform. They are designed to help enterprises operate smarter, faster by embedding AI directly into business processes. 
That means better forecasts in finance, faster hiring decisions in HR, optimized supply chains, and more personalized customer experiences. 01:42 Nikita: And we know they’ve been built for today's fast-paced, AI-driven business environment. So, what are the different functional pillars within Oracle Fusion Apps? Yunus: The first one is ERP, Enterprise Resource Planning, which supports financials, procurement, and project management. It's the backbone of many organizations' day-to-day operations. HCM, or Human Capital Management, handles workforce-related processes such as hiring, payroll, performance, and talent development, helping HR teams operate more efficiently. SCM, or Supply Chain Management, enables businesses to manage their logistics, inventory, suppliers, and manufacturing. It's particularly critical in industries with complex operations like retail and manufacturing. CX, or Customer Experience, covers the full customer life cycle, which includes sales, marketing, and service. These modules help businesses connect with their customers more personally and proactively, whether through targeted campaigns or responsive support. 03:02 Lois: Yunus, what sets Fusion apart? Yunus: What sets Fusion apart is how these applications work seamlessly together. They share data natively and continuously improve with AI and automation, giving you not just tools, but intelligence at scale. Oracle applications are built to be AI first, with a complete suite of finance, supply chain, manufacturing, HR, sales, service, and marketing that is tightly coupled with our industry and data intelligence applications. The easiest and the most effective way to start building your organization’s AI muscle is with AI embedded in Fusion applications. For example, if a customer needs to return a defective product, the service representative simply clicks on Ask Oracle for the answers. 
Since the AI agent is embedded in the application, it has contextual information about the customer, the order, and any special service contract, or any other feature that is required for this process. The AI agent automatically figures out the return policy, including the options to send a replacement product immediately or offer a discount for the inconvenience, and expedite shipping. Another AI agent sends a personalized email confirming details of the return, and a different AI agent creates the replacement order for fulfillment and shipping. Our AI-embedded Fusion Applications can automate an end-to-end business process from service request to return order to fulfillment and shipping and then accounting. These are pre-built and tested so that all the worry and hard work is removed from an implementation point of view. They cover the core workflows. Basically, they address tasks that form part of the organization's core workflow, and users require no technical knowledge in these scenarios. 05:16 Lois: That’s great! So, you don’t need to be an AI expert or a data scientist to get going. Yunus: The outcomes are super fast in business software, and context is everything. Just having the right information isn't enough. This is about having the information in the right place at the right time for it to be instantly actionable. They are ready from day one and can be optimized over time. They are powerful out of the box and only get better with day-to-day processes and performance. 05:55 Are you working towards an Oracle Certification this year? Join us at one of our certification prep live events in the Oracle University Learning Community. Get insider tips from seasoned experts and learn from others who have already taken their certifications. Go to community.oracle.com/ou to jump-start your journey towards certification today! 06:20 Nikita: Welcome back! So, when we talk about the AI capabilities in Fusion apps, I know we have different types. 
Can you tell us more about them? Yunus: Predictive AI is where it all started. These models analyze historical patterns and data to anticipate what might happen next. For example, predicting employee attrition, forecasting demand in supply chain, or flagging potential late payments in finance workflows. These are embedded into business processes to surface insights before action is needed. Then we have got generative AI, which takes this a step further. Instead of just providing insights, it creates content, such as auto-generating job descriptions, summarizing performance reviews, or even crafting draft responses to supplier queries. This saves time and boosts productivity across functions like HR, CX, and procurement. Last but not least, we have got agentic AI, which is the most advanced layer. These agents don't just provide suggestions, they take actions on behalf of the users. Think of an agent that not only recommends actions in a workflow, but also executes them, creating tasks, filing tickets, updating systems, and communicating with stakeholders, all autonomously but under user control. And importantly, many business scenarios today benefit from a blend of these types. For example, an AI assistant in Fusion HCM might predict employee turnover (predictive AI), generate tailored retention plans (generative AI), and initiate outreach or next steps (agentic AI). So, Oracle integrates these capabilities in a harmonious way, enabling users to act faster, personalize at scale, and drive better business outcomes. 08:39 Lois: Ok, let’s get into the specifics. How does Oracle use predictive AI across its Fusion apps, helping businesses anticipate what’s coming and act proactively? 
Yunus: So in HCM, there are things like recommended jobs, where candidates visiting a potential employer’s website encounter an improved online experience: if they have uploaded their resumes, they will be shown job opportunities that match their skills and experience mix. This helps candidates who are unsure what to search for by showing them roles and titles they may not have considered. Time to hire provides an estimate of how long it will take for an HR team to fill an open role. This is really useful not only in terms of planning recruitment, but also in terms of understanding whether you might need some temporary cover and for how long. In supply chain management, predictive AI is leveraged to forecast transit times and estimated times of arrival, enhancing efficiency and optimizing operations. It can flag abnormal patterns in supply or inventory, for example, if a batch of parts is behaving differently in the production line, and predict future demand, helping avoid overstocking or stockouts. In ERP, predictive AI can audit your expenses, plan for future expenses, and enable dynamic discounting for vendors who are likely to accept early payment discounts. It can also speed up reimbursements through automated expense entries. In CX, you have the option to go with adaptive intelligence for sales, which helps representatives prioritize leads by the likelihood that a specific lead will close, helping them focus their time and effort. And predictive scheduling and routing in service delivery ensures that the right resource is assigned to the right customer at the right time, boosting operational efficiency and customer satisfaction. 11:23 Lois: Now let’s shift our focus to generative AI. 
How does Oracle implement generative AI across HCM, ERP, Supply Chain, and CX? Yunus: So, in HCM, generative AI can automatically generate performance review summaries from raw data, saving time for HR teams, and can help you in providing candidates with summaries of their interview process, feedback, and next steps, all auto-generated. With AI assistance, goal creation for employees can be automated, and the system analyzes performance data and trends to propose meaningful and attainable goals, aligning them with organizational objectives and employee capabilities. In SCM, similarly, generative AI helps in drafting summaries of purchase orders. It can automatically create clear, readable synopses of complex negotiations and discussions, making it easier for supply chain managers to analyze supplier proposals, track negotiation processes, and understand key takeaways. It can also help generate item master definitions and summaries, and can generate descriptions for items based on their specifications, helping product teams automatically generate catalog content. With ERP, you can automate the creation of business reports, offering more insights and actionable narratives, rather than just showing the raw data. The AI can provide context, interpretations, and recommendations. AI can also take raw project data and generate a comprehensive, easy-to-read project status report that stakeholders can quickly review. 
In CX, we have service request summarization, which condenses long customer service tickets into summaries, allowing support teams to understand the key points in a fraction of the time, and can also create knowledge base articles directly from common service requests or inquiries, which not only improves internal knowledge management but also empowers customers by enabling self-service. So generative AI can automatically generate success stories or case studies from successful opportunities or sales, which can be used as marketing content or for internal knowledge sharing. 14:20 Nikita: And what about Oracle's Agentic AI? What are its capabilities across the different pillars? Yunus: In HCM, agentic AI handles the end-to-end onboarding experience, from explaining policies to guiding document submissions, even booking orientation sessions, allowing the HR staff to focus on human engagement. It can further support HR teams during performance review cycles by surfacing high-potential employees, pulling in performance data, and recommending next actions like promotions or learning paths. It helps manage time-off requests by checking eligibility and policy constraints and suggesting appropriate substitutes, reducing administrative friction and errors. With SCM, agentic AI in Fusion Applications acts as a real-time assistant to ensure buyers follow procurement policies, reducing compliance risk and manual errors. It can also support sales representatives with real-time insights and next best actions during the quoting or ordering process, improving customer satisfaction and sales performance. With ERP, you can handle document intake, extraction, and routing using Fusion Applications, saving significant time on manual document management across financial functions. AI automates reconciliation tasks by matching transactions, flagging anomalies, and suggesting resolutions. 
It helps you reduce close cycle timelines, continuously analyzes profit margins, and recommends pricing adjustments in your ERP. In CX, agentic AI in Fusion Applications supports staff by instantly compiling full customer histories, orders, service requests, and interactions. It can act like a real-time assistant, summarizing open tickets and resolutions, helping agents take over or escalate without needing to dig through the notes, and dynamically adjusting technician routes based on traffic, priority, or cancellations, increasing field efficiency and customer satisfaction. 17:04 Lois: Thank you so much, Yunus. To learn more about the topics covered today, visit mylearn.oracle.com and search for the AI for You course. Nikita: Join us next week as we cover how AI is being applied across sectors like healthcare, finance, and retail, and tackle the big question: how do we keep these technologies aligned with human values? Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 17:30 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
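The predictive, generative, and agentic layers described in this episode can be caricatured as a three-step pipeline. Everything below is an invented toy: a hand-written risk score instead of a trained model, template text instead of a large language model, and a threshold rule instead of a real agent:

```python
# Toy illustration of the predictive -> generative -> agentic blend
# discussed above. All names, weights, and thresholds are made up.

def predict_attrition_risk(tenure_years: float, recent_rating: int) -> float:
    """'Predictive' step: a hand-written score standing in for a trained model."""
    risk = 0.5 - 0.05 * tenure_years + (0.2 if recent_rating <= 2 else 0.0)
    return max(0.0, min(1.0, risk))

def draft_retention_note(name: str, risk: float) -> str:
    """'Generative' step: template text standing in for an LLM."""
    return f"Retention plan for {name}: flagged at {risk:.0%} attrition risk."

def act(note: str, risk: float, threshold: float = 0.4) -> str:
    """'Agentic' step: take an action when the risk crosses a threshold."""
    return f"TASK CREATED: {note}" if risk >= threshold else "no action"

risk = predict_attrition_risk(tenure_years=1.0, recent_rating=2)
print(act(draft_retention_note("J. Doe", risk), risk))
```

The point of the sketch is the hand-off between layers: the prediction feeds the generated content, which feeds the autonomous action, mirroring the HCM turnover example from the episode.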
/episode/index/show/oracleuniversitypodcast/id/38287680
Oracle's AI Ecosystem
09/16/2025
Oracle's AI Ecosystem
In this episode, Lois Houston and Nikita Abraham are joined by Principal Instructor Yunus Mohammed to explore Oracle’s approach to enterprise AI. The conversation covers the essential components of the Oracle AI stack and how each part, from the foundational infrastructure to business-specific applications, can be leveraged to support AI-driven initiatives. They also delve into Oracle’s suite of AI services, including generative AI, language processing, and image recognition. AI for You: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hey everyone! In our last episode, we discussed why the decision to buy or build matters in the world of AI deployment. Lois: That’s right, Niki. Today is all about the Oracle AI stack and how it empowers not just developers and data scientists, but everyday business users as well. Then we’ll spend some time exploring Oracle AI services in detail. 01:00 Nikita: Yunus Mohammed, our Principal Instructor, is back with us today. Hi Yunus! Can you talk about the different layers in Oracle’s end-to-end AI approach? Yunus: The first base layer is the foundation of AI infrastructure, the powerful compute and storage layer that enables scalable model training and inferences. Sitting above the infrastructure, we have got the data platform. This is where data is stored, cleaned, and managed. 
Without a reliable data foundation, AI simply can't perform. So data is the base of AI, and reliable data supports AI in performing its job. Then, we have AI and ML services. These provide ready-to-use tools for building, training, and deploying custom machine learning models. Next to the AI/ML services, we have generative AI services. This is where Oracle enables advanced language models and agentic AI tools that can generate content, summarize documents, or assist users through chat interfaces. Then, we have the top layer, which is the applications layer: things like Fusion applications or industry-specific solutions where AI is embedded directly into business workflows for recommendations, forecasting, or customer support. Finally, Oracle integrates with a growing ecosystem of AI partners, allowing organizations to extend and enhance their AI capabilities even further. In short, Oracle doesn't just offer AI as a feature. It delivers it as a full-stack capability, from infrastructure to the application layer. 02:59 Nikita: Ok, I want to get into the core AI services offered by Oracle Cloud Infrastructure. But before we get into the finer details, broadly speaking, how do these services help businesses? Yunus: These services make AI accessible, secure, and scalable, enabling businesses to embed intelligence into workflows, improve efficiency, and reduce human effort in repetitive or data-heavy tasks. And the best part is, Oracle makes it easy to consume these through application programming interfaces, or APIs, software development kits, or SDKs, and integration with Fusion Applications. So, you can add AI where it matters without needing a data scientist team to do that work. 03:52 Lois: So, let’s get down to it. The first core service is Oracle's Generative AI service. What can you tell us about it? Yunus: This is a fully managed service that allows businesses to tap into the power of large language models. 
You can work with these models as-is or fine-tune them into a well-defined custom model. You can use these models for a wide range of use cases like summarizing text, generating content, answering questions, or building AI-powered chat interfaces. 04:27 Lois: So, what will I find on the OCI Generative AI Console? Yunus: The OCI Generative AI Console highlights three key components. The first one is dedicated AI clusters. These are GPU-powered environments used to fine-tune and host your own custom models. They give you control and performance at scale. The second is custom models. You can take a base language model and fine-tune it using your own data, for example, company manuals, HR policies, or customer interactions, to create a model that speaks your business language. And last but not least, the endpoints. These are the interfaces through which your applications connect to the model. Once deployed, your app can query the model securely and at scale, and you don't need to be a developer to get started. Oracle offers a playground, a no-code environment where you can try out models, adjust parameters, and test responses interactively. So overall, the Generative AI service is designed to make enterprise-grade AI accessible and customizable, fitting directly into business processes, whether you are building a smart assistant or automating content generation. 06:00 Lois: The next key service is OCI Generative AI Agents. Can you tell us more about it? Yunus: OCI Generative AI Agents combines a natural language interface with generative AI models and enterprise data stores to answer questions and take actions. The agent remembers the context, uses previous interactions, and retrieves deeper product-specific details. These agents aren't just static chatbots. 
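The context retention Yunus describes, an agent carrying earlier turns of a conversation into later answers, can be illustrated with a toy sketch. This is not the OCI Generative AI Agents API; the class and method names here are hypothetical, and a real agent would ground its answers in enterprise data stores rather than echo the question.

```python
# Toy sketch of multi-turn context retention, the behavior described for
# OCI Generative AI Agents. All names here are hypothetical illustrations.
class ToyAgent:
    def __init__(self):
        self.history = []  # prior questions, carried into every later turn

    def ask(self, question: str) -> str:
        # A real agent would combine the history with enterprise data to
        # ground its answer; here we only record the turn and echo it back.
        self.history.append(question)
        return f"Answer to turn {len(self.history)}: {question}"
```

A follow-up like "Which one has the best battery life?" only makes sense because the agent still holds the earlier "What laptops do we sell?" turn, which is exactly the multi-turn behavior a static chatbot lacks.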
They are context aware, grounded in business data, and able to handle multi-turn, follow-up queries with relevant, accurate responses, driving productivity and decision-making across departments like sales, support, or operations. 06:54 Oracle University’s Race to Certification 2025 is your ticket to free training and certification in today’s hottest tech. Whether you’re starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That’s education.oracle.com/race-to-certification-2025. 07:37 Nikita: Welcome back! Yunus, let’s move on to the OCI Language service. Yunus: OCI Language helps businesses understand and process natural language at scale. It uses pretrained models, which means they are already trained on large industry datasets and are ready to be used right away without requiring AI expertise. It detects over 100 languages, including English, Japanese, Spanish, and more. This is great for global businesses that receive multilingual inputs from customers. It also identifies sentiment for different aspects of a sentence. For example, in a review like, “The food was great, but the service sucked,” OCI Language can tell that food has a positive sentiment while service has a negative one. This is called aspect-based sentiment analysis, and it is more insightful than just labeling the entire text as positive or negative. It can also identify key phrases representing important ideas or subjects, extracting the words or terms that capture the core message. Key phrases help automate tagging, summarizing, or even routing of content like support tickets or emails. 
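To make aspect-based sentiment concrete, here is a deliberately tiny lexicon-based sketch that labels each aspect from the words in its own clause. OCI Language uses pretrained models, not a word list; the lexicon, function, and splitting logic below are purely illustrative assumptions.

```python
# Toy aspect-based sentiment: each aspect gets the sentiment of the
# clause it appears in, instead of one label for the whole review.
# This is an illustration only, not how OCI Language works internally.
POSITIVE = {"great", "excellent", "good"}
NEGATIVE = {"sucked", "bad", "terrible"}

def aspect_sentiment(text: str, aspects: list[str]) -> dict[str, str]:
    """Assign a per-aspect sentiment based on words in the same clause."""
    result = {}
    for clause in text.lower().replace(",", ".").split("."):
        words = set(clause.replace("!", "").split())
        for aspect in aspects:
            if aspect in words:
                if POSITIVE & words:
                    result[aspect] = "positive"
                elif NEGATIVE & words:
                    result[aspect] = "negative"
                else:
                    result[aspect] = "neutral"
    return result
```

Running this on Yunus's example review with the aspects "food" and "service" yields a positive label for food and a negative one for service, which is the extra granularity aspect-based analysis adds over document-level sentiment.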
In real life, businesses are using this for customer feedback analysis, support ticket routing, social media monitoring, and even regulatory compliance. 09:21 Nikita: That’s fantastic. And what about the OCI Speech service? Yunus: OCI Speech is an AI service that transcribes speech to text. Think of it as an AI-powered transcription engine that listens to spoken English, whether in audio or video files, and turns it into usable, searchable, readable text. It provides timestamps, so you know exactly when something was said, a valuable feature for reviewing legal discussions, media footage, or compliance audits. OCI Speech even distinguishes between different speakers. You don't need to train anything from scratch. It is a pretrained model hosted behind an API. Just send your audio to the service, and you get accurate, timestamped text back in return. 10:17 Lois: I know we also have a service for object detection… called OCI Vision? Yunus: OCI Vision uses pretrained deep learning models to understand and analyze visual content, much as a human might. You can upload images or videos, and the AI can tell you what is in them. There are two primary use cases for OCI Vision. One is object detection. Say you have a red car: OCI Vision is not just identifying that it's a car. It is detecting and labeling parts of the car too, like the bumper, the wheels, the design components. This is critical in industries like manufacturing, retail, or logistics. For example, in quality control, OCI Vision can scan product images to detect missing or defective parts automatically. Then we have image classification. This is useful in scenarios like automated tagging of photos, managing digital assets, and classifying the scene or context of an image. 
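The quality-control example above, scanning product images for missing parts, amounts to checking detected labels and confidences against a required parts list. The sketch below shows that check over a hand-written detection list; the field names, confidence threshold, and output shape are assumptions for illustration, not the actual OCI Vision response schema.

```python
# Toy quality-control check over object-detection output, in the spirit
# of the manufacturing example above. The detection dicts, field names,
# and 0.8 threshold are hypothetical, not the OCI Vision API schema.
def find_missing_parts(detections, required, min_confidence=0.8):
    """Return required part labels that were not confidently detected."""
    seen = {d["label"] for d in detections if d["confidence"] >= min_confidence}
    return sorted(required - seen)

detections = [
    {"label": "car", "confidence": 0.99},
    {"label": "wheel", "confidence": 0.95},
    {"label": "bumper", "confidence": 0.55},  # too uncertain: flag as missing
]
missing = find_missing_parts(detections, {"wheel", "bumper"})
```

Here the low-confidence bumper detection is treated as a missing part, so `missing` contains only `"bumper"`, the kind of signal that could trigger a manual inspection step.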
So basically, OCI Vision is a fully managed service; no complex model training is required, and it's available via an API. You can also define your own custom models for your environment. 11:51 Nikita: And the final service is related to text and called OCI Document Understanding, right? Yunus: So OCI Document Understanding allows businesses to automatically extract structured insights from unstructured documents like invoices, contracts, receipts, resumes, and other business documents. 12:13 Nikita: And how does it work? Yunus: OCI reads the content from the scanned document. The OCR is smart: it recognizes both printed and handwritten text. Then it determines what type of document it is, so text recognition is followed by document classification, identifying, for example, a purchase order, a bank statement, or a medical report. If your business handles documents in multiple languages, the AI can help with language detection too, which helps you in routing or translating that content. Many documents contain structured data in table format, think pricing tables or line items. OCI will help you extract these with high accuracy for reporting or feeding into ERP systems. And finally, I would say, key value extraction. It pulls out critical business values like invoice numbers, payment amounts, or customer names from fields that may not always follow a fixed format. So, this service reduces the need for manual review, cuts down processing time, and ensures high accuracy for your systems. 13:36 Lois: What are the key takeaways our listeners should walk away with after this episode? Yunus: The first one: Oracle doesn't treat AI as just a standalone tool. Instead, AI is integrated from the ground up. 
Whether you're talking about infrastructure, data platforms, machine learning services, or applications like HCM, ERP, or CX. In the real world, Oracle AI services prioritize data management, security, and governance, all essential for enterprise AI use cases. So, it is about trust. Can your AI handle sensitive data? Can it comply with regulations? Oracle builds its AI services with a strong foundation in data governance, robust security measures, and tight control over data residency and access. This makes Oracle AI especially well-suited for industries like healthcare, finance, logistics, and government, where compliance and control aren't optional. They are critical. 14:44 Nikita: Thank you for another great conversation, Yunus. If you’re interested in learning more about the topics we discussed today, head on over to mylearn.oracle.com and search for the AI for You course. Lois: In our next episode, we’ll get into Predictive AI, Generative AI, and Agentic AI, all with respect to Oracle Fusion Applications. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 15:10 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.