Oracle University Podcast
Oracle University Podcast delivers convenient, foundational training on popular Oracle technologies such as Oracle Cloud Infrastructure, Java, Autonomous Database, and more to help you jump-start or advance your career in the cloud.
Modernize Your Business with Oracle Cloud Apps – Part 1
07/22/2025
Join hosts Lois Houston and Nikita Abraham, along with Cloud Delivery Lead Sarah Mahalik, as they unpack the core pillars of Oracle Fusion Cloud Applications—ERP, HCM, SCM, and CX. Learn how Oracle’s SaaS model, Redwood UX, and built-in AI are reshaping business productivity, adaptability, and user experience. From quarterly updates to advanced AI agents, discover how Oracle delivers agility, lower costs, and smarter decision-making across departments. Oracle Fusion Cloud Applications: Process Essentials Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi everyone! In our last two episodes, we explored the Oracle Cloud Success Navigator platform. This week and next, we’re diving into Oracle Fusion Cloud Applications with Sarah Mahalik, a Cloud Delivery Lead here at Oracle. We’ll ask Sarah about Oracle’s cloud apps suite, the Redwood design system, and also look at some of Oracle’s AI capabilities. 01:02 Nikita: Yeah, let’s jump right in. Hi Sarah! How does Oracle approach the SaaS model? Sarah: Oracle's Cloud Applications suite is a complete enterprise cloud designed to modernize your business. Our cloud suite of SaaS applications, which includes Enterprise Resource Planning, or ERP, Supply Chain Management, or SCM, Human Capital Management, or HCM, and Customer Experience, or CX, brings consistent processes and a single source of truth across the most important business functions. At Oracle, we own all of the technology stacks that power our suite of cloud applications. Oracle Cloud Applications are built on Oracle Cloud Infrastructure and ensure the performance, resiliency, and security that enterprises need. Your business no longer needs to worry about maintaining a data center, hardware, operating systems, database, network, or all of the security. With deep integrations, a common data model, and a unified user interface, these applications help improve customer engagement, increase agility, and accelerate response to change. Oracle's Cloud Applications are updated quarterly with new features and improvements. These updates are based on our deep understanding of customer's functional needs, as well as modern technologies such as artificial intelligence, machine learning, blockchain, and digital assistants. Expectations for user experience only go up. Oracle's Redwood User Experience methodology ensures those expectations are matched and exceeded by including powerful and predictive search, a look and feel that actually helps users see what they need to in the order they need to see it, and by providing conversational and micro-interactions. Oracle, as a SaaS provider, puts the customer first by having enough dedicated resources to ensure zero downtime and increasing the speed of implementation by eliminating much of the hardware and software setup activity. 02:59 Nikita: What are the advantages of adopting Oracle Cloud Apps? 
Sarah: First off, Oracle provides automatic quarterly updates, and they’re usable immediately. Customers can focus on leveraging the new functionality instead of spending cycles on installing it. There’s much more accessibility because Oracle hosts the heavy part of the applications and customers access it via thin clients. The applications can be used from nearly anywhere and on a wide range of devices, including smartphones and tablets. Another great advantage is speed and agility. A lot of the benefits you see here result from Oracle’s provider model. That means customers aren’t spending time on customization, application testing, and report development. Instead, they work on the much lighter and faster tasks of configuration, validation, and leveraging embedded analytics. And finally, it’s just better economics. Because of the pricing model, it is easy to compare against the cost of an on-premises implementation. Upfront costs are almost always lower, and overall operational costs and risk are usually lower as well. This translates to better total cost of ownership and improved overall economics and agility for your business. 04:10 Lois: Sarah, in your experience, why do customers love Oracle Cloud Apps? Sarah: At Oracle, we empower you with embedded AI that drives real breakthroughs in productivity and efficiency, helping you stay ahead of the curve. With the power of Oracle Cloud Infrastructure, you get the best of performance, security, and scalability, making it the perfect foundation for your business. Our modern user experience is intuitive and designed with your needs in mind, while our relentless innovation is focused on what truly matters to you. Above all, our commitment to your success is unwavering. We’re here to support you every step of the way, ensuring you thrive and grow with Oracle. 04:49 Lois: Let’s talk about Oracle’s Redwood design system. What is it? And how does it enhance the user experience? Sarah: Redwood is the name of Oracle’s next-generation user experience. The Redwood design system is a collection of prefabricated components, templates, and patterns to enable developers to quickly create very sophisticated and polished interactions that are upgrade safe. It provides a consumer-grade-plus experience, where you have high-quality functionality that can be used across multiple devices. You have access to insightful data readily at your fingertips for quick access and decision making, with the option to personalize your application to create your own state-of-the-art experience. Processes and entry time will now be more efficient and streamlined by having fewer clicks and faster downloads, which will lead to high productivity in areas that matter the most. The Redwood design is intelligent, meaning you have access to AI, where you will receive recommendations and guidance based on your preferences and business processes. It’s also adaptable, allowing you to use the same tools to create new experiences by using the Business Rule Framework with modern UX components. Oracle’s Redwood user experience will help you to be more productive, efficient, and engaged with a highly personalized experience. 06:11 Are you keen to stay ahead in today's fast-paced world? We’ve got your back! Each quarter, Oracle rolls out game-changing updates to its Fusion Cloud Applications. And to make sure you’re always in the know, we offer New Features courses that give you an insider’s look at all of the latest advancements. Don't miss out! Head over to mylearn.oracle.com to get started. 
06:37 Nikita: Welcome back! Sarah, you said the Redwood design system is adaptable. Can you elaborate on what you mean by that? Sarah: In a nutshell, this means that developers can extend their applications using the same development platform that Oracle Cloud Applications are built on. Oracle Visual Builder Studio is a robust application development platform that enables users to rapidly create and extend web, mobile, and progressive web interfaces using a visual development environment. It streamlines application development and reduces coding, while also providing flexibility and support for popular build and testing frameworks. With Oracle Visual Builder Studio, users can build apps for the web, create progressive web apps, and develop on-device mobile apps. The tool also offers access to REST services and allows for planning and managing development processes, as well as managing the code lifecycle. Additionally, Oracle Visual Builder Studio provides hosting for apps along with easy publishing and version management. Changes made using Visual Builder Studio are called Application Extensions. Visual Builder Studio Express Mode has two key components: Business Rules and Constants. Use Business Rules, which is the Redwood equivalent to Transaction Design Studio for responsive pages, to leverage delivered best practices or create your own rules based on various criteria, such as country and business unit. Make fields and regions required or optional, read-only or editable, and show or hide fields in regions, depending on specific criteria. Use the various delivered Constants to customize your Redwood pages to best fit your specific business needs, such as hide the evaluation panel and connections or reorder the columns in the person search result table. 08:23 Lois: Sarah, here’s a question that's probably on everyone's mind—what about AI for Fusion Applications? Sarah: Oracle integrates AI into Fusion Applications, enabling faster, better decision making and empowering your workforce. With both classic and generative AI embedded, customers can access AI-driven insights seamlessly within their everyday software environment. In HCM, AI helps to automate routine tasks. It's also used to attract and manage talent more efficiently by doing things like reducing the time to hire and performing automatic skill matching for job vacancies. It also uses some of that skill matching for existing employees to optimize their engagement, improve productivity, and maximize career growth. All the while, it provides suggested actions so that those tasks are quick, accurate, and easy. In SCM, AI helps predict order cycle times by analyzing historical data and trends, allowing for more accurate planning. It also generates item descriptions automatically and uncovers potential suppliers by analyzing market data, thereby improving efficiency and sourcing decisions. With ERP, it's all about delivering efficiencies and improving strategic contributions. You can use AI to automate some of the core processes and provide guided actions for users in the rest of the processes. This is our recipe for improved efficiency and reduced human error. For example, in Payables, you can use AI features to accelerate and simplify invoice processing and identify duplicate transactions. 
And in CX, AI helps you to identify which sales leads offer the greatest potential, provides real-time news alerts, and then recommends actions to ensure that reps are working on the right opportunity at the right time and improving conversion to sale. 10:11 Lois: Everyone’s heard about AI agents, but I’ve always wondered how they work. Sarah: AI agents are a combination of large language models and other advanced technologies that interact with their environments, automate complex tasks, and collaborate with employees in real time. Reasoning capabilities in these LLMs differentiate AI agents from the brittle rules-based automation of the past. Since they can make judgment calls, AI agents can create action plans and manage workflows, either independently or with human supervision. At the core of their functionality is the capability to learn from previous interactions, use data from internal systems, and collaborate with both people and other agents. This ability to continuously adapt makes AI agents particularly valuable for complex business environments, where flexibility and scalability are key. 11:01 Nikita: And how do they work specifically in Fusion Apps? Sarah: Oracle Fusion AI agents are autonomous assistants designed to help organizations streamline operations, improve decision making, and reduce manual workloads. They can assist with simple or complex tasks and work across departments. 11:19 Lois: Sarah, what are the different types of AI agents? Sarah: Functional agents act as digital assistants for different personas within the enterprise and perform domain-specific tasks. Supervisory agents manage other agents, overseeing complex workflows and making decisions on whether human intervention is needed. Utility agents perform routine low-risk tasks such as retrieving data, sending notifications, or running reports. They're often optimized to help with specific roles, so an AI agent or collection of agents might act as a finance clerk, hiring manager, or a customer service representative. 11:54 Nikita: Can you give us some real-world use cases? Sarah: In human resources, agents will assist employees with benefit inquiries and policy clarifications. In finance, agents will automate invoice approvals and help optimize financial workflows. In supply chain management, a field service agent can guide technicians through repairs by providing real-time diagnostic data, troubleshooting steps, and automating orders for parts. In customer experience, the contracts researcher agent enables sales teams to automate routine contract workflows and approvals so they can focus on selling rather than administrative tasks. Oracle Fusion AI agents represent a leap beyond traditional AI. They don't just automate, they collaborate with human workers, making AI agents more than just tools. By integrating advanced AI within business systems, Oracle continues to lead the way in improving productivity and operational efficiency. As AI technology evolves, expect to see even more sophisticated AI agents capable of managing entire business processes autonomously, giving your team the freedom to focus on strategic, high-impact activities. 13:04 Nikita: Thank you so much for taking us through all that, Sarah. We’re really excited to have you back next week to continue this discussion. Lois: And if you liked what you heard today, head over to mylearn.oracle.com and take a look at the free Oracle Fusion Cloud Applications Process Essentials courses to learn more. 
Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 13:27 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Oracle Cloud Success Navigator – Part 2
07/15/2025
Hosts Lois Houston and Nikita Abraham continue their discussion with Mitchell Flinn, VP of Program Management for the CSS Platform, by exploring how Oracle Cloud Success Navigator helps teams align faster, reduce risk, and drive value. Learn how built-in quality benchmarks, modern best practices, and Starter Configuration tools accelerate cloud adoption, and explore ways to stay ahead with a mindset of continuous innovation. Oracle Cloud Success Navigator Essentials: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ----------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and joining me today is Nikita Abraham, Team Lead of Editorial Services. Nikita: Hi everyone! In our last episode, we gave you a broad overview of the Oracle Cloud Success Navigator platform—what it is, how it works, and its key features and benefits. Today, we’re continuing that discussion with Mitchell Flinn. Mitchell is VP of Program Management for Oracle Cloud Success Navigator, and in this episode, we’re going to ask him to walk us through some of the core components of the platform that we couldn’t get into last week. 01:04 Lois: Right, Niki. Hi Mitchell! You spoke a little about Cloud Quality Standards in our last episode. But how do they contribute to or align with the vision of Oracle Cloud Success Navigator? Mitchell: The vision for Navigator is to support customers throughout every phase of their cloud journey, providing timely advice to help improve outcomes, reduce cost, and increase overall value. This model is driven through Oracle Cloud Quality Standards. These standards are intended to improve the transparency and collaboration between customer, partner, and Oracle members of a project. This is a project blueprint that includes the ability for business and IT users to align on project coordination and expectations, and ultimately drive tighter alignment. Tracking key milestones and activities can help visualize and measure progress. You can build assessments and help answer questions so that at the right time, you have the right resources to make the right decisions for an organization. Cloud Quality Standards represent the key milestone dates and accomplishments along the journey. You can leverage these to increase project transparency, reduce risk, and increase the overall collaboration. Cloud Quality Standards are a proactive list of must-haves leveraged by customers, partners, and Oracle. They're a collection of knowledge and lessons learned from thousands of implementations globally. Cloud Quality Standards are partner-agnostic and complementary to all SI methodologies and tool sets. And they've been identified to address delivery issues before they happen and reduce the risk of implementations. 02:34 Lois: Ok, and a crucial component of Oracle Cloud Success Navigator is Oracle Modern Best Practice, or OMBP, right? Can you tell us more about what this is? 
Mitchell: Oracle Modern Best Practices are based on distilled knowledge of our customers' needs gained from 10,000 successful delivery projects. They illustrate the business process components and their optimization to take advantage of the latest Oracle applications and technologies. Oracle Modern Best Practices comprise industry best practices and processes powered by Oracle technology. Engineered in Fusion Applications, OMBPs simplify and streamline workflows. They enable organizations to leverage modern, efficient, and scalable practices. As we align our assets with OMBPs, there's a stronger connection between global process owners and business process innovation within a customer's organization. 03:21 Nikita: And how do they help deliver end-to-end success for businesses? Mitchell: An OMBP approach involves a digital business process, so evolving and adapting in real time to changing market dynamics. End-to-end across the organization, so we're breaking down silos and ensuring there's operational agility and a seamless collaboration between departments. We're leveraging emerging technologies, so utilizing AI and other cutting-edge technologies to automate routine tasks, enabling greater human creativity and unlocking new value and insights. And radically superior results, driving a significant improvement in measurable outcomes. OMBPs are dynamic, and when regularly updated, they meet evolving customer needs and technologies. They're trusted, tested, and validated by Oracle experts and publicly available for download on oracle.com. If you go to oracle.com and search for modern best practice, you'll find a more detailed introduction to Oracle Modern Best Practices. You'll also find Oracle Modern Best Practice business processes for domains such as ERP, EPM, Supply Chain, HCM, and Customer Experience. We also have Oracle Modern Best Practices for specific industries. 04:25 Nikita: What are the key benefits of OMBP? Mitchell: Revolutionary new technologies are available for organizations and business leaders. You might wonder how existing business processes are optimized with old technology and how they can drive the best solution. With more emerging technologies reaching commercial availability, existing best practices become outdated. And to stay competitive, organizations need to continuously innovate and incorporate new technology within their best practices. In Oracle's definition of OMBPs, common business processes are considered historic input, but we also factor in what could be done with new technologies. And based on this approach, Oracle Modern Best Practices help us evolve with organizational needs as market dynamics change and work end to end across organizations to eliminate department silos and ensure agility. They allow us to use technologies such as AI to automate the mundane and unlock human creativity for new value and insight. They also allow us to incorporate next-generation digital technologies to enable radically superior, measurable results. To achieve these, Oracle makes use of key differentiators such as analytics, AI, and machine learning. Analytics, also known as business intelligence, provides you with information in the form of pre-built dashboards, showing your key metrics in real time. Embedded analytic capabilities enable you to monitor business performance and make better decisions. 05:44 Lois: And what about AI and machine learning? Mitchell: These focus on building systems that learn or improve performance based on the data that they consume. 
Smart digital assistants, recommendation engines, predictive analytics, they're all used within AI and machine learning to help organizations automate operations and drive innovation, and ultimately make better decisions faster. 06:02 Nikita: Mitchell, let’s move on to the Starter Configuration. Can you explain what it is and how it helps during a cloud implementation? Mitchell: Starter Configuration is a predefined configuration of Oracle Cloud Applications aligned with the Oracle Modern Best Practices. It's very comprehensive and includes business processes in several domains, such as ERP, HCM, Supply Chain, EPM, and so on. It includes sample, master, and transactional data, and predetermined usernames, which align with the same use cases you saw in Oracle Modern Best Practices in Cloud Success Navigator. Customers can request deployment of a Starter Configuration into their test environment. Oracle will run an automated process for replicating the configuration, master data, transaction data, and predetermined usernames from Oracle to the Oracle Cloud Applications test environment of the customer's choice. For the best user experience, customers can add a basic level of personalization, such as their company name and a limited number of employees, suppliers, customers, and a few other items. The Starter Configuration is delivered with predetermined step guides for a comprehensive set of use cases. Using these, customers can replay the same use cases they've seen in Oracle Modern Best Practices and Success Navigator. In the customer's Oracle Cloud Applications test environment, we've been able to enable in-app guidance using Oracle Guided Learning. This helps make it easier to navigate through the business processes supported by the application. Oracle can deploy the Starter Configuration in days, not weeks or months, which means the implementation partners don't need to invest time and effort in the first configuration of an Oracle Cloud Application environment before they can even get the chance to show it to a customer. In turn, once the Starter Configuration is deployed, it's ready to be used for solution familiarization and design activities. Using the Starter Configuration of Oracle Cloud Applications early in the cloud journey will offer several benefits to customers. 08:00 Lois: What are these benefits? Mitchell: First, it helps to cut down on environment configuration time from several weeks or months to potentially just days. Next, implementation partners can engage stakeholders early and get them familiar with Oracle Cloud Applications, especially those who may never have participated in the sales cycle. Because customer stakeholders actually see what Oracle Cloud solutions might look like in the future, it becomes easier to make design decisions. The Starter Configuration provides hands-on familiarization with Oracle Cloud Applications and Oracle Leading Practices. This makes it easier to understand what leading practices and standard features can be adopted to support future business processes. It also reduces the level of customization and accelerates implementation. 08:45 Transform the way you work with Oracle Database 23ai! This cutting-edge technology brings the power of AI directly to your data, making it easier to build powerful applications and manage critical workloads. Want to learn more about Database 23ai? Visit mylearn.oracle.com to pick from our range of courses and enroll today! 09:10 Nikita: Welcome back! 
Mitchell, how can customers and implementation partners best use the Starter Configuration? Mitchell: Customers and implementation partners will work in close collaboration to make the implementation successful. Hence, Oracle recommends that customers and implementation partners discuss how best to use the Starter Configuration. This is one of the key activities in the mobilize stage of the cloud journey. First, Oracle recommends using the Starter Configuration to prepare the customer stakeholders for the project. Customer stakeholders who participate in the project should go to the Oracle Modern Best Practice section of the Success Navigator platform in order to learn more about the modern best practices, business processes, personas, leading practices, and use cases. The project team can request the Starter Configuration early in the project to allow customer stakeholders to get hands-on experience with performing use cases in the Starter Configuration. Customer stakeholders will perform use cases in the Starter Config to learn more about modern best practices. They'll use the step-by-step guides and Guided Learning to easily perform the use cases within the Starter Configuration. This is how they'll visualize use cases in Oracle Cloud Applications and get a good understanding of Oracle Modern Best Practices. Next, in the mobilize stage of the journey, the project team can use the Starter Configuration to visualize the solution and make design decisions with confidence. First, by requesting the Starter Configuration, implementation partners can engage stakeholders early and create the space to get familiar with Oracle Applications. This applies especially to those who may not have participated during the sales cycle. You could personalize the Starter Configuration to enhance the user experience and help the customer connect to the application and, for example, change the company name, the logo, a few supplier names, customer names, employee names, etc. And implementation partners are going to be able to run sessions to familiarize the customer with modern best practices and show how cloud applications support use cases. For structured guidance, the implementation partners can use the step guides. These include screenshots of OGL within cloud applications environments. And you could run design workshops and use the Starter Configuration to show and explain which design decisions must take place to define a customer-centric configuration. Finally, you can show use cases that help you explain what the impact of design decisions might be. 11:20 Lois: Mitchell, before we wrap up, can you take us through how the Release Readiness features facilitate innovation? Mitchell: In order to innovate with the Release Readiness features, it's important to learn about the new features in a one-stop shop, and then connect with the capability. The first item is to be able to find and familiarize yourself with the content as it exists within the Release Notes. From there, it's important to be able to actually experience those items by looking at the text, and pictures, and the Oracle University videos that we provide in the Feature Overviews, as well as additional capabilities that will be coming with the Navigator in the preview environment, where you can get hands-on in a demo experience through Cloud Success Navigator. Furthermore, it's important for you to be able to explore across theme-based items, which we call Adoption Centers, currently available for AI and Redwood. 
This gives you the ability to span across Release Notes and different releases in order to understand the themes of the trends around AI and Redwood, and how those capabilities in our technology can advance your innovation in the Cloud. And finally, you need to be able to understand those opportunities based off of business processes, data insights, and industry benchmarks. That way, you can understand the capabilities as they exist, not just for your business specifically, but in the context of the broader industry and technology trends. From there, it's important for you to then think about your ability to collaborate to drive continuous innovation. We want to be able to leverage Cloud Success Navigator to drive collaboration to increase confidence across all members of the project team, whether it be you as a customer, our partners, or the Oracle team. It should also be able to drive an increased efficiency within decision making, driving greater value and readiness as you think about the proposed adoption changes. Finally, we want to think about the ability to reduce cycles related to features and decisions so that you can more quickly adapt, and adjust, and consume innovations as they're produced on a quarterly basis. 13:09 Nikita: I think we can end with that. Thank you so much, Mitchell, for taking us through the Navigator platform. Lois: And if you liked what you heard today, head over to mylearn.oracle.com and take a look at the Oracle Cloud Success Navigator Essentials course to learn more. It’s available for free! Until next time, this is Lois Houston…. Nikita: And Nikita Abraham, signing off! 13:33 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Oracle Cloud Success Navigator – Part 1
07/08/2025
In this episode of the Oracle University Podcast, Lois Houston and Nikita Abraham are joined by Mitchell Flinn, VP of Program Management for the CSS Platform, to explore Oracle Cloud Success Navigator. This interactive platform is designed to help customers optimize their cloud journey, offering best practices, AI tools, and personalized guidance from implementation to innovation. Don’t miss this insider look at maximizing your Oracle Cloud investment! Oracle Cloud Success Navigator Essentials: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ---------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead of Editorial Services with Oracle University, and joining me is my co-host Lois Houston, Director of Innovation Programs. Lois: Hi everyone! Today is the first of a two-part special on Oracle Cloud Success Navigator. This is a tool that provides you with a clear path to cloud transformation and helps you get the most out of your cloud investment. 00:52 Nikita: And to tell us more about this, we have Mitchell Flinn joining us today. Mitchell is VP of Program Management for Oracle Cloud Success Navigator. In this episode, we’ll ask Mitchell about the ins and outs of this powerful platform, its benefits, key features, and the role it plays in streamlining cloud journeys. Lois: Yeah. Hi Mitchell! What is Oracle's approach to cloud technology and customer success, and how does the Cloud Success Navigator support this philosophy? 01:22 Mitchell: Oracle has an amazing amount of industry-leading enterprise cloud technologies across our entire portfolio. All of this is at your disposal. That, coupled with the sole focus of your success, forms the crux of the company's transformational journey. In other words, we put your success at the heart of everything we do. For each organization, the path to achieve maximum value from our technology is unique. Success Navigator reflects our emphasis on being there with you throughout the entire journey to steer you to success. 01:53 Nikita: Ok, what about from a business’s viewpoint? Why would they need the Navigator? Mitchell: Businesses across every industry are moving mission-critical applications to the cloud. However, business leaders understand that there's no one-size-fits-all model for cloud development and deployment. Some fundamentals for success are your need to ensure new technologies are seamlessly integrated into day-to-day operations and continually optimize to align with evolving business requirements. You must ensure stakeholder visibility through the journey with updates at every stage. Building system efficiencies into other key tasks, which has to be done at the forefront when considering your cloud transformation. You also need to quickly identify risks and address them during the implementation process and beyond. Beyond the technical execution, cloud deployments also require significant process and organizational changes to ensure that adoption is aligned with business goals and delivers tangible benefits. 
Moreover, the training process for new features after cloud adoption can be an organization wide initiative that needs special attention. These requirements or more can be addressed through Oracle Cloud Success Navigator, which is a new interactive digital platform to guide you through all stages of your cloud journey. 03:09 Lois: Mitchell, how does Cloud Success Navigator platform enhance the user experience? How does it support customers at different stages of their cloud journey? Mitchell: Platform is included for free for all cloud application customers. And core to Success Navigator is the goal of increasing transparency among customers, partners in the Oracle team, from project kickoff through quarterly releases. Included in the platform are implementation best practices, Oracle Modern Best Practices focused on solutions provided by our applications, and guidance on living within the cloud. Success Navigator supports you for every stage of your Oracle journey. You can first get your bearings and understand what's possible with your cloud solution using preconfigured starter environments to support your design decisions. It helps you chart a proven course by providing access to Oracle expertise and Oracle Modern Best Practices, so you can use cloud quality standards to guide your implementation approach. You can find value from quarterly releases using AI assistants and preview environments to experience and adopt latest features that matter to you. And you can blaze new trails by building your own cloud roadmap based on your organization's goals, keeping you focused on the capabilities you need for day-to-day and the road ahead. 04:24 Nikita: How does the Navigator cater to the needs of all the different customers? Mitchell: For customers just getting started with Oracle implementations, Navigator provides a framework with success criteria for each stakeholder involved in the implementation, and provides recommended milestones and checklists to keep everyone on track. For existing customers and experienced cloud customers thriving in the cloud, it provides contextually relevant insights based on your cloud landscape. It prepares you for quarterly releases and preview environments, and enables the use of AI and optimization within your cloud investment. For our partners, it allows Oracle to work in a collaborative way to really team up for our customers. Navigator gives transparency to all stakeholders and helps determine what success criteria we should be thinking about at each milestone phase of the journey. And it also helps customers and partners get more out of their Oracle investment through a seamless process. 05:20 Lois: Right. Mitchell, can you elaborate on the use cases of the platform? How does it address challenges and requirements during cloud implementations? Mitchell: We can create transparency and alignment between you, your partner, and the Oracle team using shared view of progress measured through standard criteria. We can incorporate recommended key milestones and activities to help you visualize and measure your progress. And we can use built-in assessments, remove risk, and ask the right questions at the right time to make the right implementation decisions for your organization. Additionally, we can use Starter Configuration, which allows you to experience the latest capabilities and leading practices to enrich design decisions for your organization with Starter Configuration. 
This can activate Starter Configuration early in your journey to understand what delivered capability can do for you. It allows you to evaluate modern best practices to determine how your design process can work in the future. And it empowers and educates your organization by interacting with real capability, using real processes to make the right decisions for your cloud implementation. You're able to access new features in Fusion updates. You can do this to learn what you need to know about new features in a one-stop shop and connect your company in a compelling capacity. You can find, familiarize, and prioritize new and existing features in one place. And you can experience new features in hands-on preview environments available with you with each quarterly release. You can explore new theme-based approaches using adoption centers for AI and Redwood to understand where to start and how to get there. And you can understand innovation opportunities based on business processes, data insights, and industry benchmarks. 07:01 Nikita: Now that we’ve covered the basics, can you walk us through some of the key features of the platform? Let’s start with the Home page. Mitchell: This is the starting point of the customer journey and the central location for everything Navigator has to offer, including best practice content. You'll find content focused on implementation phase, the innovation phase, and administrative elements like the team structure, program and projects, and other relevant tips and tricks. Cloud Quality Standards provides learning content and checklists for successful business transformation. This helps support the effective adoption and adherence to Cloud Quality Standards and enables individuals to leverage AI and predictive insights. The feature Sunburst allows capability for features to be reviewed in an interactive graphic, illustrating new features by pillar, other attributes, which enable customers to review features curated to identify and adopt new features that meet their needs. It helps you understand recommended features across your application base based off of a production profile, covering mandatory adoption requirements, efficiency gains, innovation capabilities like AI and Redwood to drive business change. Next is the Adoption Center, which addresses the need of our existing and implementing customers. It covers the concept of how Redwood is an imperative for our application customers, what it means, and how, and when we could translate some of the requirements to a business user or an IT user. Roadmap is an opportunity for the team to evaluate which features are most interesting at any given moment, the items that they would like to adopt next, and save features or items that they might require later. 08:36 Lois: That’s great. Mitchell, I know we have two new features rolling out in 2025. Can you tell us a little bit about them? Mitchell: Preview Environment. This allows users to explore new features and orderly release through a shared environment by logging into Navigator, eliminating potential regression needs, data adjustments and loads, and other common pre-work. You can explore the feature directly with the support of Oracle Guided Learning. The second additional feature for 2025 is AI Assist. 
We've taken everything within Navigator and trained an LLM to provide customers the ability to inquire about best practices, solutions, and features within the applications and ultimately make them smarter as they prepare for areas like design workshops, quarterly release readiness, and engaging across our overall Oracle team. Customers can use, share, and apply Oracle content knowledge to their day-to-day responsibilities. 09:32 Have you mastered the basics of AI? Are you ready to take your skills to the next level? Unlock the potential of advanced AI with our OCI Generative AI Professional course and certification that covers topics like Large Language Models, the OCI Generative AI Service, and building Q&A chatbots for real-world applications. Head over to mylearn.oracle.com and find out more. 10:01 Nikita: Welcome back! Mitchell, how can I get started with the Navigator? Mitchell: To request customer access to Success Navigator's environment, you need to submit a request form via the Success Navigator page on oracle.com. You need to be able to provide your customer support identifier with the help icon around the CSI number. If you don't have your CSI number, you can still submit your request and a member of the Success Navigator team will reach out and coordinate with you to understand the information that's required. Once access is granted, you'll receive a welcome email to Oracle Cloud Success Navigator. 10:35 Lois: Alright, and after that’s done? Mitchell: Before implementing Oracle Cloud Applications in your organization, you need to think of a structure that helps you organize and manage that implementation. To implement a solution footprint for a relatively small organization operating in a single country, and with an implementation time of only a few months, defining a single project might be a good enough organization to manage the implementation. For example, if you're implementing Core Human Capital Management applications, you could define a single project for that. But if you're implementing a broad solution footprint for a large business across multiple organizational entities in multiple countries, and with an implementation time of several years, a single project isn't enough. In such cases, organizations typically define a structure of a program with multiple underlying projects that reflect the scope, approach, timelines of business transformation. For example, you're implementing both HCM and ERP applications in a two-phased approach. You might decide to define a program with two underlying projects, one for HCM and one for ERP. For large, international business transformations, you can define multiple programs, each with multiple underlying projects. For example, you might first need to define a global design for a common way of working with HCM and then start rolling out that global design to individual countries. In such a scenario, you might define a program called HCM Transformation with a first project for global design, followed by multiple projects associated with individual countries. You could do the same for ERP. 12:16 Nikita: Mitchell, we’ve mentioned “cloud journey” a few times in this conversation, but can you tell us a little more about it? How can teams use it to guide their work? Mitchell: The main purpose of Oracle's cloud journey is to provide leading practices and actionable guidance to customers and implementation partners for successful implementations, operations within the cloud, and innovation through Oracle Cloud solutions. 
These are defined in terms of stages, activities, topics, and milestones. They focus on how you do the right things at the right time, the first time. For customers and implementation partners in the cloud journey, this allows us to facilitate collaboration and transparency, accelerate the journey, and visualize and measure our progress. 12:59 Lois: We’ll stop here for today. Join us next week as we continue our discussion on Oracle Cloud Success Navigator. Nikita: And if you want to look at some demos of everything we touched upon today, head over to mylearn.oracle.com and take a look at the Oracle Cloud Success Navigator Essentials course. Until next time, this is Nikita Abraham… Lois: And Lois Houston, signing off! 13:22 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Oracle GoldenGate 23ai: Parameters, Data Selection, Filtering, & Transformation
07/01/2025
In the final episode of this series on Oracle GoldenGate 23ai, Lois Houston and Nikita Abraham welcome back Nick Wagner, Senior Director of Product Management for GoldenGate, to discuss how parameters shape data replication. This episode covers parameter files, data selection, filtering, and transformation, providing essential insights for managing GoldenGate deployments. Oracle GoldenGate 23ai: Fundamentals: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------------- Podcast Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! This is the last episode in our Oracle GoldenGate 23ai series. Previously, we looked at how you can manage Extract Trails and Files. If you missed that episode, do go back and give it a listen. 00:50 Lois: Today, Nick Wagner, Senior Director of Product Management for GoldenGate, is back on the podcast to tell us about parameters, data selection, filtering, and transformation. These are key components of GoldenGate because they allow us to control what data is replicated, how it's transformed, and where it's sent. Hi Nick! Thanks for joining us again. So, what are the different types of parameter files? Nick: We have a GLOBALS parameter file and your runtime parameter files. The global one is going to affect all processes within a deployment. It's going to be things like where your checkpoint table is located and what it's named, and things like the heartbeat table. You want to have a single one of these across your entire deployment, so it makes sense to keep it within a single file. We also have runtime parameter files. These are going to be associated with a specific extract or replicat process. These files are located in your OGG_ETC_HOME/conf/ogg. The GLOBALS file is just simply named GLOBALS, in all capitals, and the parameter files for the processes themselves are named after the process, with a .prm extension. So if my extract process is EXTDEMO, my parameter file name will be extdemo.prm. When you make changes to parameter files, they don't take effect until the process is restarted. So in the case of a GLOBALS parameter file, you need to restart the administration service. And in a runtime parameter file, you need to restart that specific process before any changes will take effect. We also have what we call a managed process setting profile. And this allows you to set up auto restart profiles for each process. In the GoldenGate classic architecture, this was contained within the GLOBALS parameter file and handled by the manager. In Microservices, it's a little bit different: it's handled by the Service Manager itself. But now we actually set up profiles. 02:41 Nikita: Ok, so what can you tell us about the extract parameter file specifically? Nick: There's a couple things within the extract parameter file is common use. First, we want to tell what the group name is. So in this case, it would be our extract name. 
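To picture the file layout Nick describes, here is a minimal sketch of where these parameter files live and what a GLOBALS file might contain; the deployment path, schema, and object names are illustrative assumptions, not details from the episode:

$OGG_ETC_HOME/conf/ogg/GLOBALS -- deployment-wide settings, for example:
    CHECKPOINTTABLE ggadmin.ogg_checkpoints
$OGG_ETC_HOME/conf/ogg/extdemo.prm -- runtime parameters for an extract named EXTDEMO
$OGG_ETC_HOME/conf/ogg/repdemo.prm -- runtime parameters for a replicat named REPDEMO

As Nick notes, a change to GLOBALS only takes effect after the Administration Service is restarted, and a change to a .prm file only takes effect after that specific extract or replicat is restarted.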
We need to put in information on where the extract process is going to be writing the data it captures to, and that would be our trail files; an extract process can write to one or more trail files. We also want to list out the tables and schemas that we're going to be capturing, as well as any kind of DDL changes. If we're doing an initial load, we want to set up the SQL predicate to determine which tables are being captured and put a WHERE clause on those to speed up performance. We can also do filtering within the extract process as well. So we write just the information that we need to the trail file. 03:27 Nikita: And what are the common parameters within an extract process? Nick: There are a couple of common parameters within your extract process. We have TABLE to list out the tables that GoldenGate is going to be capturing from. These can be wildcarded. So I can simply do TABLE *.* and GoldenGate will capture all the tables in that database. I can also do schema.* and it will capture all the tables within a schema. We have our EXTTRAIL parameter, which tells GoldenGate which trail to write to. If I want to filter out certain rows and columns, I can use the FILTER, COLS, and COLSEXCEPT parameters. GoldenGate can also capture sequence changes. So we would use the SEQUENCE parameter. And then we can also set some high-level database options for GoldenGate that affect all the tables, and that's configured using the TRANLOGOPTIONS parameter. 04:14 Lois: Nick, can you talk a bit about the different types of TRANLOGOPTIONS settings? How can they be used to control what the extract process does? Nick: So one of the first ones is EXCLUDETAG. So GoldenGate has the ability to exclude tagged transactions. Within the database itself, you can actually specify a transaction to be tagged using a DBMS set tag option. The GoldenGate replicat also sets its transactions with a tag so that the GoldenGate process knows which transactions were done by the replicat and it can exclude them automatically. You can do EXCLUDETAG with a plus. That simply means to exclude any transaction that's been tagged with any value. You can also exclude specific tags. Another good option for TRANLOGOPTIONS is enable procedural replication. This allows GoldenGate to actually capture and replicate database procedure calls, and this would be things like DBMS_AQ enqueue or dequeue operations. So if you're using Oracle Advanced Queuing and you need GoldenGate to replicate those changes, it can. Another valuable TRANLOGOPTIONS setting is enable auto capture. Within the Oracle Database, you can actually run an ALTER TABLE command that says ALTER TABLE, ENABLE LOGICAL REPLICATION. Or when you create a table, you can use the ENABLE LOGICAL REPLICATION option at the end of that CREATE TABLE statement. And this tells GoldenGate to automatically capture that table. One of the nice features about this is that I don't need to specify that table in my parameter file, and it'll automatically enable supplemental logging on that table for me using scheduling columns. So it makes it very easy to set up replication between Oracle databases. 06:01 Nikita: Can you tell us about replicat parameters, Nick? Nick: Within a replicat, we'll have the group name. Some other common parameters that we'll use include a mapping parameter that allows us to map the source to target table relationships.
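Before turning to the replicat side, here is a rough sketch of how the extract parameters Nick just listed might come together in a single runtime parameter file. The group name, credential alias, trail prefix, and HR schema are illustrative assumptions, not taken from the episode:

EXTRACT extdemo
USERIDALIAS ggadmin
-- skip transactions that were tagged in the redo, such as those applied by a replicat
TRANLOGOPTIONS EXCLUDETAG +
-- write captured changes to trail files with the two-character prefix "ea"
EXTTRAIL ea
-- capture sequence changes and every table in the HR schema
SEQUENCE hr.*;
TABLE hr.*;

A FILTER, COLS, or COLSEXCEPT clause could be added to the TABLE statement to trim rows and columns before they are ever written to the trail.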
We can do transformation within the replicat, as well as error handling and controlling group operations to improve performance. Some common replicat parameters include the REPLICAT parameter itself, which tells us what the name of that replicat is. We have our MAP statement, which allows us to map a source object to a target object. We have things like REPERROR that controls how to handle errors. INSERTALLRECORDS allows us to convert update and delete operations into inserts. We can do things like COMPARECOLS, which helps with active-active replication in determining which columns are used in the GoldenGate WHERE clause. We also have the ability to use macros and column mapping to do additional transformation and make the parameter file look elegant. 07:07 AI is being used in nearly every industry…healthcare, manufacturing, retail, customer service, transportation, agriculture, you name it! And it’s only going to get more prevalent and transformational in the future. It’s no wonder that AI skills are the most sought-after by employers. If you’re ready to dive into AI, check out the OCI AI Foundations training and certification that’s available for free! It’s the perfect starting point to build your AI knowledge. So, get going! Head on over to mylearn.oracle.com to find out more. 07:47 Nikita: Welcome back! Let’s move on to some of the most interesting topics within GoldenGate… data mapping, selection, and transformation. As I understand, users can do pretty cool things with GoldenGate. So Nick, let’s start with how GoldenGate can manipulate, change, and map data between two different databases. Nick: The MAP statement within a Replicat parameter file allows you to provide specifications on how you're going to map source and target objects. You can also use a MAP in an Extract, but it's pretty rare. That would be used if you needed to write the object name inside the trail files as a different name than the actual object name that you're capturing from. GoldenGate can also do different data selection, mapping, and manipulation, and this is all controlled within the Extract and Replicat parameter files. In the classic architecture of GoldenGate, you could do a rudimentary level of transformation and filtering within the extract pump. Now, the distribution service only allows you to do filtering. Any transformation that you had within the pump would need to be moved to the Extract or the Replicat process. The other thing that you can do within GoldenGate is select and filter data based on different levels and conditions. So within your parameter clause, you have your TABLE and MAP statements. That's the core of everything. You have your filtering. You have COLS and COLSEXCEPT, which allow you to determine which columns you're going to include or exclude from replication. The TABLE and MAP statements work at the table level. FILTER works at the row level. And COLS and COLSEXCEPT work at the column level. We also have the ability to filter by operation type too. So GoldenGate has some very easy parameters called GETINSERTS, GETUPDATES, and GETDELETES, and conversely IGNOREINSERTS, IGNOREUPDATES, and IGNOREDELETES. And that will affect the operation type. 09:40 Lois: Nick, are there any features that GoldenGate provides to make data replication easier? Nick: The first thing is that GoldenGate is going to automatically match your source and target column names with a parameter called USEDEFAULTS. 
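As a hedged sketch of how these replicat-side parameters, and the table-, row-, and column-level filtering Nick describes, might fit together in a parameter file; the group name, credential alias, table, columns, and filter condition are all illustrative assumptions:

REPLICAT repdemo
USERIDALIAS ggadmin
-- abend on any unhandled apply error
REPERROR (DEFAULT, ABEND)
-- operation-type filtering: skip delete operations for the mappings below
IGNOREDELETES
-- table-level mapping with a row-level FILTER and a column-level COLMAP;
-- USEDEFAULTS matches same-named columns automatically, and full_name
-- is derived from two source columns
MAP hr.employees, TARGET hr.employees, &
  FILTER (department_id = 50), &
  COLMAP (USEDEFAULTS, full_name = @STRCAT(first_name, ' ', last_name));

The USEDEFAULTS keyword inside the COLMAP clause is the same default column matching Nick has just mentioned.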
You can specify it inside of your COLMAP clause, but again, it's a default, so you don't need to worry about it. We also handle all data type and character set conversion. Because we store the metadata in the trail, we know what that source data type is like. When we go to apply the record to the target table, the Replicat process is going to look up the definition of that record and keep a repository of that in memory. So that when it knows that, hey, this value coming in from the trail file is going to be of a date data type, and then this value in the target database is going to be a character data type, it knows how to convert that date to a character, and it'll do it for you. Most of the conversion is going to be done automatically for data types. Where we don't do automatic data type conversion is if you're using abstract data types or user-defined data types, collections, arrays, and then some types of LOB operations. For example, if you're going from a BLOB to a JSON, that's not really going to work very well. Character set conversion is also done automatically. It's not necessarily done directly by GoldenGate, but it's done by the database engine. So there is a character set value inside that source database. And when GoldenGate goes to apply those changes into the target system, it's ensuring that that character set is visible and named so that that database can do the necessary translation. You can also do advanced filtering and transformation. There are tokens from the source environment, database, or record that you can attach to a record itself in the trail file. And then there's also a bunch of metadata that GoldenGate can use to attach to the record itself. And then of course, you can use data transformation within your COLMAP statement. 11:28 Nikita: Before we wrap up, what types of data transformations can we perform, Nick? Nick: So there's quite a few different data transformations. We can do constructive or destructive transformation, aesthetic, and structural. 11:39 Lois: That’s it for the Oracle GoldenGate 23ai: Fundamentals series. I think we covered a lot of ground this season. Thank you, Nick, for taking us through it all. Nikita: Yeah, thank you so much, Nick. And if you want to learn more, head over to mylearn.oracle.com and search for the Oracle GoldenGate 23ai: Fundamentals course. Until next time, this is Nikita Abraham… Lois: And Lois Houston, signing off! 12:04 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Oracle GoldenGate 23ai: Managing Extract Trails and Files
06/24/2025
In this episode of the Oracle University Podcast, Lois Houston and Nikita Abraham explore the intricacies of trail files in Oracle GoldenGate 23ai with Nick Wagner, Senior Director of Product Management. They delve into how trail files store committed operations, preserving the order of transactions and capturing essential metadata. Nick explains that trail files are self-describing, containing database and table definition records, making them easier to work with. The episode also covers trail file management, including the purge trail task and the ability to download trail files directly from the web UI, providing flexibility in various deployment scenarios. Oracle GoldenGate 23ai: Fundamentals: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------- Podcast Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome back to another episode of the Oracle University Podcast! I’m Nikita Abraham, Team Lead of Editorial Services with Oracle University, and I’m joined by Lois Houston, Director of Innovation Programs. Lois: Hi there! In our last episode, we discussed the Replicat process. That was a good introduction, and you should give it a listen if you’re interested in the fundamentals of GoldenGate 23ai. 00:49 Nikita: Nick Wagner, Senior Director of Product Management for Oracle GoldenGate, is back with us today to talk about how to manage Extract Trails and Files. Hi Nick, it’s a pleasure to have you with us. So, we’ve spoken about trail files in our earlier episodes. But can you tell us about the kind of information that’s actually stored in these files? Nick: The trail files contain committed operations only. In an Oracle environment, the Extract process is actually able to understand and read both committed and uncommitted transactions. It holds the uncommitted activity in the cache manager and its associated settings. As a transaction is committed, it’s then flushing that information to the trail file. All this information in the transaction is preserved, so we have not only the transaction itself, but the order of the operations within that transaction. All the changed columns, including the primary key and any scheduling columns, are also captured, and this is controlled by the LOGALLSUPCOLS parameter and other parameters within the Extract process. The data captured depends on settings in the Extract parameter file, and you can include additional information, including tokens. The trail files also contain metadata information, where the trail files are what we call self-describing, which means that as we start reading in new objects, we start writing the definition of those objects into the trail files themselves. 02:11 Lois: Nick, what does the structure of a trail file look like? Nick: The trail files have header information, which simply keeps information about what version of trail file this is, where GoldenGate is handling it, and information about that trail file itself. You’ll also have three different types of records. You’ll have a data record, which contains the actual before and after images, the table update statement, and the type of operations.
You have a database definition record, which includes information about the database that GoldenGate is capturing from, and then you’ll also have a table definition record. As GoldenGate starts up and creates a trail file for the first time, it’s always going to write the trail file header and associated database definition record, and then it’s going to start reading data out of the source database. As it encounters a new table for the first time in that trail file, it’s going to write the metadata for that object as well. This makes it very easy. This means that within a single trail file, any data records I have in there, that trail file also contains the associated table definition record for that table. 03:20 Nikita: Let’s talk about compatibility between different versions of GoldenGate. How do the trail files fit into that? Nick: The GoldenGate trail files themselves have information built into them to help understand what they’re compatible with as far as GoldenGate releases. If I’m replicating from a new version of GoldenGate to an older version of GoldenGate, I can set the FORMAT RELEASE value to tell the Extract process to write these trail files in an older version. In this case, I can simply say FORMAT RELEASE 19 and it’ll write the trail files in the 19c version. If you’re going from an older version to a newer version of GoldenGate, it’s automatically able to process the old version trail file structure without having to change anything. 04:02 Nikita: Now, GoldenGate is constantly generating these trail files as it runs. So, how do we manage them over time? What’s the cleanup process like? Nick: Within the GoldenGate microservices architecture, the web UI has a way to manage your trail files and clean them up. So there’s a purge trail task that allows you to go in and set up rules on how long to keep the trail files around for before they’re purged. We have customers that want to reposition the Extract, and so you’ll want to make sure that you keep trail files around long enough so that you can handle any reposition that you intend to do. Trail files will always be kept around even past their purge rules if they’re still needed for GoldenGate recovery. Also new to GoldenGate 23ai is the ability to download trail files directly from the web UI. This is extremely helpful if you’re using OCI GoldenGate or you don’t have OS access on the machine where GoldenGate is running. 04:56 Lois: What if we want to look inside these trail files and actually see the individual records? How can we examine them directly? Nick: Well, that can be seen using a tool called Logdump. Logdump is a utility that’s installed in your OGG_HOME/bin directory. It has online help as well as full documentation. 05:14 Lois: And how do you use Logdump? Nick: So to use Logdump, the first thing you’ll do is launch the utility and then you’ll open a trail file. You would specify the full path of the trail file along with the trail name and the sequence number of that trail file. Once you’ve set it up, you’ll position into that location within that trail file. Normally people position at record 0 and then they’ll do a next, which allows them to get the next record. There are a couple of other commands in there, such as POS, which allows you to set the position, and SCANFORHEADER, which allows you to scan to the next record header if you’re positioned in the middle of a record. So, when you first run Logdump, it’s not going to have very much information available for you. So, you’ll want to turn on a couple of settings.
You'll want to enable File Header, GHDR, and Detail to be able to see more information about what's going on within that record within the trail. Logdump also has the ability to show you the actual ASCII values as opposed to the text value. This is very useful for dealing with multibyte data as well as unprintable characters. You can also specify the length of the record to show for each Logdump record. And this is in the reclen parameter, 280 is a rough number and it will usually show about enough that'll fit on a single page. 06:40 Join the Oracle University Learning Community and tap into a vibrant network of over 1 million members, including Oracle experts and fellow learners. This dynamic community is the perfect place to grow your skills, connect with likeminded learners, and celebrate your successes. As a MyLearn subscriber, you have access to engage with your fellow learners and participate in activities in the community. Visit community.oracle.com/ou to check things out today! 07:12 Nikita: Welcome back! Nick, earlier you mentioned data records in trail files. What kind of information do these records contain? Nick: When we start looking at data records within the trail file, we're going to see a little bit different format. It's going to give us information about what type of operation this was, the before, after indicator, is this an after image or a before image? It's going to give us the time information. It's going to tell us what table this record was on and the values within that record. We can also count the number of records in a trail using the count option that tells us how many records in the trail, the average size, and then the operation type breakdown. We can also get some additional details on that count, including having it broken out by table and operation within those tables. This is really useful if you're trying to track down a missing record or an out of sync condition and you want to make sure that GoldenGate is appropriately capturing all the changes. We can also use an option within Logdump called scan for metadata. The shorthand for this command is sfmd, it allows you to scan for something like a database definition record. You may have multiple database definition records versions within the same trail file. It tells us what type of database this was, the character set, which is important because this information is used by the replica when it goes to apply changes into the target database. We can also scan for metadata to get table definition records. The data types are numeric values that are associated with an internal GoldenGate data type. 08:43 Lois: Thank you, Nick, for your insights. There’s a lot more you can find in the Oracle GoldenGate 23ai: Fundamentals course on MyLearn. So, make sure you check that out by visiting mylearn.oracle.com. Nikita: Join us next week for a discussion on parameters, data selection, filtering, and transformation. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 09:07 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
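As a rough, hypothetical illustration of the Logdump flow Nick walks through (the trail path and name are invented, and the prompt is abbreviated), a session could look something like this:

  $ $OGG_HOME/bin/logdump
  Logdump> OPEN ./lib/data/ab000000000
  Logdump> FILEHEADER ON
  Logdump> GHDR ON
  Logdump> DETAIL ON
  Logdump> RECLEN 280
  Logdump> POS 0
  Logdump> NEXT
  Logdump> COUNT
  Logdump> SCANFORMETADATA TDR
  Logdump> EXIT

OPEN, POS 0, and NEXT step through the records; FILEHEADER, GHDR, DETAIL, and RECLEN control how much of each record is displayed; COUNT gives the record totals and the operation-type breakdown; and SCANFORMETADATA (SFMD) jumps to the next database or table definition record.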
Oracle GoldenGate 23ai: The Replicat Process
06/17/2025
In this episode, Lois Houston and Nikita Abraham, along with Nick Wagner, Senior Director of Product Management, dive into the Replicat process in Oracle GoldenGate 23ai. They discuss how Replicat applies changes to the target database, highlighting the different types: Classic, Coordinated, and Parallel Replicat. Oracle GoldenGate 23ai: Fundamentals: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ---------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to another episode of the Oracle University Podcast. I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! If you’ve been listening to us these last few weeks, you’ll know we’ve been discussing the fundamentals of GoldenGate 23ai. Today is going to be all about the Replicat process. Again, this is something we’ve discussed briefly in earlier episodes, but just to recap, the Replicat process applies changes from the source database to the target database. It's responsible for reading trail files and applying the changes to the target system. 01:04 Lois: That’s right, Niki. And we’ll be chatting with Nick Wagner, Senior Director of Product Management for Oracle GoldenGate. Hi Nick! Thanks for joining us again today. Let’s get straight into it. Can you give us an overview of the Replicat process? Nick: One thing that's very important is the Replicat is extremely chatty with that target database. So it's going to be going in and trying to make lots of little transactions on that system. The Replicat process only issues single row DML. So if you can imagine a source database that's generating hundreds of thousands of changes per second, we're going to have to have a Replicat process that can do 100,000 changes per second on that target site. That means that it's going to have to send a lot of little one record commands. And so we've got a lot of ways to optimize that. But in all situations you're really going to want very, very low ping time between that Replicat process and that target database. This often means that if you're going to be running GoldenGate in a cloud, you're going to want the Cloud GoldenGate environment to be running in that target data center, wherever that target database is. 02:06 Lois: What are the key characteristics of the process, Nick? Nick: Replicat process is going to read the changes from the trail file and then apply them to the target system, just like any database user would. It's not doing anything special where it's going under the covers and trying to apply directly to the database blocks. It's just applying regular standard insert, update, delete, and DDL statements to that target database. A single trail file does support high volume of data replication activity depending on the type of Replicat. Replicats do preserve the boundary of their transactions. So in the situations, by default, a transaction that's on the source, let's say five inserts followed by a commit will remain five inserts followed by a commit on the target site. 
There are some operations and changes that do affect this, but they're not turned on by default. There are things like group transactions that allows you to group multiple transactions into a single commit. This one could actually improve performance in some cases. We also have batch SQL that can change the boundaries of a transaction as well. And then in a Parallel Replicat, you actually have the ability to split a large transaction into multiple chunks and apply those chunks in Parallel. So again, by default, it's going to preserve the boundaries, but there are ways to change that. And then the Replicats use a checkpoint table to help with recovery and to know where they're applying data and what they've done. The other thing in here is, like an Extract process can write to multiple trails and write subsets of data to each one, a Replicat can only process a single set of trail files at once. So it's going to be attached to a specific trail file like trail file AB, and will only be able to read changes from trail file AB. If I have multiple trails that need to be applied into a target system, then I have to set up multiple Replicats to handle that. 03:54 Nikita: So, what are the different Replicat types, Nick? Nick: We have three types in the product today. We have Classic Replicat, which should really only be used for testing purposes or in environments that don't support any of the other specialized Replicats. We have Coordinated Replicat, which is a high speed apply mechanism to apply data into a target system. It does have some parallelism in it, but it's user defined parallelism. And then we have our flagship and that's Parallel Replicat. And this is the most performant lowest latency Replicat that we have. 04:25 Lois: Ok. Let’s dive a little deeper into each of them, starting with the Classic Replicat. How does it work? Nick: It's pretty straightforward. You're going to have a process that reads the trail files, and then in a single threaded fashion it's going to take the trail file logical change record, convert it to an insert, update, or delete, and then apply it into that target database. Each transaction that it does is preceded by a change to the checkpoint table. So when the transaction that the Replicat is currently doing is committed, that checkpoint table update also gets committed. That way when the Replicat restarts, it knows exactly what transaction it left off and how it last applied the record. And all the Replicats work the same way with regards to checkpoint tables. They each have their own little method of ensuring that the transaction they're applying is also reflected within the checkpoint table so that when it restarts, it knows exactly where it happened. That way, if a Replicat dies in the middle of a transaction, it can be restarted without any duplicate data or without missing data. 05:29 Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure? You’ll find training on everything from multicloud, database, networking, and security to artificial intelligence and machine learning, all free for our subscribers. So, what are you waiting for? Pick a topic, head over to mylearn.oracle.com, and get started. 05:53 Nikita: Welcome back! Moving on, what about Coordinated Replicat? Nick: The Coordinated Replicat is going to read from a set of trail files. It's going to have multiple threads that do this. So you have your base thread, your coordinated thread that's going to be thread 1. 
It's going to process the data and apply it into that target database. You then have thread 2, 4, 5, 6, and so on. When you set up your Replicat parameter file for a Coordinated Replicat, the map commands that maps from one table on the source to a table on the target has an additional option. So you'll have an option called a range or thread range. With the range and thread range option, you can actually tell which table to go into which thread. 06:38 Lois: Can you give us an example of this? Nick: So I could say map Scott.M into thread 1 and I want Scott.Dept into thread 2. Well, this is fantastic until you realize that Scott.M and Scott.Dept have a foreign key between them or a child dependencies, parent-child relationships. What that means is that now I'm going to have to disable that foreign key on the target site, because there's no way for GoldenGate to coordinate the changes in one thread to another thread. And so you really have to be careful on how you pair your tables together. If you don't have any referential integrity on that target database, then you can use parallel coordinated Replicat to really high degrees of parallelism, and you get some very good performance out of it. Let's say that you have a table that's really got too much data for even a single thread to process, that's where the thread range comes in. And thread range command will use something like the table's primary key to split transactions on that table across multiple threads. So I can say, hey, take my table Scott.M and I want to spread transactions across threads 10, 11, 12, 13, and 14 and then spread them evenly based on the primary key. And Coordinated Replicat will do that. So you can get some very high performance numbers out of it and you can really fine tune the tables, especially if you know the amount of data coming into each one. While this does work great, we observed that a lot of customers really don't know their applications to that level of detail, and so we needed a different method to push data into that target database, where we could define the parallelism based on the database expectations. So instead of the customer having to try and figure out what are the parent-child relationships, why can't GoldenGate do it for me? And that led to Parallel Replicat. 08:26 Nikita: And what are the benefits and features of the Parallel Replicat process? Nick: So Parallel Replicat has been around for quite a few years now. It supports most targets, it was Oracle initially, but now it's been expanded out to a lot of the non-Oracle targets and even some of the nonrelational database targets. It has absolutely the best performance of any Replicat process out there. You can use it to split large transactions as well. So if all of a sudden you have a batch job that does a million inserts followed by a single commit, I can split that across 10 threads, each thread doing 100,000 inserts. And it's aware of your transaction dependencies, that's the cool thing. So in Coordinated Replicat, you had to worry about how to split your tables up, in Parallel Replicat, we do it for you. 09:11 Lois: And how does Parallel Replicat work? Nick: So there's three main processes to the Parallel Replicat. You have your first is the mapper process. This is going to be responsible for taking the data out of the trail files and putting them into kind of our collator and scheduler box. As transactions go from the trail file, they get put into this box in memory where they're processed. 
There's a collator process that will look at these processes and go, OK, as they're coming in, let me read some of the data in them to determine how they can be applied in Parallel or not. And so the collator process understands the foreign key dependencies on that target database. And it's able to say, hey, I know that my two tables are these two tables, have a parent-child relationship, I need to make sure that changes on those tables go in the correct order. And so if all of a sudden you see an insert using the parent record and then another insert into the child record and they're mapped together, GoldenGate will ensure that those two transactions go serially and not parallel where they could get applied out of order. There's then a scheduler process that's going to look at this and say, OK, now that I'm taking transactions from the collator process, who's already identified whether or not transactions can be applied in parallel or serial, and I'm going to feed them off to applier processes that are ready and waiting for me to apply those changes into the database. And then the applier process is waiting for the scheduler process to send its transactions and say, OK, what's my next one? Where's the next transaction I should be working on and applying? And then the applier process is the one actually applying the changes into that target database, again, just using standard DML operations. So there's a lot of benefits to this one. You don't need to worry about your foreign key dependencies, you can leave all your foreign keys enabled. The collator process will actually use information within the trail file to determine which transactions can be applied in parallel, and which one needs to be applied serially. 11:13 Lois: Thank you, Nick, for this insightful conversation. There’s loads more to discover about the Replicat process, and you can do that by heading over to mylearn.oracle.com and searching for the Oracle GoldenGate 23ai: Fundamentals course. Nikita: In our next episode, Nick will take us through managing Extract Trails and Files. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 11:37 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
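For a sense of how these pieces come together in practice, here is a hypothetical sketch. All process, schema, trail, and checkpoint table names are invented, and command details can vary by release. Adding a Parallel Replicat from the admin client might look like:

  DBLOGIN USERIDALIAS ggadmin
  ADD CHECKPOINTTABLE ggadmin.ogg_checkpoint
  ADD REPLICAT rpar, PARALLEL, EXTTRAIL ab, CHECKPOINTTABLE ggadmin.ogg_checkpoint
  START REPLICAT rpar

And in a Coordinated Replicat parameter file, the thread pairing Nick describes might be expressed as:

  MAP scott.dept, TARGET scott.dept, THREAD (1);
  MAP scott.emp, TARGET scott.emp, THREADRANGE (10-14, empno);

The first form pins a table to a single apply thread; the second spreads work on a busy table across a range of threads based on a key column.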
Oracle GoldenGate: Distribution Path, Target Initiated Path, Receiver Server, and Initial Load
06/10/2025
In this episode, Lois Houston and Nikita Abraham dive into key components of Oracle GoldenGate 23ai with expert insights from Nick Wagner, Senior Director of Product Management. They break down the Distribution Service, explaining how it moves trail files between environments, replaces the classic extract pump, and ensures secure data transfer. Nick also introduces Target Initiated Paths, a method for connecting less secure environments to more secure ones, and discusses how the Receiver Service simplifies monitoring and management. The episode wraps up with a look into Initial Load, covering different methods for syncing source and target databases without downtime. Oracle GoldenGate 23ai: Fundamentals: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ----------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hey there! Last week, we spoke about the Extract process and today we’re going to spend time discussing the Distribution Path, Target Initiated Path, Receiver Server, and Initial Load. These are all critical components of the GoldenGate architecture, and understanding how they work together is essential for successful data replication. 00:58 Nikita: To help us navigate these topics, we’ve got Nick Wagner joining us again. Nick is a Senior Director of Product Management for Oracle GoldenGate. Hi Nick! Thanks for being with us today. To kick things off, can you tell us what the distribution service is and how it works? Nick: A distribution path is used when we need to send trail files between two different GoldenGate environments. The distribution service replaces the extract pump that was used in GoldenGate classic architecture. And so the distribution service will send the trail files as they're being created to that receiver service and it will write the trail files over on the target system. The distribution service works in a kind of a streaming fashion, so it's constantly pulling the trail files that the extract is creating to see if there's any new data. As soon as it sees new data, it'll packet it up and send it across the network to the receiver service. It can use a couple of different methods to do this. The most secure and recommended method is using a WebSocket secure connection or WSS. If you're going between a microservices and a classic architecture, you can actually tell the distribution service to send it using the classic architecture method. In that case, it's the OGG option when you're configuring the distribution service. There's also some unsecured methods that would send the trail files in plain text. The receiver service is then responsible for taking that data and rewriting it into the trail file on the target site. 02:23 Lois: Nick, what are some of the key features and responsibilities of the distribution service? Nick: It's responsible for command deployment. So any time that you're going to actually make a command to the distribution service, it gets handled there directly. 
It can handle multiple commands concurrently. It's going to dispatch trail files to one or more receiver servers so you can actually have a single distribution path, send trail files to multiple targets. It can provide some lightweight filtering so you can decide which tables get sent to the target system. And it also is integrated in with our data streams, our pub and subscribe model that we've added in GoldenGate 23ai. 03:01 Lois: Interesting. And are there any protocols to remember when using the distribution service? Nick: We always recommend a secure WebSocket. You also have proxy support for use within cloud environments. And then if you're going to a classic architecture GoldenGate, you would use the Oracle GoldenGate protocol. So in order to communicate with the distribution service and send it commands, you can communicate directly from any web browser, client software-- installation is not required-- or you can also do it through the admin client if necessary, but you can do it directly through browsers. 03:33 Nikita: Ok, let's move on to the target initiated path. Nick, what is it and what does it do essentially? Nick: This is used when you're communicating from a less secure environment to a more secure environment. Often, this requires going through some sort of DMZ. In these situations, a connection cannot be established from the less secure environment into the more secure environment. It actually needs to be established from the more secure environment out. And so if we need to replicate data into a more secure environment, we need to actually have the target GoldenGate environment initiate that connection so that it can be established. And that's what a target-initiated path does. 04:12 Lois: And how do you set it up? Nick: It's pretty straightforward to set up. You actually don't even need to worry about it on the source side. You actually set it up and configure it from the target. The receiver service is responsible for receiving the trail file data and writing it to the local trail file. In this situation, we have a target-initiated path created. And so that receiver service is going to write the trail files locally and the replicat is going to apply that data into that target system. 04:37 Nikita: I also want to ask you about the Receiver service. What is it really? Nick: Receiver service is pretty straightforward. It's a centrally controlled service. It allows you to view the status of your distribution path and replaces target side collectors that were available in the classic architecture of GoldenGate. You can also get statistics about the receiver service directly from the web UI. You can get detailed information about these paths by going into the receiver service and identifying information like network details, transfer protocols, how many bytes it's received, how many bytes it's sent out. If you need to issue commands from the admin client to the receiver service, you can use the info command to get details about it. Info all will tell you everything that's running. And you can see that your receiver service is up and running. 05:28 Are you working towards an Oracle Certification this year? Join us at one of our certification prep live events in the Oracle University Learning Community. Get insider tips from seasoned experts and learn from others who have already taken their certifications. Go to community.oracle.com/ou to jump-start your journey towards certification today! 05:53 Nikita: Welcome back. 
In the last section of today’s episode, we’ll cover what Initial Load is. Nick, can you break down the basics for us? Nick: So, the initial load is really used when you need to synchronize the source and target systems. Because GoldenGate is designed for 24/7 environments, we need to be able to do that initial load without taking downtime on the source. And so all the methods that we talk about do not require any downtime for that source database. 06:18 Lois: How do you do the initial load? Nick: So there’s a couple of different ways to do the initial load. And it really depends on what your topology is. If I’m doing like-to-like replication in a homogeneous environment, we’ll say Oracle-to-Oracle, the best options are to use something that’s integrated with GoldenGate, some sort of precise instantiation method that does not require HandleCollisions. That’s something like taking a database backup and restoring it to a specific SCN or CSN value, or using a Database Snapshot. Or in some cases, we can use Oracle Data Pump integration with GoldenGate. There are some less precise instantiation options, which do require HandleCollisions. We also have dissimilar initial load methods. And this is typically when you’re going between heterogeneous environments, when my source and target databases don’t match and there isn’t any kind of fast unload or fast load utility that I could use between those two databases. In almost all cases, this does require HandleCollisions to be used. 07:16 Nikita: Got it. So, with so many options available, are there any advantages to using GoldenGate’s own initial load method? Nick: While some databases do have very good fast load and unload utilities, there are some advantages to using GoldenGate’s own initial load method. One, it supports heterogeneous replication environments. So if I’m going from Postgres to Oracle, it’ll do all the data type transformation, character set transformation for me. It doesn’t require any downtime, if certain conditions are met. It actually performs transformation as the data is loaded, too, as well as filtering. And so any transformation that you would be doing in your normal transaction log replication or CDC replication can also go through the same transformation for the initial load process. GoldenGate’s initial load process does read directly from the source tables. And it fetches the data in arrays. It also uses parallel processing to speed up the replication. It does also handle activity on the source tables during the initial load process, so you do not need to worry about quiescing that source database. And a lot of the initial load methods directly built into GoldenGate support distributed applications and analytics targets, including things like Databricks, Snowflake, BigQuery. 08:28 Lois: And what about its limitations? Or to put it differently, when should users consider using different methods? Nick: So the first thing to consider is system proximity. We want to make sure that the two systems we’re working with are close together. Or if not, how are we going to send the data across? One thing to keep in mind, when we do the initial load, the source database is not quiesced. So if it takes an hour to do the initial load or 10 hours, it really doesn’t matter to GoldenGate. So that’s something to keep in mind. Even though we talk about performance of this, the performance really isn’t as critical as one might suspect.
So the important thing about data system proximity is the proximity to the extract and replicat processes that are going to be pulling the data out and pushing it across. And then how much data is generated? Are we talking about a database that's just a couple of gigabytes? Or are we talking about a database that's hundreds of terabytes? Do we want to consider outage time? Would it be faster to take a little bit of outage and use some other method to move the data across? What kind of outage or downtime windows do we have for these environments? And then another consideration is disk space. As we're pulling the data out of that source database, we need to have somewhere to store it. And if we don't have enough disk space, we need to run to temporary space or to use multiple external drives to be able to support it. So these are all different considerations. 09:50 Nikita: I think we can wind up our episode with that. Thanks, Nick, for giving us your insights. Lois: If you'd like to learn more about the topics we covered today, head over to mylearn.oracle.com and check out the Oracle GoldenGate 23ai: Fundamentals course. Nikita: In our next episode, Nick will take us through the Replicat process. Until then, this is Nikita Abraham… Lois: And, Lois Houston signing off! 10:14 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
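To make the distribution path discussion a little more concrete, here is a hypothetical sketch of adding and checking a path from the admin client on the source side. Host names, ports, path and trail names are invented, and the exact URI format can differ between releases, so treat this as a shape rather than a recipe:

  ADD DISTPATH dp_ab SOURCE trail://source-hub:9002/services/v2/sources?trail=aa TARGET wss://target-hub:9103/services/v2/targets?trail=ab
  START DISTPATH dp_ab
  INFO DISTPATH dp_ab

The wss:// target is the secure WebSocket transport Nick recommends; a target-initiated path would instead be created from the receiver side in the more secure environment.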
Oracle GoldenGate 23ai: The Extract Process
06/03/2025
The Extract process is the heart of Oracle GoldenGate 23ai, capturing data changes with precision. In this episode, Lois Houston and Nikita Abraham sit down with Nick Wagner, Senior Director of Product Management, to break down Extract’s role, architecture, and best practices. Learn how Extract works across different setups, from running on source databases to using a Hub model for greater flexibility. Additionally, understand how trail files, parameter files, and naming conventions impact performance. Oracle GoldenGate 23ai: Fundamentals: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me today is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Last week, we spoke about installing GoldenGate and today, we’re diving into the Extract process. We’ve discussed it briefly in an earlier episode, but to recap, the Extract process captures changes from the source database and writes them to a trail file. 00:54 Lois: Joining us again is Nick Wagner, Senior Director of Product Management for Oracle GoldenGate. Hi Nick! Before we get into the Extract process, can you walk us through the different architecture options available for GoldenGate. Let’s start with when GoldenGate is installed on the same server as the source database. What are the benefits of this architecture? Nick: There's a couple of advantages to this. It means that GoldenGate can use the same resources on that source database. It means that you don't need another host to support the GoldenGate environment. It also means that GoldenGate can use a bequeathed connection to connect from the Extract process into the source database to make it run faster. The restrictions on this are that the Replicat process is highly communicative with the target database. What that really means is that the Replicat process is constantly doing lots of little transactions. And so the network latency between the Replicat process and the target database should really be around 4 milliseconds or less for optimal performance. So that means that a lot of people can't really run GoldenGate on the source system, even though it's an option, because they need that Replicat latency performance. And so they'll often install GoldenGate on the same server as the target database. In this case, they can use the Replicat to connect using a bequeath connection to that target system, you know that it's going to be highly performant and that latency is not going to be an issue. This works really well because the Extract process has actually been optimized to do remote capture. And so it's actually able to handle 80 milliseconds round trip ping time or less between the actual Extract process and the source database itself. And so a lot of customers will opt for this method, where they're actually running GoldenGate away from the target, or excuse me, away from the source database. 02:44 Nikita: Interesting. 
And is there an option where you don’t need to install GoldenGate on the actual source or target database? Nick: We also have another architecture pattern called a Hub model. And this is what you would see in something like OCI GoldenGate or OCI Marketplace, or even in third-party cloud environments where you don’t have the ability to install GoldenGate on the actual source or target database. In these cases, GoldenGate is just going to run on a virtual machine or an environment that you have set up specifically for GoldenGate. Now, this GoldenGate Hub doesn’t need to have any database software installed. It doesn’t need to have any database information on it. It’s simply working as a client. So the GoldenGate Extract process is a client connecting into the source database and the Replicat is a client connecting into the target database. And this really gives you a lot of flexibility. However, in some cases, there may be too much of a distance, so you won’t be able to get both less than 80 milliseconds on the source side and less than 4 milliseconds round trip on the target side. And so in that case, you can have multiple GoldenGate Hubs. And so you would have a Hub on the Extract side and another Hub on the Replicat side. And all these are fully accessible. In this case, you’ll actually use the distribution service to send the trail files from one system to another. 04:00 Lois: So, coming to the Extract process, what does it actually do? Nick: The Extract process is configured to capture changes from that source database. In different terminology, it can subscribe to a topic if we’re pulling data out of a Kafka topic or some messaging system like a JMS queue. In relational database language, we’re pulling data from the database transaction logs. There’s a lot of different sources and targets. You can always use the GoldenGate Certification Matrix to determine which sources and targets are supported, and where we can extract data from. The capture process also connects to the source table for initial loads. When we do the initial load, instead of reading from the transaction logs, GoldenGate is actually going to do a select star on that table to get the information it needs for that load. 04:49 Lois: And what about the Extract process group? Nick: The process group is kind of a grouping of the process itself, which is either going to be my Extract or Replicat, and associated files. So in an Extract environment, we have our parameter file and a report file and our checkpoint files. The parameter file, the .prm file, is going to list out which objects we’re going to capture and how we’re going to capture that data. It also controls what we’re going to be writing to the trail file and where that trail file exists. The report file is really just a log of what’s going on in that Extract process, how it’s working, what tables it’s encountered. It’s used for any troubleshooting to make sure everything is running smoothly. And then you also have the checkpoint files. The checkpoint files and report files should not be modified by the user; the parameter file can be. The checkpoint files are going to include information about where that process is reading from, where it’s writing to, and any open transaction that it’s tracking as part of the bounded recovery or cache manager functionality. 05:54 Nikita: How do you go about creating an Extract group? Nick: The Extract group can be created by doing an Add Extract command or through the UI. Each Extract must also have a unique name.
On the Extract process side, there is an eight-character hard limit for the name itself. And so, you can’t have an Extract process with a long descriptive name like MyExtractForToday, because that’s more than eight characters. 06:17 Lois: Nick, I was wondering, is there a simple way to identify what an Extract or Replicat is doing? Nick: If you need something to help identify what that Extract or Replicat is doing or the description of it, we do have a description field. So when you do the Add Extract or Add Replicat, there is a DESC field that allows you to add more details in. And this is really key because it allows you to put a lot more information that’s going to show up in all the log files at the service manager level. And any time you do an info on the service it’ll also bring up that description field so you can see what’s going on. That way, if you get an alert or a watch, or you need to keep track of something, you can easily identify what that process is doing and what it’s replicating. 07:06 Adopting a multicloud strategy is a big step towards future-proofing your business and we’re here to help you navigate this complex landscape. With our suite of courses, you'll gain insights into network connectivity, security protocols, and the considerations of working across different cloud platforms. Start your journey to multicloud today by visiting mylearn.oracle.com. 07:32 Nikita: Welcome back! Before the break, we were talking about the description field, which helps identify what the Extract is doing. Nick, are there any best practices to keep in mind when naming a group? Nick: You also don’t want to use any special characters when naming the group, especially, you know, things like slashes or dashes. You don’t want to use spaces in them, just really stick to alphanumeric characters only. The group names are also case insensitive, so EDEPT all capitalized is the same as edept in lowercase. The other thing that you don’t want to do, and this isn’t a hard restriction, it’s just more of a friendly reminder, is don’t end your group name with a numeric value. The report files themselves end in numeric values, so you’ll have report files numbered 0, 1, 2, 3, and so on. If you were to end your group name with a numeric value, then it can often be confused for a report file. And so you don’t want to really do that. But otherwise you’re free to call it whatever you want. 08:39 Lois: Got it. What about naming conventions? Are there any rules that apply? Nick: You can use whatever naming convention you want, but again, try and follow these best practices. No strange characters and don’t end your process names with a numeric value. 08:53 Nikita: Can you explain the role of parameter and trail files in the Extract process? Nick: The parameter files are really where GoldenGate is doing all of its hard work. And there are two kinds of parameter files: there’s the GLOBALS file that’s configured at the service manager level, and then you have your actual process parameter files that are configured at your Extract, Replicat, and distribution service levels. We also have our trail files. We’ve talked a little bit about trail files, as they’re the continuous source of information for GoldenGate. They can exist on the source or target or intermediate system. An Extract process can write to multiple trail files, but a trail file can only be written to by one process at a time. A trail file name also consists of a two-letter abbreviation, and then we have that nine-digit sequence number.
And then the processes that read a trail file include the Distribution Path or Target Initiated Path, as well as the Replicat. And again, just as a quick reminder, all the trail files are stored in the OGG_VAR_HOME/lib/data directory. This is important because they do grow pretty rapidly, so you’ll want to keep track of the space used in that directory. 10:05 Nikita: How does GoldenGate handle data extraction for different database systems? Nick: You have a couple of different options. For non-Oracle databases, you have a classic Extract. And these read directly from the transaction logs themselves or they call an API within that source database. For example, within DB2, we actually use the DB2 API to pull data out of the transaction logs. In something like Postgres, we’re getting data directly out of the transaction logs by reading it. And that’s put in there by the test_decoding plugin. With Oracle, it’s a little bit different. Because it’s actually integrated in with the database kernel itself, all the log mining is done inside the database engine. All GoldenGate is doing is connecting to that log mining area and getting the data from it. We also have the option of doing standby redo logs or downstream capture, which allows GoldenGate to read data from the standby redo logs. It’s still an integrated Extract. So GoldenGate isn’t directly reading the standby redo logs, but it allows you to set it up in a different way. And then we have an initial load Extract, which pulls the base data out of the tables by doing a select star on the tables. And this does not include any change data. So the initial loads and what we call tran log Extracts are separate. 11:24 Lois: Before we wrap up, can you quickly walk us through the key steps to configure an Extract? Nick: So the first thing we want to do is set up a restart profile so that if the process fails, it’ll automatically restart. Next thing we do is we create the Extract parameter file. Then we’ll go ahead and register the Extract with the database. This is important in Oracle and also with a couple of other databases that we support where we actually need to tell the database engine that, hey, we’re going to be setting up a process that’s going to pull data out of that environment. Then we go ahead and add the Extract process. We add the trail file so that it knows where to write to. And then we can go ahead and start the Extract process and we’ll be on our way to configuring replication. If necessary, we can configure a distribution service path as well. 12:09 Nikita: Thanks, Nick, for telling us about the Extract process. If you'd like to learn more about what we discussed today, head over to mylearn.oracle.com and check out the Oracle GoldenGate 23ai: Fundamentals course. Lois: You'll find detailed instructions to help you get started. In the next episode, we’ll go through the Distribution Path, Target Initiated Path, Receiver Server, and Initial Load. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 12:37 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
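As a quick, hypothetical sketch of the configuration steps Nick just listed for an Oracle source (the process, alias, schema, and trail names are all invented), the Extract parameter file, say exte.prm, might contain:

  EXTRACT exte
  USERIDALIAS ggadmin
  EXTTRAIL ab
  TABLE hr.*;

And from the admin client you would then run something along these lines:

  DBLOGIN USERIDALIAS ggadmin
  REGISTER EXTRACT exte DATABASE
  ADD EXTRACT exte, INTEGRATED TRANLOG, BEGIN NOW
  ADD EXTTRAIL ab, EXTRACT exte
  START EXTRACT exte

The restart profile Nick mentions is typically set up separately in the web UI, and a distribution path would then pick up trail ab if the target runs on another host.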
Oracle GoldenGate Installation
05/27/2025
Installing Oracle GoldenGate 23ai is more than just running a setup file; it’s about preparing your system for efficient, reliable data replication. In this episode, Lois Houston and Nikita Abraham welcome back Nick Wagner to break down system requirements, storage considerations, and best practices for installing GoldenGate. You’ll learn how to optimize disk space, manage trail files, and configure network settings to ensure a smooth installation. Oracle GoldenGate 23ai: Fundamentals: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Hello and welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead of Editorial Services with Oracle University, and I’m joined by Lois Houston, Director of Innovation Programs. Lois: Hi there! Last week, we took a close look at the security strategies of Oracle GoldenGate 23ai. In this episode, we’ll discuss all aspects of installing GoldenGate. 00:48 Nikita: That’s right, Lois. And back with us today is Nick Wagner, Senior Director of Product Management for GoldenGate at Oracle. Hi Nick! I’m going to get straight into it. What are the system requirements for a typical GoldenGate installation? Nick: As far as system requirements, we're going to split that into two sections. We’ve got operating system requirements and storage requirements. So with memory and disk, and I know that this isn't the answer you want, but the answer is that it varies. With GoldenGate, the amount of CPU usage that is required depends on the number of extracts and replicats. It also depends on the number of threads that you're going to be using for those replicats. Same thing with RAM and disk usage. That's going to vary based on the transaction sizes and the number of long running transactions. 01:35 Lois: And how does the recovery process in GoldenGate impact system resources? Nick: You've got two things that help the extract recovery. You've got bounded recovery, which will store transactions that stay open over a certain length of time to disk. It also has a cache manager setting that determines what gets written to disk as part of open transactions. It's not as simple an answer as, oh, it needs this much space. GoldenGate also needs to store trail files for the data that it's moving across. So if there's network latency, or if you expect a certain network outage, or you have certain SLAs for the target database that may not be met, you need to make sure that GoldenGate has enough room to store its trail files as it's writing them. The good news about all this is that you can track it. You can use parameters to set them. And we do have some metrics that we'll provide to you on how to size these environments. So a couple of things on the disk usage. The actual installation of GoldenGate is about 1 to 1.5 gig in size, depending on which version of GoldenGate you're going to be using and what database. The trail files themselves, they default to 500 megabytes apiece. A lot of customers keep them on disk longer than necessary, and so there's all sorts of purging options available in GoldenGate.
But you can set up purge rules to say, hey, I want to get rid of my trail files as soon as they're not needed anymore. But you can also say, you know what? I want to keep my trail files around for x number of days, even if they're not needed. That way they can be rebuilt. I can restore from any previous point in time. 03:15 Nikita: Let’s talk a bit more about trail files. How do these files grow and what settings can users adjust to manage their storage efficiently? Nick: The trail files grow at about 30% to 35% of the generated redo log data. So if I'm generating 100 gigabytes of redo an hour, then you can expect the trail files to be anywhere from 30 to 35 gigabytes an hour of generated data. And this is if you're replicating everything. Again, GoldenGate's got so many different options. There's so many different ways to use it. In some cases, if you're going to a distributed applications and analytics environment, like a Databricks or a Snowflake, you might want to write more information to the trail file than what's necessary. Maybe I want additional information, such as when this change happened, who the user was that made that change. I can add specific token data. You can also tell GoldenGate to log additional records or additional columns to the trail file that may not have been changed. So I can always say, hey, GoldenGate, replicate and store the entire before and after image of every single row change to your trail file, even if those columns didn't change. And so there's a lot of different ways to do it there. But generally speaking, the default settings, you're looking at about 30% to 35% of the generated redo log value. System swap can fill up quickly. You do want this as a dedicated disk as well. System swap is often used for just handling of the changes, as GoldenGate flushes data from memory down to disk. These are controlled by a couple of parameters. So because GoldenGate is only writing committed activity to the trail file, the log reader inside the database is actually giving GoldenGate not only committed activity but uncommitted activity, too. And this is so it can stay very high speed and very low latency. 05:17 Lois: So, what are the parameters? Nick: There's a cache manager overall feature, and there's a cache directory. That directory controls where that data is actually stored, so you can specify the location of the uncommitted transactions. You can also specify the cache size. And there's not only memory settings here, but there's also disk settings. So you can say, hey, once a cache size exceeds a certain memory usage, then start flushing to disk, which is going to be slower. This is for systems that maybe have less memory but more high-speed disk. You can optimize these parameters as necessary. 05:53 Nikita: And how does GoldenGate adjust these parameters? Nick: For most environments, you're just going to leave them alone. They're automatically configured to look at the system memory available on that system and not use it all. And then as soon as necessary, it'll overflow to disk. There's also intelligent settings built within these parameters and within the cache manager itself that if it starts seeing a lull in activity or your traditional OLTP type responses to actually free up the memory that it has allocated. Or if it starts seeing more activity around data warehousing type things where you're doing large transactions, it'll actually hold on to memory a little bit longer. So it kinda learns as it goes through your environment and starts replicating data. 
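For reference, the knobs Nick is describing live in the Extract parameter file and in the trail definition, and in most environments you would simply leave them at their defaults. A hypothetical tuning sketch, with invented paths and sizes, might look like this in the Extract parameter file:

  CACHEMGR CACHESIZE 8GB, CACHEDIRECTORY /u01/ogg/dirtmp 200GB

and like this when adding the trail from the admin client, to override the default 500 MB trail file size:

  ADD EXTTRAIL ab, EXTRACT exte, MEGABYTES 1000

The point is simply that cache sizing, overflow location, and trail file size are explicit, adjustable settings rather than fixed costs.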
06:37 Lois: Is there anything else you think we should talk about before we move on to installing GoldenGate? Nick: There's a couple additional things you need to think of with the network as well. So when you're deploying GoldenGate, you definitely want it to use the fastest network. GoldenGate can also use a reverse proxy, especially important with microservices. For the reverse proxy, we typically recommend Nginx. And it allows you to access any of the GoldenGate microservices using a single port. GoldenGate also needs either host names or IP addresses to do its communication and to ensure the system is available. It does a lot of communication through TCP/IP as well as WSS. And then there are also firewalls to think about. So you want to make sure that the firewalls are open for ingress and egress for GoldenGate, too. There's a couple of different privileges that GoldenGate needs when you go to install it. You'll want to make sure that GoldenGate has the ability to write to the home where you're installing it. That's kind of obvious, but we need to say it anyways. There's a utility called oggca.sh. That's the GoldenGate Configuration Assistant that allows you to set up your first deployments and manage additional deployments. That needs permissions to write to the directories where you're going to be creating the new deployments. The extract process needs connection and permissions to read the transaction logs or backups. This is not important for Oracle, but for non-Oracle it is. And then we also recommend a dedicated database user for the extract and replicat connections. 08:15 Are you keen to stay ahead in today's fast-paced world? We’ve got your back! Each quarter, Oracle rolls out game-changing updates to its Fusion Cloud Applications. And to make sure you’re always in the know, we offer New Features courses that give you an insider’s look at all of the latest advancements. Don't miss out! Head over to mylearn.oracle.com to get started. 08:41 Nikita: Welcome back! So Nick, how do we get started with the installation? Nick: So when we go to the install, the first thing you're going to do is go ahead and go to Oracle's website and download the software. Because of the way that GoldenGate works, there's only a couple moving parts. You saw the microservices. There's five or six of them. You have your extract, your replicat, your distribution service, trail files. There's not a lot of moving components. So if something does go wrong, usually it affects multiple customers. And so it's very important that when you go to install GoldenGate, you're using the most recent bundle patch. And you can find this within My Oracle Support. It's not always available directly from OTN or from the Oracle e-delivery website. You can still get them there, but we really want people going to My Oracle Support to download the most recent version. There's a couple of environment variables and certificates that you'll set up as well. And then you'll run the Configuration Assistant to create your deployments. 09:44 Lois: Thanks, Nick, for taking us through the installation of GoldenGate. Because these are highly UI-driven topics, we recommend that you take a deep dive into the GoldenGate 23ai Fundamentals course, available on mylearn.oracle.com. Nikita: In our next episode, we’ll talk about the Extract process. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 10:08 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes.
We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
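As a quick, hedged illustration of the install flow Nick outlined in this episode, the sequence on Linux looks roughly like this; the file names, version, and paths are placeholders, and the bundle you actually download should be the latest one on My Oracle Support:

  # unzip the downloaded bundle and run the installer from the extracted media
  unzip goldengate_23ai_linux_x64.zip -d /tmp/ogg_install
  /tmp/ogg_install/runInstaller          # choose a version-numbered home, e.g. /u01/app/oracle/product/ogg23.7
  # once the binaries are installed, create the first deployment with the Configuration Assistant
  export OGG_HOME=/u01/app/oracle/product/ogg23.7
  $OGG_HOME/bin/oggca.sh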
/episode/index/show/oracleuniversitypodcast/id/36283290
info_outline
Oracle GoldenGate 23ai Security Strategies
05/20/2025
Oracle GoldenGate 23ai Security Strategies
GoldenGate 23ai takes security seriously, and this episode unpacks everything you need to know. GoldenGate expert Nick Wagner breaks down how authentication, access roles, and encryption protect your data. Learn how GoldenGate integrates with identity providers, secures communication, and keeps passwords out of storage. Understand how trail files work, why they only store committed data, and how recovery processes prevent data loss. Whether you manage replication or just want to tighten security, this episode gives you the details to lock things down without slowing operations. Oracle GoldenGate 23ai: Fundamentals: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Welcome, everyone! This is our fourth episode on Oracle GoldenGate 23ai. Last week, we discussed the terminology, different processes and what they do, and the architecture of the product at a high level. Today, we have Nick Wagner back with us to talk about the security strategies of GoldenGate. 00:56 Lois: As you know by now, Nick is a Senior Director of Product Management for GoldenGate at Oracle. He’s played a key role as one of the product designers behind the latest version of GoldenGate. Hi Nick! Thank you for joining us again. Can you tell us how GoldenGate takes care of data security? Nick: So GoldenGate authentication and authorization is done in a couple of different ways. First, we have user credentials for GoldenGate for not only the source and target databases, but also for GoldenGate itself. We have integration with third-party identity management products, and everything that GoldenGate does can be secured. 01:32 Nikita: And we must have some access roles, right? Nick: There's four roles built into the GoldenGate product. You have your security role, administrator, operator, and user. They're all hierarchical. The most important one is the security user. This user is going to be the one that provides the administrative tasks. This user is able to actually create additional users and assign roles within the product. So do not lose this password and this user is extremely important. You probably don't want to use this security user as your everyday user. That would be your administrator. The administrator role is able to perform all administrative tasks within GoldenGate. So not only can they go in and create new extracts, create new replicats, create new distribution services, but they can also start and stop them. And that's where the operator role is and the user role. So the operator role allows you to go in and start/stop processes, but you can't create any new ones, which is kind of important. So this user would be the one that could go in and suspend activity. They could restart activity. But they can't actually add objects to replication. The user role is really a read-only role. They can come in. They can see what's going on. They can look at the log files. 
They can look at the alerts. They can look at all the watches and see exactly what GoldenGate is doing. But they're unable to make any changes to the product itself. 02:54 Lois: You mentioned the roles are hierarchical in nature. What does that mean? Nick: So anything that the user role does can be done by the operator. Anything that the operator and user roles can do can be done by the administrator. And anything that the user, operator, and administrator roles do can be done by the security role. 03:11 Lois: Ok. So, is there a single sign-on available for GoldenGate? Nick: We also have a password plugin for GoldenGate Connections. A lot of customers have asked for integration with whatever their single sign-on utility is, and so GoldenGate now has that with GoldenGate 23ai. So these are customer-created entities. So, we have some examples that you can use in our documentation on how to set up an identity provider or a third-party identity provider with GoldenGate. And this allows you to ensure that your corporate standards are met. As we started looking into this, as we started designing it, every single customer wanted something different. And so instead of trying to meet the needs for every customer and every possible combination of security credentials, we want you to be able to design it the way you need it. The passwords are never stored. They're only retrieved from the identity provider by the plugin itself. 04:05 Nikita: That’s a pretty important security aspect…that when it’s time to authenticate a user, we go to the identity provider. Nick: We're going to connect in and see if that password is matching. And only then do we use it. And as soon as we detect that it's matched, that password is removed. And then for the extract and replicats themselves, you can also use it for the database, data source, and data target connections, as well as for the GoldenGate users. So, it is a full-featured plugin. So, our identity provider plugin works with IAM as well as OAM. These are your standard identity manager authentication methods. The standard one is OAuth 2, as well as OIDC. And any Identity Manager that uses that is able to integrate with GoldenGate. 04:52 Lois: And how does this work? Nick: The way that it works is pretty straightforward. Once the user logs into the database, we're going to hand off authentication to the identity provider. Once the identity provider has validated that user's identity and their credentials, then it comes back to GoldenGate and says that user is able to log in to either GoldenGate or the application or the database. Once the user is logged in, we get that confirmation that's been sent out and they can continue working through GoldenGate. So, it's very straightforward on how it works. There's also a nice little UI that will help set up each additional user within those systems. All the communication is also secured as well. So any communication done through any of the GoldenGate services is encrypted using HTTPS. All the REST calls themselves are all done using HTTPS as well. All the data protection calls and all the communication across the network when we send data across a distribution service is encrypted using a secure WebSocket. And there's also trail file encryption at the operating system level for data at REST. So, this really gives you the full level of encryption for customers that need that high-end security. GoldenGate does have an option for FIPS 140-2 compliance as well. So that's even a further step for most of those customers. 
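To make a couple of those points concrete, here is a hedged sketch: the admin client connecting to a deployment over HTTPS, and the extract parameter that encrypts trail data at rest. The host, port, deployment, and user names are invented for illustration:

  $ adminclient
  OGG (not connected) 1> CONNECT https://gghub.example.com:443 DEPLOYMENT finance AS ggsecurity PASSWORD ********

  -- and in an extract parameter file, trail files written to disk can be encrypted:
  ENCRYPTTRAIL AES256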
06:12 Nikita: That’s impressive! Because we want to maintain the highest security standards, right? Especially when dealing with sensitive information. I now want to move on to trail files. In our last episode, we briefly spoke about how they serve as logs that record and track changes made to data. But what more can you tell us about them, Nick? Nick: There are two different processes that write to the trail files. The extract process will write to the trail file and the receiver service will write to the trail file. The extract process is going to write to the trail file as it's pulling data out of that source database. Now, the extract process is controlled by a parameter file that says, hey, here's the exact changes that I'm going to be pulling out. Here's the tables. Here's the rows that I want. As it's pulling that data out and writing it to the trail files, it's ensuring that those trail files have enough information so that the replicat process can actually construct a SQL statement and apply that change to that target platform. And so there's a lot of ways to change what's actually stored in those trail files and how it's handled. The trail files can also be used for initial loads. So when we do the initial load through GoldenGate, we can grab and write out the data for those tables, and that excludes the change data. So an initial load is pulling the data directly from the tables themselves, whereas ongoing replication is pulling it from the transaction logs. 07:38 Lois: But do we need to worry about rollbacks? Nick: Our trail files contain committed data only and all data is sequential. So these are two important things. Because it contains committed data only, we don't need to worry about rollbacks. We also don't need to worry about position within that trail file because we know all data is sequential. And so as we're reading through the trail file, we know that anything that's written in a prior location in that trail file was committed prior to something else. And as we get into the recovery aspects of GoldenGate, this will all make a lot more sense. 08:13 Lois: Before we do that, can you tell us about the naming of trail files? Nick: As far as naming goes, because the trail files do reside on the operating system, you start with a two-letter trail file abbreviation and then a nine-digit sequential value. So, you almost look at it as like an archive log from Oracle, where we have a prefix and then a suffix, which is numeric. Same kind of thing. So, we have our two-letter, in this case, an ab, and then we have a nine-digit number. 08:47 Transform the way you work with Oracle Database 23ai! This cutting-edge technology brings the power of AI directly to your data, making it easier to build powerful applications and manage critical workloads. Want to learn more about Database 23ai? Visit mylearn.oracle.com to pick from our range of courses and enroll today! 09:12 Nikita: Welcome back! Ok, Nick. Let’s get into the GoldenGate recovery process. Nick: When we start looking at the GoldenGate recovery process, it essentially makes GoldenGate kind of point-in-time like. So on that source database, you have your extract process that's going to be capturing data from the transaction logs. In the case of Oracle, the Oracle Database is actually going to be reading those transaction logs for us and passing the change records directly to GoldenGate. We call these LCRs, or Logical Change Records.
And so with the integrated extract in GoldenGate, the extract portion tells the database, hey, I'm now going to be interested in the following list of tables. And it gives a list of tables to that internal component, the log mining engine within the database. And it says, OK, I'm now pulling data for those tables and I'm going to send you those table changes. And so as the extract process gets sent those changes, it's going to have checkpoint information. So not only does it know where it was pulling data from out of that source database, but also what it's writing to the trail file. The trail files themselves are all sequential and they have only committed data, as we talked about earlier. The distribution service has checkpoint information that says, hey, I know where I'm reading from in the previous trail file, and I know what I've sent across the network. The receiver service is the same thing. It knows what it's receiving, as well as what it's written to the trail file on the target system. The replicat also has a checkpoint. It knows where it's reading from in the trail file, and then it knows what it's been applying into that target database. This is where things start to become a little complicated. Our replicat process in most cases is parallel, so it'll have multiple threads applying data into that target database. Each of those threads is applying different transactions. And because of the way that the parallelism works in the replicat process, you can actually get situations where one replicat thread might be applying a transaction higher than another thread. And so you can eliminate that sequential or serial aspect of it, and we can get very high throughput speeds through the replicat. But it means that the checkpoint needs to be kind of smart enough to know how to rebuild itself if something fails. 11:32 Lois: Ok, sorry Nick, but can you go through that again? Maybe we can work backwards this time? Nick: If the replicat process fails, when it comes back up, it's going to look to its checkpoint tables inside that target database. These checkpoint tables keep track of where each thread was at when it crashed. And so when the replicat process restarts, it goes, oh, I was applying these threads at this location in these SCNs. It'll then go and read from the trail file and say, hey, let me rebuild that data and it only applies transactions that it hasn't applied yet to that target system. There is a synchronize replicat command as well that will tell a crashed replicat to say, hey, bring all your threads up to the same high watermark. It does that process automatically as it restarts and continues normal replication. But there is an option to do it just by itself too. So that's how the replicat kind of repairs and recovers itself. It'll simply look at the trail files. Now, let's say that the replicat crashed, and it goes to read from the trail files when it restarts and that trail file is missing. It'll actually communicate to the distribution, or excuse me, to the receiver service and say, hey, receiver service, I don't have this trail file. Can you bring it back for me? And the receiver service will communicate downstream and say, hey, distribution service, I need you to resend me trail file number 6. And so the distribution service will resend that trail file so that the replicat can reprocess it. So it's often nice to have redundant environments with GoldenGate so we can have those trail files kind of around for availability.
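A hedged admin client sketch of the pieces just described: a checkpoint table on the target, a parallel replicat that uses it, and the synchronize command that brings every apply thread up to the same high watermark (all names are illustrative):

  DBLOGIN USERIDALIAS ggtarget
  ADD CHECKPOINTTABLE ggadmin.ogg_checkpoints
  ADD REPLICAT repfin, PARALLEL, EXTTRAIL ab, CHECKPOINTTABLE ggadmin.ogg_checkpoints
  START REPLICAT repfin
  -- after a crash, bring all apply threads to the same point before resuming
  SYNCHRONIZE REPLICAT repfin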
13:13 Nikita: What if one of these files gets corrupted? Nick: If one of those trail files is corrupt, let's say that a trail file on the target site became corrupt and the replicat can't read from it for one reason or another. Simply stop the replicat process, delete the corrupt trail file, restart the replicat process, and now it's going to rebuild that trail file from scratch based on the information from the source GoldenGate environment. And so it's very recoverable. Handles it all very well. 13:40 Nikita: And can the extract process bounce back in the same way? Nick: The extract process can also recover in a similar way. So if the extract process crashes, when it restarts itself, there's a number of things that it does. The first thing is it has to rebuild any open transactions. So it keeps all sorts of checkpoint information about the oldest transaction that it's keeping track of, any open transactions that haven't been committed, and any other transactions that have been committed that it's already written to the trail file. So as it's reprocessing that data, it knows exactly what it's committed to trail and what hasn't been committed. And there's a number of ways that it does this. There's two main components here. One of them is called bounded recovery. Bounded recovery will allow you to set a time limit on transactions that span a certain length of time that they'll actually get flushed out to disk on that GoldenGate Hub. And that way it'll reduce the amount of time it takes GoldenGate to restart the extract process. And the other component is cache manager. Cache manager stores uncommitted transactions. And so it's a very elegant way of rebuilding itself from any kind of failure. You can also set up restart profiles so that if any process does crash, the GoldenGate service manager can automatically restart that service an x number of times across y time span. So if I say, hey, if my extract crashes, then attempt to restart it 100 times every 5 seconds. So there's a lot of things that you can do there to make it really nice and automatic repair itself and automatically resilient. 15:18 Lois: Well, that brings us to the end of this episode. Thank you, Nick, for going through the security strategies and recovery processes in such detail. Next week, we’ll look at the installation of GoldenGate. Nikita: And if you want to learn more about the topics we discussed today, head over to mylearn.oracle.com and take a look at the Oracle GoldenGate 23ai Fundamentals course. Until next time, this is Nikita Abraham… Lois: And Lois Houston signing off! 15:44 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
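To make the trail-repair steps from this episode concrete, the sequence is roughly the following; the replicat name, deployment path, and trail file name are assumptions, and the deleted file is rebuilt from the source deployment once the replicat restarts:

  -- in the admin client, stop the apply process
  STOP REPLICAT repfin
  -- at the operating system level on the target hub, remove the damaged trail file
  rm /u01/deployments/finance/var/lib/data/ab000000042
  -- back in the admin client, restart; the missing trail file is re-requested and rebuilt
  START REPLICAT repfin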
/episode/index/show/oracleuniversitypodcast/id/36256270
info_outline
GoldenGate 23ai: Terminology & Architecture
05/13/2025
GoldenGate 23ai: Terminology & Architecture
In this episode, Lois Houston and Nikita Abraham, along with Nick Wagner, focus on GoldenGate’s terminology and architectural evolution. Nick defines source and target systems, which are crucial for data replication, and then moves on to explain the data extraction and replication processes. He also talks about the new microservices architecture, which replaces the classic architecture, offering benefits like simplified management, enhanced security, and a user-friendly interface. Nick highlights how this architecture facilitates easy upgrades and provides a streamlined experience for administrators. Oracle GoldenGate 23ai: Fundamentals: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston: Director of Innovation Programs. Lois: Hi there! Thanks for joining us again as we make our way through Oracle GoldenGate 23ai. Last week, we discussed all the new features introduced in 23ai and today, we’ll move on to the terminology, the different processes and what they do, and the architecture of the product at a high level. 00:56 Nikita: Back with us is Nick Wagner, Senior Director of Product Management for Oracle GoldenGate. Hi Nick! Let’s get into some of the terminology. What do we actually call stuff in GoldenGate? Nick: Within GoldenGate, we have our source systems and our target systems. The source is where we're going to be capturing data from, the targets, where we're going to be applying data into. And when we start talking about things like active-active or setting up GoldenGate for high availability, where your source can also be your target, it does become a little bit more complex. And so in some of those cases, we might refer to things as East and West, or America and Europe, or different versions of that. We also have a couple of different things within the product itself. We have what we call our Extract and our Replicat. The Extract is going to be the process that pulls the data out of the database, our capture technology. Our Replicat’s going to be the one that applies the data into the target system, or you can also look at it as a push technology. We have what we call our Distribution Path. Our Distribution Path is going to be how we're sending the data across the network. A lot of times when customers run GoldenGate, they don't have the luxury of just having a single server of GoldenGate that can pull data from one database and push data into another one. They need to set up multiple hops of that data. And so in that case, we would use what we call a Distribution Path to send that data from one system to the next. We also have what we call a Target Initiated Path. It's kind of a subset of your Distribution Path, but it allows you to communicate from a less secure environment into a more secure environment. 02:33 Lois: Nick, what about parameter names. I’ve seen them in uppercase…title case…does that matter? Nick: GoldenGate has a lot of parameters. 
This is something you'll see all over the place within GoldenGate itself. These parameters are in your Extract and Replicat parameter files and in your distribution path parameter files. Parameters for GoldenGate are case insensitive. Within your own environments, you can set it up to have lowercase, mixed case, whatever you want, but just be aware that they are case insensitive. GoldenGate doesn't care, it's just for readability. And then we also have something called trail files. Trail files are where GoldenGate stores all the data before we're able to apply it into that target system. Think about it as our queuing mechanism, and we're queuing everything outside the database so that we're not overloading those database environments. And that's some of the terminology for the product itself. We also have microservices within GoldenGate. 03:31 Nikita: And at the heart of everything is the Service Manager, right? Talk to us about what it is and what it does. Nick: The service manager is responsible for making sure that everything else is up and running. If you are familiar with GoldenGate classic architecture, this is kind of similar to the GoldenGate manager, where that process was there to make sure that processes were running, that the trail files, or excuse me, that certain error logs were getting written out. If a process went down, the manager would restart that process. The service manager is performing a lot of those same functions. Now attached to the service manager, we have our configuration service. This is new in GoldenGate 23ai. This configuration service is going to allow you to set up GoldenGate for highly available environments. So you can build HA into GoldenGate itself using the configuration service. 04:22 Lois: And what does this configuration service do? Nick: This configuration service essentially moves the checkpoint files that used to be on disk into a database so that everything can be stored inside of a database. Also attached to the service manager, we have the performance metric service. This is a service that is going to be gathering all the performance metrics of GoldenGate. So it's going to tell you how fast things are going, what the latencies are, how many bytes per second we're reading from the transaction logs or writing to our trail files, and how quickly a distribution path is sending data across a network. If you want to know any of your lag information, you'll get it from the performance metrics server. We also have the receiver service and the distribution service. These two work hand in hand to establish network communication between two GoldenGate environments. So on what we call our source system, we have a distribution service that's going to send the data to our target system. On the target system, a receiver service is going to receive that data and then rewrite the trail files. We also have the administration service that's responsible for authentication and authorization of the users, as well as making sure that people have access to the right information. 05:33 Nikita: Ok. Moving on to deployment, how is GoldenGate actually deployed, Nick? Nick: GoldenGate is kinda nice. So the way that the product is installed is you install the GoldenGate environment and that's what we call our service manager deployment under a specific GoldenGate home. So the software binaries themselves get installed under a home, we'll say /u01/ogg23ai. Now once I've installed GoldenGate once, that's my OGG home.
I can now have any number of service managers and deployments tied to that same home. 06:11 Lois: Ok, let’s work with an example to make this simpler. Let’s say I've got a service manager that's going to be responsible for three different deployments: Accounting, Finance, and Sales. Nick: Each of these deployments is going to reside in its own directory. Each of these deployments is going to have its own set of microservices. And so this also means that each of these deployments can have their own set of users. So the people that access the GoldenGate accounting deployment can be different than the ones that access the sales deployment. This means, with this distribution of roles, that I can have somebody come in and administer the sales database, but they wouldn't have any information or any access to accounting or finance. And this is very important, it allows you to really pull that information apart and separate it. Each of these environments also has its own set of parameter files, Extract process, Replicat process, distribution services, and everything. So it's a very nice way of splitting things up, but having them all tied to the same GoldenGate home. And this home is very important. So I can take a deployment, let's say my finance deployment, and if I want to move it to a new GoldenGate home and that GoldenGate home is a different version, like let's say that my original home is 23.4, my new GoldenGate home is 23.7, I simply stop that GoldenGate deployment, in this case the finance deployment. I change its OGG home from 23.4 to 23.7. I restart the deployment, and that deployment is automatically upgraded to the new environment and attached to the new system. So it makes upgrading very, very simple, very easy, very elegant. 07:53 Nikita: Ok. So, we’ve spoken about the services…some of the terminology. Let’s get into the architecture next. Nick: So when we talk about the architecture for GoldenGate, we used to have two different architectures. We had a classic architecture and a microservices architecture. Classic architecture was something that's been around since the very beginning of GoldenGate in the late '90s. We announced that that architecture was deprecated in 19c. And in Oracle terms, deprecated means that the feature is no longer going to be enhanced and it'll be patched selectively. And at some point in the future, it'll be entirely desupported. Well, GoldenGate 23ai is that future. And so in 23ai, the classic architecture is desupported, which means that it's no longer in the build at all. And so it's just microservices architecture. 08:41 Lois: Is there a tool to assist with this migration? Nick: We do have a migration utility that will convert an old classic architecture into the new microservices architecture. But there is quite a bit of a learning curve to the new microservices architecture. So it's important that we go through how it works and the changes. 09:04 Are you looking to optimize your implementation strategies and improve efficiency? We have a solution for you! Our new Oracle Fusion Cloud Applications Foundations training and certification program. You’ll learn to leverage Oracle Modern Best Practice (OMBP) to re-imagine business processes using advanced technologies in Oracle Fusion Cloud Applications such as AI, mobile, analytics, and more. Visit mylearn.oracle.com to get started today. 09:37 Nikita: Welcome back! Nick, what are the benefits of this microservices architecture? Nick: It's got that simplified lifecycle for patching and upgrading.
A lot of the GoldenGate patches that you get, especially these bundle patches, are complete installs as well. So you can go into My Oracle Support and download a complete install of a patch and that way, you don't have to use OPatch to apply them. The only time you'll be using OPatch is for one-off patches or smaller patches that need to be applied to your GoldenGate system. The microservices product has the same trusted Capture and Apply process that Classic did. There are almost no changes between the two except for how they communicate with their parent processes. And so the same logic that you use to pull data from Oracle or to apply data into Oracle is all the same. 10:25 Lois: And has the interface been upgraded as well? Nick: We've added a really nice, easy-to-use web interface for the microservices version of GoldenGate. Not only does this web interface work with all your standard browsers, but it's also mobile friendly too. So I can actually control and administer GoldenGate right through my mobile device. It also has new secure remote administration. This is something that the classic architecture was really missing. And so in the classic architecture, to use the command line interface, you had to log into the database server where GoldenGate was installed. Now, the command line interface, as well as the web interface and the REST API, all use remote administration and authentication. So that means that I can install the new command line interface or what we call admin client on my laptop locally and I can connect to any GoldenGate deployment as long as I have the username and password for that deployment. It's also more secure. GoldenGate microservices can also be deployed on-premises or in OCI as a service and now also on these third-party clouds like Azure and Google Cloud. And it's also easier for developers to integrate with the APIs themselves. Everything that GoldenGate does through the admin client as well as the web UI can all be traced. The REST API calls for GoldenGate are all fully published so you can get them directly from the documentation, and you can build your own web interface if you want to. So it makes it very easy. The REST APIs are also streamlined. With a single REST API call, I can do something like add an Extract process, create it, set up my parameter file, and set up the trail files all with a single API command. Whereas in the past, it would require multiple command line interface commands to do that same thing. So it's extremely elegant, very advanced. 12:16 Nikita: What does the microservices architecture look like? I know it’s a bit complicated when we’re not actually looking at a diagram of it, but just at a high level, can you explain the different parts of it? Nick: It's pretty straightforward. But essentially what you've got on each system is a service manager. That service manager is then going to have a number of processes or services beneath it. It'll have the configuration service that stores the checkpoint information for GoldenGate. It'll have the administrative service for the authentication and users, the distribution service to send the data across a network, a receiver service to receive that information, performance metrics to get the performance statistics out of GoldenGate. And then of course, you also have your Extracts and Replicats, the capture and apply technology. Each of those Extracts and Replicats will then connect to a database. On the Extract side of things, that Extract is going to write to trail files.
Those trail files are then going to be sent across the network where they're rebuilt on the target system and the Replicat’s going to consume them and apply them into the target database. So the Replicat behaves almost like an end user. So it's taking that trail file data and simply converting it to DML operations, insert, update, delete, or a DDL operation in the case of Oracle, alter table, create table, et cetera, to go into that target database. 13:39 Lois: To look at a diagram of this architecture and learn about it in more detail, check out the Oracle GoldenGate 23ai Fundamentals course on mylearn.oracle.com. So, Nick, if I’m looking to deploy GoldenGate, what should I primarily keep in mind? Nick: So as you go to install GoldenGate and you look at a deployment, there's a couple of important environment variables that you want to make sure you're aware of. So one of the first ones is your OGG_Home. This environment variable is extremely important. This is the location of the GoldenGate software itself. And I want to stress how important it is to always use version numbers when you're setting up your GoldenGate home. When you go to install the software, if you're installing GoldenGate 23.5, use 23.5 within the home directory structure. If you're installing GoldenGate 23.7, use 23.7 inside that directory structure. 14:33 Nikita: Right… that way I’ll always know which versions are which, and it’ll make it really easy to upgrade and move from one version to the next. Ok, got it. What else, Nick? Nick: There's a couple other important directories. You have your OGG_ETC_HOME. This is where things like the configuration files are going to reside, parameter files, all your certificates for security, including the wallets where we store the credentials for not only the database accounts, but also for the GoldenGate user accounts as well. We have our GoldenGate variable home directory or VAR home. This is where all the GoldenGate log files are residing. And these are the log files that allow you to see what's going on in GoldenGate for auditing purposes. Anytime anybody makes a change to GoldenGate, you're going to see information go into the log files on what was happening and how it was working and what they did, what time they did, what command they issued. Another big important feature about these log files is it also gives you error information and troubleshooting details. So if you ever need to find out what happened in GoldenGate, what went wrong, you would look at these log files to find out that information. And then you also have your OGG_DATA_HOME. This is where those trail files are going to go. Essentially, this is kind of the queuing or overflow for GoldenGate. There's a couple of other additional components. We've got the admin client. This is our command line utility. If you don't want to use a web browser or prefer a command line utility, you can use the admin client. The admin client is also fully scriptable. So if you wanted to write scripts that would go off and automate things in GoldenGate, you can do that. A lot of customers did that with GGSCI in the classic architecture. You can do the same thing now with the admin client. The other component is the microservices security authentication and authorization services. These handle communication security, especially making sure that any passwords or usernames and everything like that is all encrypted. And instead of using an actual username and password, everything through the product is going to be done through an alias. 
And then it also handles all the authorization authentication, permissions, user accountability, and roles within GoldenGate. 16:39 Lois: Anything else you’d like to talk about before we wrap up for today, Nick? Nick: I also wanted to take a minute to talk about the REST API. All the microservices provide REST APIs to administer them and all of these are fully documented. They can be used by any client that can make REST API calls. So if you wanted to use Python, cURL, a web browser, you can do that as well. They're all just HTTP or HTTPS calls, get, put, patch, the standard REST API standards. And then GoldenGate does provide our admin client as well as a WebUI that use these REST APIs under the covers if you ever wanted to get a more advanced look at how it works. 17:18 Nikita: Well, that’s all the time we have for today. Thanks for joining us, Nick. Lois: Yes, thanks Nick. We look forward to having you back next week to talk with us about security strategies and data recovery. Nikita: And if you want to learn more about the topics we discussed today, head over to mylearn.oracle.com and take a look at the Oracle GoldenGate 23ai Fundamentals course. Until next time, this is Nikita Abraham… Lois: And Lois Houston, signing off! 17:43 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
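Since the admin client is fully scriptable, the kind of setup described in this episode can be captured in a short command file. This is a hedged sketch only; the host, deployment, alias, process, and trail names are invented, and an Oracle integrated extract also needs to be registered with the database:

  CONNECT https://gghub.example.com:443 DEPLOYMENT finance AS ggadmin PASSWORD ********
  DBLOGIN USERIDALIAS ggsource
  ADD EXTRACT exfin, INTEGRATED TRANLOG, BEGIN NOW
  REGISTER EXTRACT exfin DATABASE
  ADD EXTTRAIL ab, EXTRACT exfin
  START EXTRACT exfin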
/episode/index/show/oracleuniversitypodcast/id/36256220
info_outline
Oracle GoldenGate 23ai: New Features & Product Family
05/06/2025
Oracle GoldenGate 23ai: New Features & Product Family
In this episode, Lois Houston and Nikita Abraham continue their deep dive into Oracle GoldenGate 23ai, focusing on its evolution and the extensive features it offers. They are joined once again by Nick Wagner, who provides valuable insights into the product's journey. Nick talks about the various iterations of Oracle GoldenGate, highlighting the significant advancements from version 12c to the latest 23ai release. The discussion then shifts to the extensive new features in 23ai, including AI-related capabilities, UI enhancements, and database function integration. Oracle GoldenGate 23ai: Fundamentals: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ----------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Last week, we introduced Oracle GoldenGate and its capabilities, and also spoke about GoldenGate 23ai. In today’s episode, we’ll talk about the various iterations of Oracle GoldenGate since its inception. And we’ll also take a look at some new features and the Oracle GoldenGate product family. 00:57 Lois: And we have Nick Wagner back with us. Nick is a Senior Director of Product Management for GoldenGate at Oracle. Hi Nick! I think the last time we had an Oracle University course was when Oracle GoldenGate 12c was out. I’m sure there have been a lot of advancements since then. Can you walk us through those? Nick: GoldenGate 12.3 introduced the microservices architecture. GoldenGate 18c introduced support for Oracle Autonomous Data Warehouse and Autonomous Transaction Processing Databases. In GoldenGate 19c, we added the ability to do cross-endian remote capture for Oracle, making it easier to set up the GoldenGate OCI service to capture from environments like Solaris, SPARC, and HP-UX and replicate into the Cloud. Also, GoldenGate 19c introduced a simpler process for upgrades and installation of GoldenGate where we released something called a unified build. This means that when you install GoldenGate for a particular database, you don't need to worry about the database version when you install GoldenGate. Prior to this, you would have to install a version-specific and database-specific version of GoldenGate. So this really simplified that whole process. In GoldenGate 23ai, which is where we are now, this really is a huge release. 02:16 Nikita: Yeah, we covered some of the distributed AI features and high availability environments in our last episode. But can you give us an overview of everything that’s in the 23ai release? I know there’s a lot to get into but maybe you could highlight just the major ones? Nick: Within the AI and streaming environments, we've got interoperability for database vector types, heterogeneous capture and apply as well. Again, this is not just replication between Oracle-to-Oracle vector or Postgres to Postgres vector, it is heterogeneous just like the rest of GoldenGate. The entire UI has been redesigned and optimized for high speed.
And so we have a lot of customers that have dozens and dozens of extracts and replicats and processes running and it was taking a long time for the UI to refresh those and to show what's going on within those systems. So the UI has been optimized to be able to handle those environments much better. We now have the ability to call database functions directly from COLMAP. And so when you do transformation with GoldenGate, we have about 50 or 60 built-in transformation routines for string conversion, arithmetic operations, date manipulation. But we never had the ability to directly call a database function. 03:28 Lois: And now we do? Nick: So now you can actually call that database function, database stored procedure, database package, return a value and that can be used for transformation within GoldenGate. We have integration with identity providers, being able to use token-based authentication and integrate with things like Azure Active Directory and your other single sign-on for the GoldenGate product itself. Within Oracle 23ai, there are a number of new features. One of those cool features is something called lock-free reservation columns. So this allows you to have a row, a single row within a table and you can identify a column within that row that's like an inventory column. And you can have multiple different users and multiple different transactions all updating that column within that same exact row at that same time. So you no longer have row-level locking for these reservation columns. And it allows you to do things like shopping carts very easily. If I have 500 widgets to sell, I'm going to let any number of transactions come in and subtract from that inventory column. And then once it gets below a certain point, then I'll start enforcing that row-level locking. 04:43 Lois: That’s really cool… Nick: The one key thing that I wanted to mention here is that because of the way that the lock-free reservations work, you can have multiple transactions open on the same row. This is only supported for Oracle to Oracle. You need to have that same lock-free reservation data type and availability on that target system if GoldenGate is going to replicate into it. 05:05 Nikita: Are there any new features related to the diagnosability and observability of GoldenGate? Nick: We've improved the AWR reports in Oracle 23ai. There are now seven sections that are specific to Oracle GoldenGate to allow you to really go in and see exactly what the GoldenGate processes are doing and how they're behaving inside the database itself. And there's a Replication Performance Advisor package inside that database, and that's been integrated into the Web UI as well. So now you can actually get information out of the replication advisor package in Oracle directly from the UI without having to log into the database and try to run any database procedures to get it. We've also added the ability to support a per-PDB Extract. So in the past, when GoldenGate would run on a multitenant database, a multitenant database in Oracle, all the redo data from any pluggable database gets sent to that one redo stream. And so you would have to configure GoldenGate at the container or root level and it would be able to access anything in any PDB. Now, there's better security and better performance by doing what we call per-PDB Extract. And this means that for a single pluggable database, I can have an extract that runs at that database level that's going to capture information just from that pluggable database.
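For the per-PDB extract, the practical difference is that the extract's credential alias points directly at one pluggable database service instead of the container root. A hedged sketch of what that parameter file might look like (the alias, trail, and table names are invented):

  EXTRACT extsales
  -- the alias connects straight to the SALESPDB service rather than CDB$ROOT, so only that PDB's changes are captured
  USERIDALIAS salespdb_ogg
  EXTTRAIL pd
  TABLE sales.orders;
  TABLE sales.order_items;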
06:22 Lois: And what about non-Oracle environments, Nick? Nick: We've also enhanced the non-Oracle environments as well. For example, in Postgres, we've added support for precise instantiation using Postgres snapshots. This eliminates the need to handle collisions when you're doing Postgres to Postgres replication and initial instantiation. On the GoldenGate for big data side, we've renamed that product more aptly to distributed applications and analytics, which is really what it does, and we've added a whole bunch of new features here too. There's the ability to move data into Databricks and to do Google Pub/Sub delivery. We now have support for XAG within the GoldenGate for distributed applications and analytics. What that means is that now you can follow all of our MAA best practices for GoldenGate for Oracle, but it also works for the DAA product as well, meaning that if it's running on one node of a cluster and that node fails, it'll restart itself on another node in the cluster. We've also added the ability to deliver data to Redis, Google BigQuery, stage and merge functionality for better performance into the BigQuery product. And then we've added a completely new feature, and this is something called streaming data and apps and we're calling it AsyncAPI and CloudEvent data streaming. It's a long name, but what that means is that we now have the ability to publish changes from a GoldenGate trail file out to end users. And so, through the Web UI or through the REST API, you can now come into GoldenGate and, through the distributed applications and analytics product, actually set up a subscription to a GoldenGate trail file. And so this allows us to push data into messaging environments, or you can simply subscribe to changes and it doesn't have to be the whole trail file, it can just be a subset. You can specify exactly which tables and you can put filters on that. You can also set up your topologies as well. So, it's a really cool feature that we've added here. 08:26 Nikita: Ok, you’ve given us a lot of updates about what GoldenGate can support. But can we also get some specifics? Nick: So as far as what we have, on the Oracle Database side, there's a ton of different Oracle databases we support, including the Autonomous Databases and all the different flavors of them, your Oracle Database Appliance, your Base Database Service within OCI, and of course your Standard and Enterprise Editions, as well as all the different flavors of Exadata, are all supported with GoldenGate. This is all for capture and delivery. And this is all versions as well. GoldenGate supports Oracle 23ai and below. We also have a ton of non-Oracle databases and different Cloud stores. On the non-Oracle side, we support everything from application-specific databases like FairCom DB, all the way to more advanced applications like Snowflake, which has a vast user base. We also support a lot of different cloud stores and these, again, are non-Oracle, nonrelational systems, or they can be relational databases. We also support a lot of big data platforms and this is part of the distributed applications and analytics side of things where you have the ability to replicate to different Apache environments, different Cloudera environments. We also support a number of open-source systems, including things like Apache Cassandra, MySQL Community Edition, a lot of different Postgres open source databases along with MariaDB.
And then we have a bunch of streaming event products, NoSQL data stores, and even Oracle applications that we support. So there's absolutely a ton of different environments that GoldenGate supports. There are additional Oracle databases that we support and this includes the Oracle Metadata Service, as well as Oracle MySQL, including MySQL HeatWave. Oracle also has the Oracle NoSQL, Spatial and Graph, and TimesTen products, which again are all supported by GoldenGate. 10:23 Lois: Wow, that’s a lot of information! Nick: One of the things that we didn't really cover was the different SaaS applications, which we've got like Cerner, Fusion Cloud, Hospitality, Retail, MICROS, Oracle Transportation, JD Edwards, Siebel, and on and on and on. And again, because of the nature of GoldenGate, it's heterogeneous. Any source can talk to any target. And so it doesn't have to be, oh, I'm pulling from Oracle Fusion Cloud, that means I have to go to an Oracle Database on the target, not necessarily. 10:51 Lois: So, there’s really a massive amount of flexibility built into the system. 11:00 Unlock the power of AI Vector Search with our new course and certification. Get more accurate search results, handle complex datasets easily, and supercharge your data-driven decisions. From now through May 15, 2025, we are waiving the certification exam fee (valued at $245). Visit mylearn.oracle.com to enroll. 11:26 Nikita: Welcome back! Now that we’ve gone through the base product, what other features or products are in the GoldenGate family itself, Nick? Nick: So we have quite a few. We've kind of touched already on GoldenGate for Oracle databases and non-Oracle databases. We also have something called GoldenGate for Mainframe, which right now is covered under the GoldenGate for non-Oracle, but there is a licensing difference there. So that's something to be aware of. We also have the OCI GoldenGate product. We are announcing and we have announced that OCI GoldenGate will also be made available as part of the Oracle Database@Azure and Oracle Database@Google Cloud partnerships. And then you'll be able to use that vendor's cloud credits to actually pay for the OCI GoldenGate product. One of the cool things about this is it will have full feature parity with OCI GoldenGate running in OCI. So all the same features, all the same sources and targets, all the same topologies, being able to migrate data in and out of those clouds at will, just like you do with OCI GoldenGate today running in OCI. We have Oracle GoldenGate Free. This is a completely free edition of GoldenGate to use. It is limited in the number of platforms that it supports as far as sources and targets and the size of the database. 12:45 Lois: But it's a great way for developers to really experience GoldenGate without worrying about a license, right? What’s next, Nick? Nick: We have GoldenGate for Distributed Applications and Analytics, which was formerly called GoldenGate for big data, and that allows us to do all the streaming. That's also where the GoldenGate AsyncAPI integration is done. So in order to publish the GoldenGate trail files or allow people to subscribe to them, it would be covered under the Oracle GoldenGate Distributed Applications and Analytics license. We also have OCI GoldenGate Marketplace, which allows you to run essentially the on-premises version of GoldenGate but within OCI. So a little bit more flexibility there. It also has a hub architecture. So if you need that 99.99% availability, you can get it within the OCI Marketplace environment.
We have GoldenGate for Oracle Enterprise Manager Cloud Control, which used to be called Oracle Enterprise Manager. And this allows you to use Enterprise Manager Cloud Control to get all the statistics and details about GoldenGate. So all the reporting information, all the analytics, all the statistics, how fast GoldenGate is replicating, what's the lag, what's the performance of each of the processes, how much data am I sending across a network. All that's available within the plug-in. We also have Oracle GoldenGate Veridata. This is a nice utility and tool that allows you to compare two databases, whether or not GoldenGate is running between them and actually tell you, hey, these two systems are out of sync. And if they are out of sync, it actually allows you to repair the data too. 14:25 Nikita: That’s really valuable…. Nick: And it does this comparison without locking the source or the target tables. The other really cool thing about Veridata is it does this while there's data in flight. So let's say that the GoldenGate lag is 15 or 20 seconds and I want to compare this table that has 10 million rows in it. The Veridata product will go out, run its comparison once. Once that comparison is done the first time, it's then going to have a list of rows that are potentially out of sync. Well, some of those rows could have been moved over or could have been modified during that 10 to 15 second window. And so the next time you run Veridata, it's actually going to go through. It's going to check just those rows that were potentially out of sync to see if they're really out of sync or not. And if it comes back and says, hey, out of those potential rows, there's two out of sync, it'll actually produce a script that allows you to resynchronize those systems and repair them. So it's a very cool product. 15:19 Nikita: What about GoldenGate Stream Analytics? I know you mentioned it in the last episode, but in the context of this discussion, can you tell us a little more about it? Nick: This is the ability to essentially stream data from a GoldenGate trail file, and they do a real time analytics on it. And also things like geofencing or real-time series analysis of it. 15:40 Lois: Could you give us an example of this? Nick: If I'm working in tracking stock market information and stocks, it's not really that important on how much or how far down a stock goes. What's really important is how quickly did that stock rise or how quickly did that stock fall. And that's something that GoldenGate Stream Analytics product can do. Another thing that it's very valuable for is the geofencing. I can have an application on my phone and I can track where the user is based on that application and all that information goes into a database. I can then use the geofencing tool to say that, hey, if one of those users on that app gets within a certain distance of one of my brick-and-mortar stores, I can actually send them a push notification to say, hey, come on in and you can order your favorite drink just by clicking Yes, and we'll have it ready for you. And so there's a lot of things that you can do there to help upsell your customers and to get more revenue just through GoldenGate itself. And then we also have a GoldenGate Migration Utility, which allows customers to migrate from the classic architecture into the microservices architecture. 16:44 Nikita: Thanks Nick for that comprehensive overview. 
Lois: In our next episode, we’ll have Nick back with us to talk about commonly used terminology and the GoldenGate architecture. And if you want to learn more about what we discussed today, visit mylearn.oracle.com and take a look at the Oracle GoldenGate 23ai Fundamentals course. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 17:10 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
/episode/index/show/oracleuniversitypodcast/id/36102945
info_outline
What is Oracle GoldenGate 23ai?
04/29/2025
What is Oracle GoldenGate 23ai?
In a new season of the Oracle University Podcast, Lois Houston and Nikita Abraham dive into the world of Oracle GoldenGate 23ai, a cutting-edge software solution for data management. They are joined by Nick Wagner, a seasoned expert in database replication, who provides a comprehensive overview of this powerful tool. Nick highlights GoldenGate's ability to ensure continuous operations by efficiently moving data between databases and platforms with minimal overhead. He emphasizes its role in enabling real-time analytics, enhancing data security, and reducing costs by offloading data to low-cost hardware. The discussion also covers GoldenGate's role in facilitating data sharing, improving operational efficiency, and reducing downtime during outages. Oracle GoldenGate 23ai: Fundamentals: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi everyone! Welcome to a new season of the podcast. This time, we're focusing on the fundamentals of Oracle GoldenGate. Oracle GoldenGate helps organizations manage and synchronize their data across diverse systems and databases in real time. And with the new Oracle GoldenGate 23ai release, we'll uncover the latest innovations and features that empower businesses to make the most of their data. Nikita: Taking us through this is Nick Wagner, Senior Director of Product Management for Oracle GoldenGate. He's been doing database replication for about 25 years and has been focused on GoldenGate on and off for about 20 of those years. 01:18 Lois: In today's episode, we'll ask Nick to give us a general overview of the product, along with some use cases and benefits. Hi Nick! To start with, why do customers need GoldenGate? Nick: Well, it delivers continuous operations, being able to continuously move data from one database to another database or data platform in an efficient and high-speed manner, and it does this with very low overhead. Almost all the GoldenGate environments use transaction logs to pull the data out of the system, so we're not creating any additional triggers, and there's very little overhead on that source system. GoldenGate can also enable real-time analytics, being able to pull data from all these different databases and move it into your analytics system in real time can improve the value that those analytics systems provide. Being able to do real-time statistics and analysis of that data within those high-performance custom environments is really important. 02:13 Nikita: Does it offer any benefits in terms of cost? Nick: GoldenGate can also lower IT costs. A lot of times people run these massive OLTP databases, and they are running reporting in those same systems. With GoldenGate, you can offload some of the data or all the data to low-cost commodity hardware where you can then run the reports on that other system.
So, this way, you can get back that performance on the OLTP system, while at the same time optimizing your reporting environment for those long running reports. You can improve efficiencies and reduce risks. Being able to reduce the amount of downtime during planned and unplanned outages can really make a big benefit to the overall operational efficiencies of your company. 02:54 Nikita: What about when it comes to data sharing and data security? Nick: You can also reduce barriers to data sharing. Being able to pull subsets of data, or just specific pieces of data out of a production database and move it to the team or to the group that needs that information in real time is very important. And it also protects the security of your data by only moving in the information that they need and not the entire database. It also provides extensibility and flexibility, being able to support multiple different replication topologies and architectures. 03:24 Lois: Can you tell us about some of the use cases of GoldenGate? Where does GoldenGate truly shine? Nick: Some of the more traditional use cases of GoldenGate include use within the multicloud fabric. Within a multicloud fabric, this essentially means that GoldenGate can replicate data between on-premise environments, within cloud environments, or hybrid, cloud to on-premise, on-premise to cloud, or even within multiple clouds. So, you can move data from AWS to Azure to OCI. You can also move between the systems themselves, so you don't have to use the same database in all the different clouds. For example, if you wanted to move data from AWS Postgres into Oracle running in OCI, you can do that using Oracle GoldenGate. We also support maximum availability architectures. And so, there's a lot of different use cases here, but primarily geared around reducing your recovery point objective and recovery time objective. 04:20 Lois: Ah, reducing RPO and RTO. That must have a significant advantage for the customer, right? Nick: So, reducing your RPO and RTO allows you to take advantage of some of the benefits of GoldenGate, being able to do active-active replication, being able to set up GoldenGate for high availability, real-time failover, and it can augment your active Data Guard and Data Guard configuration. So, a lot of times GoldenGate is used within Oracle's maximum availability architecture platinum tier level of replication, which means that at that point you've got lots of different capabilities within the Oracle Database itself. But to help eke out that last little bit of high availability, you want to set up an active-active environment with GoldenGate to really get true zero RPO and RTO. GoldenGate can also be used for data offloading and data hubs. Being able to pull data from one or more source systems and move it into a data hub, or into a data warehouse for your operational reporting. This could also be your analytics environment too. 05:22 Nikita: Does GoldenGate support online migrations? Nick: In fact, a lot of companies actually get started in GoldenGate by doing a migration from one platform to another. Now, these don't even have to be something as complex as going from one database like a DB2 on-premise into an Oracle on OCI, it could even be simple migrations. A lot of times doing something like a major application or a major database version upgrade is going to take downtime on that production system. You can use GoldenGate to eliminate that downtime. 
So this could be going from Oracle 19c to Oracle 23ai, or going from application version 1.0 to application version 2.0, because GoldenGate can do the transformation between the different application schemas. You can use GoldenGate to migrate your database from on premise into the cloud with no downtime as well. We also support real-time analytic feeds, being able to go from multiple databases, not only those on premise, but being able to pull information from different SaaS applications inside of OCI and move it to your different analytic systems. And then, of course, we also have the ability to stream events and analytics within GoldenGate itself. 06:34 Lois: Let's move on to the various topologies supported by GoldenGate. I know GoldenGate supports many different platforms and can be used with just about any database. Nick: This first layer of topologies is what we usually consider relational database topologies. And so this would be moving data from Oracle to Oracle, Postgres to Oracle, Sybase to SQL Server, a lot of different types of databases. So the first architecture would be unidirectional. This is replicating from one source to one target. You can do this for reporting. If I wanted to offload some reports into another server, I can go ahead and do that using GoldenGate. I can replicate the entire database or just a subset of tables. I can also set up GoldenGate for bidirectional, and this is what I want to set up GoldenGate for something like high availability. So in the event that one of the servers crashes, I can almost immediately reconnect my users to the other system. And that almost immediately depends on the amount of latency that GoldenGate has at that time. So a typical latency is anywhere from 3 to 6 seconds. So after that primary system fails, I can reconnect my users to the other system in 3 to 6 seconds. And I can do that because as GoldenGate’s applying data into that target database, that target system is already open for read and write activity. GoldenGate is just another user connecting in issuing DML operations, and so it makes that failover time very low. 07:59 Nikita: Ok…If you can get it down to 3 to 6 seconds, can you bring it down to zero? Like zero failover time? Nick: That's the next topology, which is active-active. And in this scenario, all servers are read/write all at the same time and all available for user activity. And you can do multiple topologies with this as well. You can do a mesh architecture, which is where every server talks to every other server. This works really well for 2, 3, 4, maybe even 5 environments, but when you get beyond that, having every server communicate with every other server can get a little complex. And so at that point we start looking at doing what we call a hub and spoke architecture, where we have lots of different spokes. At the end of each spoke is a read/write database, and then those communicate with a hub. So any change that happens on one spoke gets sent into the hub, and then from the hub it gets sent out to all the other spokes. And through that architecture, it allows you to really scale up your environments. We have customers that are doing up to 150 spokes within that hub architecture. 
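Earlier, Nick mentioned that GoldenGate pulls changes from the database's transaction logs rather than adding triggers to the source. On an Oracle source, that capture mode has a few well-known prerequisites, and the statements below are a minimal sketch of them. They are standard Oracle SQL, but the exact setup depends on your database version and whether you run classic or microservices GoldenGate, so treat this as illustrative rather than a complete configuration.

```sql
-- Minimal source-side preparation for GoldenGate capture on an Oracle database
-- (illustrative only; consult the GoldenGate documentation for your release).

-- Allow GoldenGate processes to interact with the database.
ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION = TRUE SCOPE = BOTH;

-- Ensure all changes are written to the redo logs that Extract reads.
ALTER DATABASE FORCE LOGGING;

-- Add the supplemental log data Extract needs to reconstruct row changes.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Start a fresh redo log so the new settings apply to subsequent changes.
ALTER SYSTEM SWITCH LOGFILE;
```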
Within active-active replication as well, we can do conflict detection and resolution, which means that if two users modify the same row on two different systems, GoldenGate can actually determine that there was an issue with that and determine which user wins or which row change wins, which is extremely important when doing active-active replication. And this means that if one of those systems fails, there is no downtime when you switch your users to another active system because it's already available for activity and ready to go. 09:35 Lois: Wow, that's fantastic. Ok, tell us more about the topologies. Nick: GoldenGate can do other things like broadcast, sending data from one system to multiple systems, or many to one as far as consolidation. We can also do cascading replication, so when data moves from one environment that GoldenGate is replicating into another environment that GoldenGate is replicating. By default, we ignore all of our own transactions. But there's actually a toggle switch that you can flip that says, hey, GoldenGate, even though you wrote that data into that database, still push it on to the next system. And then of course, we can also do distribution of data, and this is more like moving data from a relational database into something like a Kafka topic or a JMS queue or into some messaging service. 10:24 Raise your game with the Oracle Cloud Applications skills challenge. Get free training on Oracle Fusion Cloud Applications, Oracle Modern Best Practice, and Oracle Cloud Success Navigator. Pass the free Oracle Fusion Cloud Foundations Associate exam to earn a Foundations Associate certification. Plus, there's a chance to win awards and prizes throughout the challenge! What are you waiting for? Join the challenge today by visiting oracle.com/education. 10:58 Nikita: Welcome back! Nick, does GoldenGate also have nonrelational capabilities? Nick: We have a number of nonrelational replication events and topologies as well. This includes things like data lake ingestion and streaming ingestion, being able to move data and data objects from these different relational database platforms into data lakes and into these streaming systems where you can run analytics on them and run reports. We can also do cloud ingestion, being able to move data from these databases into different cloud environments. And this is not only just moving it into relational databases with those clouds, but also their data lakes and data fabrics. 11:38 Lois: You mentioned a messaging service earlier. Can you tell us more about that? Nick: Messaging replication is also possible. So we can actually capture from things like messaging systems like Kafka Connect and JMS, replicate that into a relational database, or simply stream it into another environment. We also support NoSQL replication, being able to capture from MongoDB and replicate it onto another MongoDB for high availability or disaster recovery, or simply into any other system. 12:06 Nikita: I see. And is there any integration with a customer's SaaS applications? Nick: GoldenGate also supports a number of different OCI SaaS applications. And so a lot of these different applications like Oracle Financials Fusion, Oracle Transportation Management, they all have GoldenGate built under the covers and can be enabled with a flag so that you can actually have that data sent out to your other GoldenGate environment. So you can actually subscribe to changes that are happening in these other systems with very little overhead.
And then of course, we have event processing and analytics, and this is the final topology or flexibility within GoldenGate itself. And this is being able to push data through data pipelines, doing data transformations. GoldenGate is not an ETL tool, but it can do row-level transformation and row-level filtering. 12:55 Lois: Are there integrations offered by Oracle GoldenGate in automation and artificial intelligence? Nick: We can do time series analysis and geofencing using the GoldenGate Stream Analytics product. It allows you to actually do real time analysis and time series analysis on data as it flows through the GoldenGate trails. And then that same product, the GoldenGate Stream Analytics, can then take the data and move it to predictive analytics, where you can run ML on it, or ONNX or other Spark-type technologies, and do real-time analysis and AI on that information as it's flowing through. 13:29 Nikita: So, GoldenGate is extremely flexible. And given Oracle's focus on integrating AI into its product portfolio, what about GoldenGate? Does it offer any AI-related features, especially since the product name has "23ai" in it? Nick: With the advent of Oracle GoldenGate 23ai, it's one of the two products at this point that has the AI moniker at Oracle. Oracle Database 23ai also has it, and that means that we actually do stuff with AI. So the Oracle GoldenGate product can actually capture vectors from databases like MySQL HeatWave, Postgres using pgvector, which includes things like AlloyDB, Amazon RDS Postgres, Aurora Postgres. We can also replicate data into Elasticsearch and OpenSearch, or handle data that is using vectors within OCI or the Oracle Database itself. So GoldenGate can be used for a number of things here. The first one is being able to migrate vectors into the Oracle Database. So if you're using something like Postgres, MySQL, and you want to migrate the vector information into the Oracle Database, you can. Now one thing to keep in mind here is a vector is oftentimes like a GPS coordinate. So if I need to know the GPS coordinates of Austin, Texas, I can put in a latitude and longitude and it will give me the GPS coordinates of a building within that city. But if I also need to know the altitude of that same building, well, that's going to be a different algorithm. And GoldenGate and replicating vectors is the same way. When you create a vector, it's essentially just creating a bunch of numbers under the covers, kind of like those same GPS coordinates. The dimension and the algorithm that you use to generate that vector can be different across different databases, and if they are, the actual meaning of that data will change. And so GoldenGate can replicate the vector data as long as the algorithm and the dimensions are the same. If the algorithm and the dimensions are not the same between the source and the target, then you'll actually want GoldenGate to replicate the base data that created that vector. And then once GoldenGate replicates the base data, it'll actually call the vector embedding technology to re-embed that data and produce that numerical formatting for you. 15:42 Lois: So, there are some nuances there… Nick: GoldenGate can also replicate and consolidate vector changes or even do the embedding API calls itself. This is really nice because it means that we can take changes from multiple systems and consolidate them into a single one. We can also do the reverse of that too. A lot of customers are still trying to find out which algorithms work best for them.
How many dimensions? What's the optimal use? Well, you can now run those in different servers without impacting your actual AI system. Once you've identified which algorithm and dimension is going to be best for your data, you can then have GoldenGate replicate that into your production system and we'll start using that instead. So it's a nice way to switch algorithms without taking extensive downtime. 16:29 Nikita: What about in multicloud environments? Nick: GoldenGate can also do multicloud and N-way active-active Oracle replication between vectors. So if there's vectors in Oracle databases, in multiple clouds, or multiple on-premise databases, GoldenGate can synchronize them all up. And of course we can also stream changes from vector information, including text as well into different search engines. And that's where the integration with Elasticsearch and OpenSearch comes in. And then we can use things like NVIDIA and Cohere to actually do the AI on that data. 17:01 Lois: Using GoldenGate with AI in the database unlocks so many possibilities. Thanks for that detailed introduction to Oracle GoldenGate 23ai and its capabilities, Nick. Nikita: We’ve run out of time for today, but Nick will be back next week to talk about how GoldenGate has evolved over time and its latest features. And if you liked what you heard today, head over to mylearn.oracle.com and take a look at the Oracle GoldenGate 23ai Fundamentals course to learn more. Until next time, this is Nikita Abraham… Lois: And Lois Houston, signing off! 17:33 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
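To ground Nick's point about matching vector dimensions and algorithms: in Oracle Database 23ai, a vector column declares its dimension count and number format, and an embedding can be regenerated in SQL from the base data when source and target don't agree. The sketch below is illustrative only; the table, the 384-dimension FLOAT32 settings, and the doc_model ONNX model name are assumptions, and VECTOR_EMBEDDING requires an embedding model to have been loaded into the database beforehand (for example with DBMS_VECTOR.LOAD_ONNX_MODEL).

```sql
-- A vector column is only a candidate for direct replication if source and target
-- agree on the dimensions and format declared here (384 and FLOAT32 are examples).
CREATE TABLE doc_embeddings (
  doc_id    NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  content   CLOB,
  embedding VECTOR(384, FLOAT32)
);

-- If the target uses a different model or dimension, replicate the base data
-- (content) instead and re-embed it there. doc_model is a placeholder for an
-- ONNX embedding model previously loaded into this database; content is assumed
-- to fit within the model's input limits.
UPDATE doc_embeddings
   SET embedding = VECTOR_EMBEDDING(doc_model USING content AS data);
```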
/episode/index/show/oracleuniversitypodcast/id/36102830
info_outline
Integrating APEX with OCI AI Services
04/22/2025
Integrating APEX with OCI AI Services
Discover how Oracle APEX leverages OCI AI services to build smarter, more efficient applications. Hosts Lois Houston and Nikita Abraham interview APEX experts Chaitanya Koratamaddi, Apoorva Srinivas, and Toufiq Mohammed about how key services like OCI Vision, Oracle Digital Assistant, and Document Understanding integrate with Oracle APEX. Packed with real-world examples, this episode highlights all the ways you can enhance your APEX apps. Oracle APEX: Empowering Low Code Apps with AI: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Last week, we looked at how generative AI powers Oracle APEX and in today's episode, we're going to focus on integrating APEX with OCI AI Services. Lois: That's right, Niki. We're going to look at how you can use Oracle AI services like OCI Vision, Oracle Digital Assistant, Document Understanding, OCI Generative AI, and more to enhance your APEX apps. 01:03 Nikita: And to help us with it all, we've got three amazing experts with us, Chaitanya Koratamaddi, Director of Product Management at Oracle, and senior product managers, Apoorva Srinivas and Toufiq Mohammed. In today's episode, we'll go through each Oracle AI service and look at how it interacts with APEX. Apoorva, let's start with you. Can you explain what the OCI Vision service is? Apoorva: Oracle Cloud Infrastructure Vision is a serverless multi-tenant service accessible using the console or REST APIs. You can upload images to detect and classify objects in them. With prebuilt models available, developers can quickly build image recognition into their applications without machine learning expertise. OCI Vision service provides a fully managed model infrastructure. With complete integration with OCI Data Labeling, you can build custom models easily. OCI Vision service provides pretrained models: Image Classification, Object Detection, Face Detection, and Text Recognition. You can build custom models for Image Classification and Object Detection. 02:24 Lois: Ok. What about its use cases? How can OCI Vision make APEX apps more powerful? Apoorva: Using OCI Vision, you can make images and videos discoverable and searchable in your APEX app. You can use OCI Vision to detect and classify objects in the images. OCI Vision also highlights the objects using a red rectangular box. This comes in handy in use cases such as detecting vehicles that have violated the rules in traffic images. You can use OCI Vision to identify visual anomalies in your data. This is a very popular use case where you can detect anomalies in X-ray images to help detect cancer. These are some of the most popular use cases of using OCI Vision with your APEX app. But the possibilities are endless and you can use OCI Vision for any of your image analysis needs. 03:29 Nikita: Let's shift gears to Oracle Digital Assistant. Chaitanya, can you tell us what it's all about?
Chaitanya: Oracle Digital Assistant is a low-code conversational AI platform that allows businesses to build and deploy AI assistants. It provides natural language understanding, automatic speech recognition, and text-to-speech capabilities to enable human-like interactions with customers and employees. Oracle Digital Assistant comes with prebuilt templates for you to get started. 04:00 Lois: What are its key features and benefits, Chaitanya? How does it enhance the user experience? Chaitanya: Oracle Digital Assistant provides conversational AI capabilities that include generative AI features, natural language understanding and ML, AI-powered voice, and analytics and insights. Integration with enterprise applications becomes easier with a unified conversational experience, prebuilt chatbots for Oracle Cloud applications, and chatbot architecture frameworks. Oracle Digital Assistant provides advanced conversational design tools, a conversational designer, a dialogue and domain trainer, and native multilingual support. Oracle Digital Assistant is open, scalable, and secure. It provides multi-channel support, automated bot-to-agent transfer, and integrated authentication profiles. 04:56 Nikita: And what about the architecture? What happens at the back end? Chaitanya: Developers assemble digital assistants from one or more skills. Skills can be based on prebuilt skills provided by Oracle or third parties, custom developed, or based on one of the many skill templates available. 05:16 Lois: Chaitanya, what exactly are "skills" within the Oracle Digital Assistant framework? Chaitanya: Skills are individual chatbots that are designed to interact with users and fulfill specific types of tasks. Each skill helps a user complete a task through a combination of text messages and simple UI elements like select lists. When a user request is submitted through a channel, the Digital Assistant routes the user's request to the most appropriate skill to satisfy the user's request. Skills can combine a multilingual NLP deep learning engine, a powerful dialog flow engine, and integration components to connect to back-end systems. Skills provide a modular way to build your chatbot functionality. Now users connect with a chatbot through channels such as Facebook, Microsoft Teams, or in our case, an Oracle APEX chatbot, which is embedded into an APEX application. 06:21 Nikita: That's fascinating. So, what are some use cases of Oracle Digital Assistant in APEX apps? Chaitanya: Digital assistants streamline approval processes by collecting information, routing requests, and providing status updates. Digital assistants offer instant access to information and documentation, answering common questions and guiding users. Digital assistants assist sales teams by automating tasks, responding to inquiries, and guiding prospects through the sales funnel. Digital assistants facilitate procurement by managing orders, tracking deliveries, and handling supplier communication. Digital assistants simplify expense approvals by collecting reports, validating receipts, and routing them for managerial approval. Digital assistants manage inventory by tracking stock levels, reordering supplies, and providing real-time inventory updates. Digital assistants have become a common UX feature in any enterprise application. 07:28 Want to learn how to design stunning, responsive enterprise applications directly from your browser with minimal coding?
The new Oracle APEX Developer Professional learning path and certification enables you to leverage AI-assisted development, including generative AI and Database 23ai, to build secure, scalable web and mobile applications with advanced AI-powered features. From now through May 15, 2025, we're waiving the certification exam fee (valued at $245). So, what are you waiting for? Visit mylearn.oracle.com to get started today. 08:09 Nikita: Welcome back! Thanks for that, Chaitanya. Toufiq, let's talk about the OCI Document Understanding service. What is it? Toufiq: Using this service, you can upload documents to extract text, tables, and other key data. This means the service can automatically identify and extract relevant information from various types of documents, such as invoices, receipts, contracts, etc. The service is serverless and multitenant, which means you don't need to manage any servers or infrastructure. You can access this service using the console, REST APIs, SDK, or CLI, giving you multiple ways to integrate. 08:55 Nikita: What do we use for APEX apps? Toufiq: For APEX applications, we will be using REST APIs to integrate the service. Additionally, you can process individual files or batches of documents using the ProcessorJob API endpoint. This flexibility allows you to handle different volumes of documents efficiently, whether you need to process a single document or thousands at once. With these capabilities, the OCI Document Understanding service can significantly streamline your document processing tasks, saving time and reducing the potential for manual errors. 09:36 Lois: Ok. What are the different types of models available? How do they cater to various business needs? Toufiq: Let us start with pre-trained models. These are ready-to-use models that come right out of the box, offering a range of functionalities. The available models are: Optical Character Recognition (OCR), which enables the service to extract text from documents, allowing you to digitize scanned documents effortlessly and precisely extract text content; Key-value extraction, useful in streamlining tasks like invoice processing; Table extraction, which can intelligently extract tabular data from documents; Document classification, which automatically categorizes documents based on their content; and OCR PDF, which enables seamless extraction of text from PDF files. Now, what if your business needs go beyond these pre-trained models? That's where custom models come into play. You have the flexibility to train and build your own models on top of these foundational pre-trained models. Models available for training are key-value extraction and document classification. 10:50 Nikita: What does the architecture look like for OCI Document Understanding? Toufiq: You can ingest or supply the input file in two different ways. You can upload the file to an OCI Object Storage location. And in your request, you can point the Document Understanding service to pick the file from this Object Storage location. Alternatively, you can upload a file directly from your computer. Once the file is uploaded, the Document Understanding service can process the file and extract key information using the pre-trained models. You can also customize models to tailor the extraction to your data or use case. After processing the file, the Document Understanding service stores the results in JSON format in the Object Storage output bucket.
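To make the REST flow Toufiq describes a little more concrete, here is a minimal PL/SQL sketch of submitting a processor job from an APEX application with APEX_WEB_SERVICE; the transcript picks up next with how the application reads the JSON results back. The region in the URL, the API version segment, the payload shape, and the OCI_CRED credential name are assumptions for illustration only, so confirm the exact contract against the OCI Document Understanding REST reference.

```sql
DECLARE
  l_body     CLOB;
  l_response CLOB;
BEGIN
  -- The service expects a JSON request body.
  apex_web_service.g_request_headers.delete;
  apex_web_service.g_request_headers(1).name  := 'Content-Type';
  apex_web_service.g_request_headers(1).value := 'application/json';

  -- Illustrative ProcessorJob payload: read invoice.pdf from one bucket and
  -- write the extracted-text JSON results to another (all names are placeholders).
  l_body := q'[{
    "compartmentId": "ocid1.compartment.oc1..example",
    "processorConfig": {
      "processorType": "GENERAL",
      "features": [ { "featureType": "TEXT_EXTRACTION" } ]
    },
    "inputLocation": {
      "sourceType": "OBJECT_STORAGE_LOCATIONS",
      "objectLocations": [
        { "namespaceName": "mytenancy",
          "bucketName":    "incoming-docs",
          "objectName":    "invoice.pdf" }
      ]
    },
    "outputLocation": {
      "namespaceName": "mytenancy",
      "bucketName":    "doc-results",
      "prefix":        "invoices"
    }
  }]';

  -- Endpoint host and API version are assumptions; OCI_CRED is a placeholder
  -- for an OCI web credential defined in the APEX workspace.
  l_response := apex_web_service.make_rest_request(
    p_url                  => 'https://document.aiservice.us-ashburn-1.oci.oraclecloud.com'
                              || '/20221109/processorJobs',
    p_http_method          => 'POST',
    p_body                 => l_body,
    p_credential_static_id => 'OCI_CRED');

  -- The response describes the job; the extracted results land as JSON files
  -- in the output bucket, ready for the APEX app to read and parse.
  dbms_output.put_line(l_response);
END;
/
```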
Your Oracle APEX application can then read the JSON file from the Object Storage output location, parse the JSON, and store useful information in a local table or display it on the screen for the end user. 11:52 Lois: And what about use cases? How are various industries using this service? Toufiq: In financial services, you can utilize Document Understanding to extract data from financial statements, classify and categorize transactions, identify and extract payment details, streamline tax document management. Under manufacturing, you can perform text extraction from shipping labels and bill of lading documents, extract data from production reports, identify and extract vendor details. In the healthcare industry, you can automatically process medical claims, extract patient information from forms, classify and categorize medical records, identify and extract diagnostic codes. This is not an exhaustive list, but provides insights into some industry-specific use cases for Document Understanding. 12:50 Nikita: Toufiq, let's switch to the big topic everyone's excited about—the OCI Generative AI Service. What exactly is it? Toufiq: OCI Generative AI is a fully managed service that provides a set of state-of-the-art, customizable large language models that cover a wide range of use cases. It provides enterprise-grade generative AI with data governance and security, which means only you have access to your data and custom-trained models. OCI Generative AI provides pre-trained out-of-the-box LLMs for text generation, summarization, and text embedding. OCI Generative AI also provides necessary tools and infrastructure to define models with your own business knowledge. 13:37 Lois: Generally speaking, how is OCI Generative AI useful? Toufiq: It supports various large language models. New models available from Meta and Cohere include Llama2 developed by Meta, and Cohere's Command model, their flagship text generation model. Additionally, Cohere offers the Summarize model, which provides high-quality summaries, accurately capturing essential information from documents, and the Embed model, which converts text to a vector embedding representation. OCI Generative AI also offers dedicated AI clusters, enabling you to host foundational models on private GPUs. It integrates LangChain, an open-source framework for developing new interfaces for generative AI applications powered by language models. Moreover, OCI Generative AI facilitates generative AI operations, providing content moderation controls, zero downtime endpoint model swaps, and endpoint deactivation and activation capabilities. For each model endpoint, OCI Generative AI captures a series of analytics, including call statistics, tokens processed, and error counts. 14:58 Nikita: What about the architecture? How does it handle user input? Toufiq: Users can input natural language, input/output examples, and instructions. The LLM analyzes the text and can generate, summarize, transform, extract information, or classify text according to the user's request. The response is sent back to the user in the specified format, which can include raw text or formatting like bullets and numbering, etc. 15:30 Lois: Can you share some practical use cases for generative AI in APEX apps? Toufiq: Some of the OCI generative AI use cases for your Oracle APEX apps include text summarization. Generative AI can quickly summarize lengthy documents such as articles, transcripts, doctor's notes, and internal documents.
Businesses can utilize generative AI to draft marketing copy, emails, blog posts, and product descriptions efficiently. Generative AI-powered chatbots are capable of brainstorming, problem solving, and answering questions. With generative AI, content can be rewritten in different styles or languages. This is particularly useful for localization efforts and catering to diverse audiences. Generative AI can classify intent in customer chat logs, support tickets, and more. This helps businesses understand customer needs better and provide tailored responses and solutions. By searching call transcripts and internal knowledge sources, generative AI enables businesses to efficiently answer user queries. This enhances information retrieval and decision-making processes. 16:47 Lois: Before we let you go, can you explain what Select AI is? How is it different from the other AI services? Toufiq: Select AI is a feature of Autonomous Database. This is where Select AI differs from the other AI services. Be it OCI Vision, Document Understanding, or OCI Generative AI, these are all fully managed standalone services on Oracle Cloud, accessible via REST APIs. Whereas Select AI is a feature available in Autonomous Database. That means to use Select AI, you need Autonomous Database. 17:26 Nikita: And what can developers do with Select AI? Toufiq: Traditionally, SQL is the language used to query the data in the database. With Select AI, you can talk to the database and get insights from the data in the database using human language. At its most basic, what Select AI does is generate SQL queries from natural language, like an NL2SQL capability. 17:52 Nikita: How does it actually do that? Toufiq: When a user asks a question, the first step Select AI does is look into the AI profile, which you, as a developer, define. The AI profile holds crucial information, such as table names, the LLM provider, and the credentials needed to authenticate with the LLM service. Next, Select AI constructs a prompt. This prompt includes information from the AI profile and the user's question. Essentially, it's a packet of information containing everything the LLM service needs to generate SQL. The next step is generating SQL using the LLM. The prompt prepared by Select AI is sent to the available LLM services via REST. Which LLM to use is configured in the AI profile. The supported providers are OpenAI, Cohere, Azure OpenAI, and OCI Generative AI. Once the SQL is generated by the LLM service, it is returned to the application. The app can then handle the SQL query in various ways, such as displaying the SQL results in a report format or as charts, etc. 19:05 Lois: This has been an incredible discussion! Thank you, Chaitanya, Apoorva, and Toufiq, for walking us through all of these amazing AI tools. If you're ready to dive deeper, visit mylearn.oracle.com and search for the Oracle APEX: Empowering Low Code Apps with AI course. You'll find step-by-step guides and demos for everything we covered today. Nikita: Until next week, this is Nikita Abraham… Lois: And Lois Houston signing off! 19:31 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
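As a rough sketch of the Select AI flow Toufiq outlines, you create an AI profile, make it active for the session, and then ask a question in natural language. The credential, profile, and table names below are placeholders, and the attribute values depend on which of the supported providers you configure (the "oci" value shown is intended for OCI Generative AI); Select AI statements are interpreted by Autonomous Database and are typically run from clients such as SQLcl or Database Actions, so check the DBMS_CLOUD_AI documentation for your environment.

```sql
-- 1. Create an AI profile: which LLM provider and credential to use, and which
--    tables Select AI may describe in the prompt it builds (placeholders throughout).
BEGIN
  DBMS_CLOUD_AI.CREATE_PROFILE(
    profile_name => 'HR_AI_PROFILE',
    attributes   => '{"provider": "oci",
                      "credential_name": "GENAI_CRED",
                      "object_list": [{"owner": "HR", "name": "EMPLOYEES"},
                                      {"owner": "HR", "name": "DEPARTMENTS"}]}');
END;
/

-- 2. Make the profile active for this session.
BEGIN
  DBMS_CLOUD_AI.SET_PROFILE('HR_AI_PROFILE');
END;
/

-- 3. Ask a question in natural language. showsql returns the generated SQL so you
--    can inspect it; runsql (the default) executes it, and narrate explains the result.
SELECT AI showsql how many employees work in each department;
```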
/episode/index/show/oracleuniversitypodcast/id/35981580
info_outline
AI-Assisted Development in Oracle APEX
04/15/2025
AI-Assisted Development in Oracle APEX
Get ready to explore how generative AI is transforming development in Oracle APEX. In this episode, hosts Lois Houston and Nikita Abraham are joined by Oracle APEX experts Apoorva Srinivas and Toufiq Mohammed to break down the innovative features of APEX 24.1. Learn how developers can use APEX Assistant to build apps, generate SQL, and create data models using natural language prompts. Oracle APEX: Empowering Low Code Apps with AI: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome back to another episode of the Oracle University Podcast! I’m Nikita Abraham, Team Lead of Editorial Services with Oracle University, and I’m joined by Lois Houston, Director of Innovation Programs. Lois: Hi everyone! In our last episode, we spoke about Oracle APEX and AI. We covered the data and AI -centric challenges businesses are up against and explored how AI fits in with Oracle APEX. Niki, what’s in store for today? Nikita: Well, Lois, today we’re diving into how generative AI powers Oracle APEX. With APEX 24.1, developers can use the Create Application Wizard to tell APEX what kind of application they want to build based on available tables. Plus, APEX Assistant helps create, refine, and debug SQL code in natural language. 01:16 Lois: Right. Today’s episode will focus on how generative AI enhances development in APEX. We’ll explore its architecture, the different AI providers, and key use cases. Joining us are two senior product managers from Oracle—Apoorva Srinivas and Toufiq Mohammed. Thank you both for joining us today. We’ll start with you, Apoorva. Can you tell us a bit about the generative AI service in Oracle APEX? Apoorva: It is nothing but an abstraction to the popular commercial Generative AI products, like OCI Generative AI, OpenAI, and Cohere. APEX makes use of the existing REST infrastructure to authenticate using the web credentials with Generative AI Services. Once you configure the Generative AI Service, it can be used by the App Builder, AI Assistant, and AI Dynamic Actions, like Show AI Assistant and Generate Text with AI, and also the APEX_AI PL/SQL API. You can enable or disable the Generative AI Service on the APEX instance level and on the workspace level. 02:31 Nikita: Ok. Got it. So, Apoorva, which AI providers can be configured in the APEX Gen AI service? Apoorva: First is the popular OpenAI. If you have registered and subscribed for an OpenAI API key, you can just enter the API key in your APEX workspace to configure the Generative AI service. APEX makes use of the chat completions endpoint in OpenAI. Second is the OCI Generative AI Service. Once you have configured an OCI API key on Oracle Cloud, you can make use of the chat models. The chat models are available from Cohere family and Meta Llama family. The third is the Cohere. The configuration of Cohere is similar to OpenAI. You need to have your Cohere OpenAI key. And it provides a similar chat functionality using the chat endpoint. 03:29 Lois: What is the purpose of the APEX_AI PL/SQL public API that we now have? How is it used within the APEX ecosystem? 
Apoorva: It models the chat operation of the popular Generative AI REST Services. This is the same package used internally by the chat widget of the APEX Assistant. There are more procedures around consent management, which you can configure using this package. 03:58 Lois: Apoorva, at a high level, how does generative AI fit into the APEX environment? Apoorva: APEX makes use of the existing REST infrastructure—that is the web credentials and remote server—to configure the Generative AI Service. The inferencing is done by the backend Generative AI Service. For the Generative AI use case in APEX, such as NL2SQL and creation of an app, APEX performs the prompt enrichment. 04:29 Nikita: And what exactly is prompt enrichment? Apoorva: Let's say you provide a prompt saying "show me the average salary of employees in each department." APEX will take this prompt and enrich it by adding in more details. It elaborates on the prompt by mentioning the requirements, such as Oracle SQL syntax statement, and providing some metadata from the data dictionary of APEX. Once the prompt enrichment is complete, it is then passed on to the LLM inferencing service. Therefore, the SQL query provided by the AI Assistant is more accurate and in context. 05:15 Unlock the power of AI Vector Search with our new course and certification. Get more accurate search results, handle complex datasets easily, and supercharge your data-driven decisions. From now to May 15, 2025, we are waiving the certification exam fee (valued at $245). Visit mylearn.oracle.com to enroll. 05:41 Nikita: Welcome back! Let’s talk use cases. Apoorva, can you share some ways developers can use generative AI with APEX? Apoorva: SQL is an integral part of building APEX apps. You use SQL everywhere. You can make use of the NL2SQL feature in the code editor by using the APEX Assistant to generate SQL queries while building the apps. The second is the prompt-based app creation. With APEX Assistant, you can now generate fully functional APEX apps by providing prompts in natural language. Third is the AI Assistant, which is a chat widget provided by APEX in all the code editors and for creation of apps. You can chat with the AI Assistant by providing your prompts and get responses from the Generative AI Services. 06:37 Lois: Without getting too technical, can you tell us how to create a data model using AI? Apoorva: A SQL Workshop utility called Create Data Model Using AI uses AI to help you create your own data model. The APEX Assistant generates a script to create tables, triggers, and constraints in either Oracle SQL or Quick SQL format. You can also insert sample data into these tables. But before you use this feature, you must create a generative AI service and enable the Used by App Builder setting. If you are using the Oracle SQL format, when you click on Create SQL Script, APEX generates the script and brings you to this script editor page. Whereas if you are using the Quick SQL format, when you click on Review Quick SQL, APEX generates the Quick SQL code and brings you to the Quick SQL page. 07:39 Lois: And to see a detailed demo of creating a custom data model with the APEX Assistant, visit mylearn.oracle.com and search for the "Oracle APEX: Empowering Low Code Apps with AI" course. Apoorva, what about creating an APEX app from a prompt. What’s that process like? Apoorva: APEX 24.1 introduces a new feature where you can generate an application blueprint based on a prompt using natural language. 
The APEX Assistant leverages the APEX Dictionary Cache to identify relevant tables while suggesting the pages to be created for your application. You can iterate over the application design by providing further prompts using natural language and then generating an application based on your needs. Once you are satisfied, you can click on Create Application, which takes you to the Create Application Wizard in APEX, where you can further customize your application, such as the application icon and other features, and finally, go ahead to create your application. 08:53 Nikita: Again, you can watch a demo of this on MyLearn. So, check that out if you want to dive deeper. Lois: That's right, Niki. Thank you for these great insights, Apoorva! Now, let's turn to Toufiq. Toufiq, can you tell us more about the APEX Assistant feature in Oracle APEX? What is it and how does it work? Toufiq: APEX Assistant is available in Code Editors in the APEX App Builder. It leverages generative AI services as the backend to answer your questions asked in natural language. APEX Assistant makes use of the APEX dictionary cache to identify relevant tables while generating SQL queries. Using the Query Builder mode of the Assistant, you can generate SQL queries from natural language for Form, Report, and other region types that support SQL queries. Using the general assistance mode, you can generate PL/SQL, JavaScript, HTML, or CSS code, and seek further assistance from generative AI. For example, you can ask the APEX Assistant to optimize the code, format the code for better readability, add comments, etc. APEX Assistant also comes with two quick actions, Improve and Explain, which can help users improve and understand the selected code. 10:17 Nikita: What about the Show AI Assistant dynamic action? I know that it provides an AI chat interface, but can you tell us a little more about it? Toufiq: It is a native dynamic action in Oracle APEX which renders an AI chat user interface. It leverages the generative AI services that are configured under Workspace utilities. This AI chat user interface can be rendered inline or as a dialog. This dynamic action also has configurable system prompt and welcome message attributes. 10:52 Lois: Are there attributes you can configure to leverage even more customization? Toufiq: The first attribute is the initial prompt. The initial prompt represents a message as if it were coming from the user. This can either be a specific item value or a value derived from a JavaScript expression. The next attribute is use response. This attribute determines how the AI Assistant should return responses. The term response refers to the message content of an individual chat message. You have the option to capture this response directly into a page item, or to process it based on more complex logic using JavaScript code. The final attribute is quick actions. A quick action is a predefined phrase that, once clicked, will be sent as a user message. Quick actions defined here show up as chips in the AI chat interface, which a user can click to send the message to the Generative AI service without having to manually type in the message. 12:05 Lois: Thank you, Toufiq and Apoorva, for joining us today. Like we were saying, there's a lot more you can find in the "Oracle APEX: Empowering Low Code Apps with AI" course on MyLearn. So, make sure you go check that out. Nikita: Join us next week for a discussion on how to integrate APEX with OCI AI Services.
Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 12:28 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
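To picture the NL2SQL flow Apoorva described in this episode, where the prompt "show me the average salary of employees in each department" is enriched with schema metadata and sent to the LLM, the assistant would come back with an ordinary SQL statement along these lines. This assumes the familiar HR sample schema; your tables and columns will differ.

```sql
-- Roughly what the APEX Assistant might return for that prompt,
-- assuming the HR sample schema (EMPLOYEES and DEPARTMENTS).
SELECT d.department_name,
       ROUND(AVG(e.salary), 2) AS average_salary
  FROM employees   e
  JOIN departments d
    ON d.department_id = e.department_id
 GROUP BY d.department_name
 ORDER BY d.department_name;
```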
/episode/index/show/oracleuniversitypodcast/id/35981480
info_outline
Unlocking the Power of Oracle APEX and AI
04/08/2025
Unlocking the Power of Oracle APEX and AI
Lois Houston and Nikita Abraham kick off a new season of the podcast, exploring how Oracle APEX integrates with AI to build smarter low-code applications. They are joined by Chaitanya Koratamaddi, Director of Product Management at Oracle, who explains the basics of Oracle APEX, its global adoption, and the challenges it addresses for businesses managing and integrating data. They also explore real-world use cases of AI within the Oracle APEX ecosystem. Oracle APEX: Empowering Low Code Apps with AI: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ----------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Thank you for joining us as we begin a new season of the podcast, this time focused on Oracle APEX and how it integrates with AI to help you create powerful applications. This season is for everyone—from beginners and SQL developers to DBAs, data scientists, and low-code enthusiasts. So, if you're interested in using Oracle APEX to build low-code applications that have custom generative AI features, you'll want to stay tuned in. 01:07 Lois: That's right, Niki. Today, we're going to discuss Oracle APEX at a high level, starting with what it is. Then, we'll cover a few business challenges related to data and AI innovation that organizations face, and learn how the powerful combination of APEX and AI can help overcome these challenges. 01:27 Nikita: To take us through it all, we've got Chaitanya Koratamaddi with us. Chaitanya is Director of Product Management for Oracle APEX. Hi Chaitanya! For anyone new to Oracle APEX, can you explain what it is and why it's so widely used? Chaitanya: Oracle APEX is the world's most popular enterprise low code application platform. APEX enables you to build secure and scalable enterprise-scale applications with world class features that can be deployed anywhere, cloud or on-premises. And with APEX, you can build applications 20 times faster with 100 times less code. APEX delivers the most productive way to develop and deploy mobile and web applications everywhere. 02:18 Lois: That's impressive. So, what's the adoption rate like for Oracle APEX? Chaitanya: As of today, there are 19 million plus APEX applications created globally. 5,000 plus APEX applications are created on a daily basis and there are 800,000 plus APEX developers worldwide. There are 60,000 plus customers in 150 countries across various industry verticals. And 75% of Fortune 500 companies use Oracle APEX. 02:56 Nikita: Wow, the numbers really speak for themselves, right? But Chaitanya, why are organizations adopting Oracle APEX at this scale? Or to put it differently, what's the core business challenge that Oracle APEX is addressing? Chaitanya: From databases to all data, you know that the world is more connected and automated than ever. To drive new business value, organizations need to explore and exploit new sources of data that are generated from this connected world.
That can be sounds, feeds, sensors, videos, images, and more. Businesses need to be able to work with all types of data and also make sure that it is available to be used together. Typically, businesses need to work on all data at a massive scale. For example, supply chains are no longer dependent just on inventory, demand, and order management signals. A manufacturer should be able to understand data describing global weather patterns and how it impacts their supply chains. Businesses need to pull in data from as many social sources as possible to understand how customer sentiment impacts product sales and corporate brands. Our customers need a data platform that ensures all this data works together seamlessly and easily. 04:38 Lois: So, you’re saying Oracle APEX is the platform that helps businesses manage and integrate data seamlessly. But data is just one part of the equation, right? Then there’s AI. How are the two related? Chaitanya: Before we start talking about Oracle AI, let's first talk about what customers are looking for and where they are struggling within their AI innovation. It all starts with data. For decades, working with data has largely involved dealing with structured data, whether it is your customer records in your CRM application and orders from your ERP database. Data was organized into database and tables, and when you needed to find some insights in your data, all you need to do is just use stored procedures and SQL queries to deliver the answers. But today, the expectations are higher. You want to use AI to construct sophisticated predictions, find anomalies, make decisions, and even take actions autonomously. And the data is far more complicated. It is in an endless variety of formats scattered all over your business. You need tools to find this data, consume it, and easily make sense of it all. And now capabilities like natural language processing, computer vision, and anomaly detection are becoming very essential just like how SQL queries used to be. You need to use AI to analyze phone call transcripts, support tickets, or email complaints so you can understand what customers need and how they feel about your products, customer service, and brand. You may want to use a data source as noisy and unstructured as social media data to detect trends and identify issues in real time. Today, AI capabilities are very essential to accelerate innovation, assess what's happening in your business, and most importantly, exceed the expectations of your customers. So, connecting your application, data, and infrastructure allows everyone in your business to benefit from data. 07:32 Raise your game with the Oracle Cloud Applications skills challenge. Get free training on Oracle Fusion Cloud Applications, Oracle Modern Best Practice, and Oracle Cloud Success Navigator. Pass the free Oracle Fusion Cloud Foundations Associate exam to earn a Foundations Associate certification. Plus, there’s a chance to win awards and prizes throughout the challenge! What are you waiting for? Join the challenge today by visiting oracle.com/education. 08:06 Nikita: Welcome back! So, let’s focus on AI across the Oracle Cloud ecosystem. How does Oracle bring AI into the mix to connect applications, data, and infrastructure for businesses? 
Chaitanya: By embedding AI throughout the entire technology stack from the infrastructure that businesses run on through the applications for every line of business, from finance to supply chain and HR, Oracle is helping organizations pragmatically use AI to improve performance while saving time, energy, and resources. Our core cloud infrastructure includes a unique AI infrastructure layer based on our supercluster technology, leveraging the latest and greatest hardware and uniquely able to get the maximum out of the AI infrastructure technology for scenarios such as large language processing. Then there is generative AI and ML for data platforms. On top of the AI infrastructure, our database layer embeds AI in our products such as autonomous database. With autonomous database, you can leverage large language models to use natural language queries rather than writing a SQL when interacting with the autonomous database. This enables you to achieve faster adoption in your application development. Businesses and their customers can use the Select AI natural language interface combined with Oracle Database AI Vector Search to obtain quicker, more intuitive insights into their own data. Then we have AI services. AI services are a collection of offerings, including generative AI with pre-built machine learning models that make it easier for developers to apply AI to applications and business operations. The models can be custom-trained for more accurate business results. 10:17 Nikita: And what specific AI services do we have at Oracle, Chaitanya? Chaitanya: We have Oracle Digital Assistant Speech, Language, Vision, and Document Understanding. Then we have Oracle AI for Applications. Oracle delivers AI built for business, helping you make better decisions faster and empowering your workforce to work more effectively. By embedding classic and generative AI into its applications, Fusion Apps customers can instantly access AI outcomes wherever they are needed without leaving the software environment they use every day to power their business. 11:02 Lois: Let’s talk specifically about APEX. How does APEX use the Gen AI and machine learning models in the stack to empower developers. How does it help them boost productivity? Chaitanya: Starting APEX 24.1, you can choose your preferred large language models and leverage native generative AI capabilities of APEX for AI assistants, prompt-based application creation, and more. Using native OCI capabilities, you can leverage native platform capabilities from OCI, like AI infrastructure and object storage, etc. Oracle APEX running on autonomous infrastructure in Oracle Cloud leverages its unique native generative AI capabilities tuned specifically on your data. These language models are schema aware, data aware, and take into account the shape of information, enabling your applications to take advantage of large language models pre-trained on your unique data. You can give your users greater insights by leveraging native capabilities, including vector-based similarity search, content summary, and predictions. You can also incorporate powerful AI features to deliver personalized experiences and recommendations, process natural language prompts, and more by integrating directly with a suite of OCI AI services. 12:38 Nikita: Can you give us some examples of this? Chaitanya: You can leverage OCI Vision to interpret visual and text inputs, including image recognition and classification. 
Or you can use OCI Speech to transcribe and understand spoken language, making both image and audio content accessible and actionable. You can work with disparate data sources like JSON, spatial data, graphs, and vectors, and build AI capabilities around your own business data. So, low-code application development with APEX along with AI is a very powerful combination. 13:22 Nikita: What are some use cases of AI-powered Oracle APEX applications? Chaitanya: You can build APEX applications to include conversational chatbots. Your APEX applications can include image and object detection capability. Your APEX applications can include speech transcription capability. And in your applications, you can include code generation capability, that is, natural language to SQL conversion. Your applications can be powered by semantic search capability. Your APEX applications can include text generation capability. 14:00 Lois: So, there’s really a lot we can do! Thank you, Chaitanya, for joining us today. With that, we’re wrapping up this episode. We covered Oracle APEX, the key challenges businesses face when it comes to AI innovation, and how APEX and AI work together to give businesses an AI edge. Nikita: Yeah, and if you want to know more about Oracle APEX, visit mylearn.oracle.com and search for the Oracle APEX: Empowering Low Code Apps with AI course. Join us next week for a discussion on AI-assisted development in Oracle APEX. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 14:39 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
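To ground the semantic search use case mentioned in this episode, here is a minimal sketch of a vector similarity query against Oracle Database 23ai using the python-oracledb driver. The table, columns, connection details, and the pre-computed query embedding are assumptions for illustration; in practice the embedding would come from whichever embedding model you have configured.

```python
# Minimal sketch: similarity search over a VECTOR column in Oracle Database 23ai.
# The DOCS(id, title, embedding) table and the query embedding are illustrative.
import oracledb

conn = oracledb.connect(
    user="app_user",                     # placeholder credentials
    password="app_password",
    dsn="dbhost.example.com/freepdb1",   # placeholder Easy Connect string
)

# Normally produced by an embedding model; shown here as a literal vector string.
query_embedding = "[0.12, 0.03, 0.88]"

sql = """
    SELECT id, title
    FROM docs
    ORDER BY VECTOR_DISTANCE(embedding, TO_VECTOR(:qv), COSINE)
    FETCH FIRST 5 ROWS ONLY
"""

with conn.cursor() as cur:
    for row_id, title in cur.execute(sql, qv=query_embedding):
        print(row_id, title)

conn.close()
```

The ORDER BY VECTOR_DISTANCE plus FETCH FIRST pattern is the core of the similarity search; everything around it is ordinary SQL plumbing.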
/episode/index/show/oracleuniversitypodcast/id/35981365
Raise Your Game with Oracle Cloud Applications
04/01/2025
Raise Your Game with Oracle Cloud Applications
In this special episode of the Oracle University Podcast, Bill Lawson and Nikita Abraham chat with Peter Fernandez, Senior Director of Cloud Certification at Oracle University, about the exciting new Raise Your Game challenge. They discuss how the initiative is designed to enhance participants' skills in Oracle Fusion Cloud Applications and Oracle Cloud Success Navigator. They also cover key details about the challenge, such as how to get started, who can participate, the way it is structured, and the prizes up for grabs. Raise Your Game: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Bill: Welcome to the Oracle University Podcast. I’m Bill Lawson, Senior Director of Cloud Applications Product Management with Oracle University, and with me is Nikita Abraham, Team Lead of Editorial Services. Nikita: Hi everyone! Last week, we concluded our three-part series on multicloud, and today, we’re shifting gears and exploring an exciting new challenge that’s been thrown down by Oracle University. To tell us all about it, we have Peter Fernandez joining us. Peter is Senior Director of Cloud Certification at Oracle University. Hi Peter! We’re thrilled to have you with us today! Peter: Hi Niki, hi Bill! I’m delighted to be here. 01:02 Bill: So, Peter, let’s get straight into it. What’s this new challenge all about? Peter: The challenge, which we’re calling Raise Your Game, is an incredible opportunity for anyone looking to gain knowledge and gain professional skills about Oracle’s Fusion Cloud Applications. We launched a skills challenge on Feb 14, and it will continue until May 15, 2025. This challenge encourages you to build expertise in two key areas: Oracle Fusion Cloud Applications and Oracle Cloud Success Navigator. This training is geared towards anyone who could be a student in higher ed or someone pursuing a business degree, and Oracle customers and partners who are new to Oracle’s Applications or experienced consultants implementing business applications. 01:55 Nikita: And how exactly does the challenge help in building this expertise? Peter: The challenge has two levels. In Level 1, you’ll need to complete an Oracle Fusion Cloud Apps Foundations course and pass the corresponding exam. These courses are designed to deepen your understanding of the technology enablers in Oracle’s Fusion Cloud Applications and learn about Oracle’s Modern Best Practice, or OMBP. These are extremely helpful throughout all phases in the journey when implementing and using Oracle Fusion Cloud Applications. The Foundation training itself covers a wide range of topics, including core OMBP processes, key performance metrics, implementation considerations, and technology enablers like AI, ML, mobile, and analytics. 02:49 Bill: Before we move on, Peter, can you tell us more about Oracle Modern Best Practice? We discussed it a few weeks back, but for anyone who missed that episode, it’ll be nice to get a quick refresher. Peter: Sure, Bill. 
Implementing Oracle Fusion Cloud Applications successfully is more than just technology—it’s about following best practices that drive efficiency and success and tie back to business requirements. Oracle Modern Best Practice represents years of accumulated experience, industry insights, and proven methodologies. It serves as a guiding framework for implementing efficient business processes within Oracle Fusion Cloud Applications. These best practices map the features and innovations within Oracle applications to the processes that customers perform every day, and that is key. These curated, industry-leading practices detail how the features that we have built using the most modern technologies can be leveraged to optimize operations. Having a solid grasp of an OMBP and its associated technology enablers will empower you to ensure smoother business operations and higher customer satisfaction. It will show you how to automate activities, streamline tasks, improve results, and set your team up for continued success. The goal of these courses is to make it easy for implementers, global process owners, and IT teams to identify every opportunity to improve an organization’s business processes with Oracle Fusion Cloud Applications. 04:33 Bill: So, getting back to Level 1, what do I earn when I complete it? Peter: When you complete this level, you’ll earn a Level 1 Oracle University Learning Community badge. This recognizes that you have foundational knowledge in your chosen Fusion application. 04:48 Bill: That sounds exciting. And then there’s a Level 2? Peter: There is also a Level 2, and that’s where things get even more exciting. You’re going to take your knowledge to the next level by completing the Oracle Cloud Success Navigator Essentials course and passing the associated assessment. This level in the challenge focuses particularly on applying the knowledge you gained in Level 1. You’ll explore the Oracle Cloud Success Navigator's features and functionality and get the skills you need to lead organizations through their Oracle Fusion Cloud Applications implementation journey. 05:25 Nikita: And when I complete Level 2, I earn another badge? Peter: That’s right, Niki. When you successfully complete Level 2, you’ll earn a special Level 2 Oracle University Learning Community badge. The goal of Raise Your Game is to reach the Summit by completing both Level 1 and Level 2 challenges with the fastest time and the highest pass scores. Both of these combined determine your position on the leaderboard and whether you place in the Top 500, which is awarded separate prizes at the end of the challenge. 05:59 Nikita: So, when you’re done, you’ll have both theoretical and practical knowledge. And I understand that there are some fantastic prizes up for grabs? Peter: Absolutely, Niki. This helps with not only theoretical but also practical knowledge. Learners also have a chance to be featured on the leaderboard in the Oracle University Learning Community. The leaderboard showcases the people who have achieved Level 1 and Level 2 with the fastest times and the highest scores. Along with the badges I told you about, at the end of the promotion, the top 500 people who complete both Level 1 and Level 2 with the fastest time and highest pass scores will receive an Oracle-branded cap, an Oracle Success Navigator pin, and a special Oracle University Community Success Navigator digital badge. 06:52 Bill: So, Peter, who can participate in this challenge, and are there any prerequisites? 
Peter: The challenge is open to anyone interested in expanding their knowledge of Oracle Cloud Applications. And while there are no strict prerequisites, a basic understanding of business concepts and some familiarity with Oracle Cloud Applications is recommended. This will ensure that you’re able to make the most of the learning materials and engage with the content effectively. You can always check the program overview on the website if you have more questions about this challenge. We’ve got an FAQ posted there that should answer most anything you are curious about. 07:31 Bill: That’s good to know, Peter. And the fact that I get started no matter my level of experience is great news, too. Peter: Absolutely, Bill. Even if you are a beginner fresh out of college, or a seasoned pro like I mentioned earlier who has been implementing Oracle Fusion Cloud Applications (or other applications) for years, I would recommend the challenge and training to you. Basically, this training is for everyone. This program provides foundational knowledge to improve the implementation approach using Oracle Modern Best Practices. Even those individuals that are certified in Cloud Applications will benefit from learning how these modern best practices fit into their work. 08:12 Nikita: Ok, Peter, I’m ready to do it. How do I get started with the challenge? Peter: That’s great. The first step, of course, is to register. And you can do this by visiting oracle.com/education. That's Oracle's main site. oracle.com/education. And select the first tile that you'll see on the webpage, which is the Raise Your Game challenge. If you don't already have an Oracle MyLearn account, you'll need to create one and you'll be prompted to create one. This account gives you access to the Oracle MyLearning platform. Once you're registered, you'll have access to a curated list of learning paths and corresponding certifications. It's important that you review the official rules and promotion details before proceeding with the challenge. 09:12 Unlock the power of AI Vector Search with our new course and certification. Get more accurate search results, handle complex datasets easily, and supercharge your data-driven decisions. From now to May 15, 2025, we are waiving the certification exam fee (valued at $245). Visit mylearn.oracle.com to enroll. 09:40 Nikita: Welcome back! Ok, Peter, I’ve registered. What’s next? Peter: After you're done registering, you need to select the Cloud Application course that aligns with your interests and goals. We have courses on four different areas: Human Capital Management, Enterprise Resource Planning, Supply Chain Management, and Customer Experience. Once you complete the training and certification, you're done with Level 1. For Level 2, we have the Oracle Cloud Success Navigator Essentials course and assessment that you will need to complete. You can check your status on the leaderboard in the Oracle University Learning Community and share your progress on social media. Like I was saying, the time taken to complete each of these levels and the higher scores earned determines the Top 500 winners. 10:36 Bill: And the best part of this challenge is that it's completely free, right? Peter: Absolutely. There is no cost associated with participating in the skills challenge. It is completely free for anyone, anywhere in the world to participate as long as they comply with the official rules of the promotion. You can take any or all of the Foundation Associate certification exams at no cost. 
With multiple free attempts, there is no time limit for completing the exams, but to be eligible for the prizes, you must complete the exams and assessments by May 15, 2025. That's midnight GMT. 11:16 Nikita: What if someone doesn't pass the certification exam on their first attempt? Peter: If someone does not pass the certification exam on their first attempt, we understand that not everyone does. We've made provisions for that. If you don't pass the foundations associate certification exam, you have the option to retake the exam many times over. 11:37 Nikita: Now, Peter, let's say someone has already registered for the Fusion Cloud Applications Foundations Associate certification exam before joining the skills challenge. Will their exam be considered for the prizes? Peter: Well, that's a great question, Niki. If someone has already registered for the exam before joining the challenge, their exam will be considered for the prizes as long as they first join the skills challenge. This ensures that everyone who engages with the challenge has a fair chance to win. 12:06 Bill: Does course content consumed before the start of the challenge count towards the awards and badges? Peter: Unfortunately no, Bill. Any content consumed or purchased before Feb 14, 2025, that's again 12 AM GMT, does not apply retroactively to awards or prizes in the Raise Your Game challenge. We want everyone to start on an equal footing here. 12:29 Nikita: What about certifications earned before the challenge began? Peter: Again, certifications earned before Feb 14, 2025, again 12 AM GMT, do not qualify for the promotion. That ensures again that the challenge is fair for all participants. 12:48 Nikita: Now, Peter, how many free exam attempts do participants get as part of the challenge? Peter: Since all the Oracle Fusion Cloud Applications Foundations Associate Certification exams are free, there is no limit to the number of attempts. Participants can take these exams as many times as they need to. 13:05 Bill: And, Peter, say I want to take more than one of the Foundations courses and exams. Can I do that? Peter: Absolutely. This is a great way for someone to learn about the different areas of business that they may be familiar with. As I mentioned earlier, the Oracle Fusion Cloud Applications Foundations training is a program to provide you with knowledge of OMBPs, Oracle Modern Best Practices, that is, and Fusion Cloud Applications. So, it's a great opportunity to cross-skill. You can earn all four certifications if you choose. 13:38 Bill: Peter, thank you so much for joining us today and telling us all about this challenge. It is a really fantastic opportunity for everyone, whether you’re new to Fusion Cloud Applications or an experienced implementation professional, to boost your Oracle Cloud Apps expertise. We’re really excited to try it out ourselves! Peter: A sincere thank you to you, Bill and Niki. It's been an absolute pleasure. I'd really encourage everyone to jump on this challenge. It's a great way to enhance your learning journey and have some fun along the way. Nikita: I couldn’t agree more! Thanks Peter. That’s a wrap on this episode. Join us next week for another episode of the Oracle University Podcast. Until then, this is Nikita Abraham… Bill: And Bill Lawson, signing off! 14:19 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. 
We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
/episode/index/show/oracleuniversitypodcast/id/35943635
Oracle Database@Azure
03/25/2025
Oracle Database@Azure
The final episode of the multicloud series focuses on Oracle Database@Azure, a powerful cloud database solution. Hosts Lois Houston and Nikita Abraham, along with Senior Manager of CSS OU Cloud Delivery Samvit Mishra, discuss how this service allows customers to run Oracle databases within the Microsoft Azure data center, simplifying deployment and management. The discussion also highlights the benefits of native integration with Azure services, eliminating the need for complex networking setups. Oracle Cloud Infrastructure Multicloud Architect Professional: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode. --------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! For the last two weeks, we’ve been talking about different aspects of multicloud. In the final episode of this three-part series, Samvit Mishra, Senior Manager of CSS OU Cloud Delivery, joins us once again to tell us about the Oracle Database@Azure service. Hi Samvit! Thanks for being here today. Samvit: Hi Niki! Hi Lois! Happy to be back. 01:01 Lois: In our last episode, we spoke about the strategic partnership between Oracle and Microsoft, and specifically discussed the Oracle Interconnect for Azure. Nikita: Yeah, and Oracle Database@Azure is yet another addition to this partnership. What can you tell us about this service, Samvit? Samvit: The Oracle Database@Azure service, which was made generally available in 2023, runs right inside the Microsoft Azure data center and uses Azure networking. The entire Oracle Cloud Database Service infrastructure resides in the Azure data center, while it is managed by an expert Oracle Cloud Infrastructure operations team. It provides customers simple and secure access to Oracle Cloud database services within their chosen Azure deployment region, without getting into the complexity of managing networking between the cloud vendors. It is natively integrated with various Microsoft Azure services. This provides a seamless user experience when configuring and using the different Azure services with OCI Oracle database, since much of the complexity associated with the configuration is greatly simplified. There is no need to set up a private interconnect between Microsoft Azure and OCI because the service itself resides within the Azure data center and uses the Azure network. This is very beneficial in terms of strategic deployment because customers can experience microseconds network latency between the endpoints, while receiving a high-performance database environment. 02:42 Nikita: How do I get started with the Oracle Database@Azure service? Samvit: You begin by purchasing the subscription from Oracle and setting up your billing account. Then you provision the database, resources, and service. With that you are ready to configure your application to connect to the database and work on the remaining deployment. 
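As a small illustration of that last step, configuring an application to connect to the database, here is a minimal sketch using the python-oracledb driver. The host name, service name, and credentials are placeholders; in an Oracle Database@Azure deployment they would point at the Exadata VM cluster endpoint reachable from your Azure virtual network.

```python
# Minimal sketch: an Azure-hosted application connecting to an Oracle database
# over the Azure virtual network (all connection details are placeholders).
import oracledb

conn = oracledb.connect(
    user="app_user",
    password="app_password",
    dsn="exadb-vmcluster.example.internal:1521/appsvc_pdb1.example.com",
)

with conn.cursor() as cur:
    cur.execute("SELECT sysdate FROM dual")
    print("Connected, database time is:", cur.fetchone()[0])

conn.close()
```

Because the service runs inside the Azure data center and uses Azure networking, a connection like this stays on the private network rather than traversing the public internet.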
As you continue using the service, you can monitor the different resource metrics using the Azure monitoring services and analyze those logs using Azure Log Analytics. 03:15 Lois: So, the adoption is pretty easy, then. What about the responsibilities? Who is responsible for what? Samvit: The Oracle Cloud operations team is entirely responsible for managing the Exadata Database Infrastructure and the VM cluster resources that are provisioned in the Microsoft Azure data center. Oracle is responsible for maintaining the service software and infrastructure by applying updates as they are released. Any issues arising from the OCI Database Service and the resources will be addressed by Oracle Support. You have to raise a support ticket for them to investigate and provide a resolution. And as Azure customers, you have to do rightsizing, based on your workload needs, and provision the Exadata Database Infrastructure and VM cluster in the OCI pod within the Azure data center. You have to provision the database in Exadata Database Service, apply the database and system updates, and take advantage of the cloud automation to maintain and manage the database. You have to load data, establish the connectivity, and support development on your database. As a customer, you monitor the database and infrastructure metrics and events, and also analyze those logs using the Microsoft Azure-provided native services. 04:42 Nikita: Samvit, what sort of challenges were being faced by customers that necessitated the creation of the Oracle Database@Azure service? Samvit: A common deployment scenario in customer environments was that a lot of critical applications, which could be packaged applications, in-house applications, or customized third-party applications, used Oracle Database as their primary database solution. These Oracle databases were deployed in Exadata Infrastructure on-premises or even in Enterprise Server hardware. Some customers evaluated and migrated many of their packaged and other applications to Microsoft Azure compute. Since Oracle Exadata was not supported in Azure, they had to configure a hybrid deployment in order to use Oracle databases that reside in the Exadata infrastructure on-premises. They needed to configure a dedicated and secure network between the Azure data center and their on-premises data center. This added complexity, incurred high costs, had a latency effect, and was even unreliable. There were also cases where customers migrated Oracle databases on Enterprise Server on-premises to Oracle databases hosted on Azure compute. This did not boost efficiency to a large scale. And those were the only options available when provisioning Oracle Database in Azure because Exadata was not available earlier in Azure. 06:18 Lois: And how has that been resolved now? Samvit: With the Oracle Database@Azure service, customer requirements have been aptly met by allowing them to host their Oracle databases on Exadata infrastructure, right next to their application in the Azure data center. Customers, while migrating their applications to Azure compute, can also migrate their Oracle databases on-premises on Exadata infrastructure directly to Exadata Database Service in Azure. And Oracle databases that are on Enterprise Server on-premises can be consolidated directly into Exadata Database Service in Azure, providing them the benefits of scalability, security, performance, and availability, all that are inherent property of OCI Oracle Exadata Database Service. 
Customers can see growth in the operational efficiency, saving on the overall cost. 07:17 Nikita: Can you take us through the process of deployment? Samvit: It’s quite simple, actually. First, you deploy the Exadata Database Service that is plugged into Azure VNET. Next, you provision the required number of databases, which might be migrated as is or with a consolidated exercise. You can use any of the Oracle database tools or utilities to do the migration or even use the Oracle Zero Downtime Migration method to automate the entire Oracle database migration. Finally, migrate your enterprise application into the Azure environment. Establish the required network configuration to allow communication between the migrated applications and Oracle databases. And then you are all set to publish your application that is running entirely in Azure. You can leverage other Azure services, like monitoring, log analytics, Power BI, or DevOps tools, to enhance existing or even build and deploy newer enterprise applications that are powered by OCI Oracle Database Service in the back end. 08:25 Lois: What about multi-cloud deployment scenarios where applications reside in Azure, but the Oracle databases are deployed on third-party cloud providers, either as a native solution or in computes? Samvit: These Oracle databases can be migrated to Exadata Database Service in the Oracle Database@Azure service. There is no need for the complex cross-cloud connectivity setup between the vendors. And at the same time, you experience the lowest latency between the application and the database deployment. 09:05 Want to learn how to design stunning, responsive enterprise applications directly from your browser with minimal coding? The new Oracle APEX Developer Professional learning path and certification enables you to leverage AI-assisted development, including generative AI and Database 23ai, to build secure, scalable web and mobile applications with advanced AI-powered features. From now through May 15, 2025, we’re waiving the certification exam fee (valued at $245). So, what are you waiting for? Visit mylearn.oracle.com to get started today. 09:45 Nikita: Welcome back! Samvit, what’s the onboarding process like? Samvit: You have to complete the onboarding process to use the service in Microsoft Azure. But before you do that, you first have to complete the subscription process. You must have an active Microsoft Azure account subscription that will be used for subscribing and onboarding the Oracle Database@Azure service. To subscribe to Oracle Database@Azure, you need to purchase an Oracle Database@Azure private offer from Azure Marketplace. As a customer, you will first reach out to Oracle Sales and negotiate a price for the service. Oracle will provide you with the billing account ID and contact details of the person within the organization who will be handling the service. After this, Oracle will create a private offer in Azure Marketplace. 10:40 Lois: Sorry to interrupt you, but what’s a private offer? Samvit: That’s alright, Lois. Private offers are basically solutions or services created for customers by a Microsoft partner, which, in this case, is Oracle. Purchase of those private offers happens from the private offer management page of Azure Marketplace. But there is a prerequisite. The Azure account must be enabled to make private offer purchases on the subscription from Azure Marketplace. You can refer to the Azure documentation to enable the account, if it is not enabled. 
You review the offer terms and accept the purchase offer, which will take you to the Create Oracle Subscription page. You validate the subscription and other particulars and proceed with the process. After the service is deployed, the purchase status of the private offer changes to subscribed. There are a few points to note here. Billing and payment are done via Azure, and you can use Microsoft Azure Consumption Commitment. You can also use your on-premises licenses with the Bring Your Own License option and the Unlimited License Agreements to pay towards your service consumption. And you also receive Oracle Support rewards for every dollar spent on the service. 12:02 Nikita: OK, now that I’m subscribed, what’s next? Samvit: After you complete the subscription step, Oracle Database@Azure will appear as an Azure resource, just like any other Azure service, and you can move on to onboarding. Onboarding begins with the linking of your OCI account, which will be used for provisioning and managing database resources. The account is also used for provisioning infrastructure and software maintenance updates for the database service. You can either provide an existing OCI account or create a new one. Then you set up Identity Federation between the Azure account and the OCI tenancy. This can authenticate login to the OCI portal using Azure credentials, which you require while performing certain operations in OCI. For example, provisioning databases, getting infrastructure and software maintenance updates, and so on. This is an optional step, but it is recommended that you complete the Federation. The last step is to authorize users by assigning groups and roles in order to have the needed privileges to perform different operations. For example, some groups of users can manage Exadata Database Service resources in Azure, while some can manage the databases in OCI. You can refer to OCI documentation to get detailed descriptions of roles and group names. 13:31 Lois: Right. That will ensure you assign the correct permissions to the appropriate users. Samvit: Exactly. Assigning the correct roles and permissions to individuals inside the organization is a necessary step for transacting in the marketplace and guaranteeing a smooth purchasing experience. Azure Marketplace uses Azure Role-Based Access Control to enable you to acquire solutions certified to run on Azure. Those are then going to determine the purchasing privileges within the organization. 14:03 Nikita: There’s so much more we can discuss about Oracle Database@Azure, but we have to stop somewhere! Thank you so much, Samvit, for joining us over these last three episodes. Lois: Yeah, it’s been great to have you, Samvit. Samvit: Thank you for having me. Nikita: Remember, we also have the Oracle Database@Google Cloud service. So, if you want to learn about that, or even if you want to dive deeper into the topics we covered today, go to mylearn.oracle.com and search for the Oracle Cloud Infrastructure Multicloud Architect Professional course. Lois: There are a lot of demonstrations that you’ll surely find useful. Well, that’s all we have for today. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 14:43 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
/episode/index/show/oracleuniversitypodcast/id/35782215
Oracle Interconnect for Azure
03/18/2025
Oracle Interconnect for Azure
Join Lois Houston and Nikita Abraham as they interview Samvit Mishra, Senior Manager of CSS OU Cloud Delivery, on Oracle Interconnect for Azure. Learn how this interconnect revolutionizes the customer experience by providing a direct, private link between Oracle Cloud Infrastructure and Microsoft Azure. From use cases to bandwidth considerations, get an in-depth look into how Oracle and Azure come together to create a unified cloud experience. Oracle Cloud Infrastructure Multicloud Architect Professional: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode. --------------------------------------------------------------- Episode transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hey there! Last week, we spoke about multicloud, discussing what it is, and the new partnerships we have with Microsoft Azure, Google Cloud, and Amazon Web Services. If you haven’t gotten to the episode yet, we suggest you go back and listen to it before you dive into this one. 00:56 Nikita: Joining us again is Samvit Mishra, Senior Manager of CSS OU Cloud Delivery, and we’re going to ask him about Oracle Interconnect for Azure. We'll look at the scenarios around Oracle Interconnect for Azure and talk about some considerations too. Hi Samvit! Thanks for being with us today. Samvit: Hi Niki! Hi Lois! Lois: Samvit, you introduced Oracle Interconnect for Azure last week, but tell us, how does it improve the customer experience? What benefits does it offer? 01:25 Samvit: Oracle Interconnect for Azure can be established with a one-time setup, eliminating the need for an intermediary network provider. This cross-cloud direct connection also helps you migrate to the cloud or build cloud-native applications by using the best of OCI and Microsoft Azure. Now, because it is a private connection between Oracle Cloud Infrastructure and Microsoft Azure, you get consistent network performance… around 2 millisecond latency. The interconnect also enables joint customers to take advantage of a unified Identity and Access Management platform. So, you can set up single sign-on between Microsoft Azure and OCI for your Oracle applications, like PeopleSoft and e-Business Suite. 02:16 Nikita: That makes the integration pretty seamless, right? Samvit: Exactly, Niki. Having a federated single sign-on means you authenticate only once to access multiple applications, without signing in separately to access each application. And you also get a secure inter-cloud connection that bypasses the public internet. 02:38 Nikita: How extensive is the global reach of Oracle Cloud Infrastructure and Azure in terms of the number of cloud regions available? Samvit: OCI has the fastest growing network of global data centers, with 50 cloud regions available. And there are 12 Azure interconnect regions. For example, Ashburn in the US is an OCI-Azure interconnect region. 03:01 Lois: Samvit, what is the architecture of Oracle Interconnect for Azure like? 
How is data transferred securely between a Virtual Cloud Network in Oracle Cloud Infrastructure and a Virtual Network in Microsoft Azure? Samvit: A Virtual Network in a Microsoft Azure region is connected to a Virtual Cloud Network in an OCI region using a private interconnection composed of Azure ExpressRoute and OCI FastConnect. Now, on the OCI side, the FastConnect virtual circuit terminates at a dynamic routing gateway, which is attached to the Virtual Cloud Network. On the Microsoft Azure side, the ExpressRoute connection ends at a virtual network gateway, which is attached to a virtual network. So, traffic from Azure to OCI is routed through the virtual network gateway in Microsoft Azure to the dynamic routing gateway in OCI. What’s important to note is that in both directions, the traffic never leaves the private network. 04:05 Nikita: Wow, ok. Samvit, what are some common use cases of Oracle Interconnect for Azure? Can you give us an example of a supported deployment option? Samvit: We can have a .NET application running in Azure that can access an Oracle database in OCI. Similarly, you can also have custom cloud-native applications running on Azure using Oracle Autonomous Database on the OCI side. 04:29 Lois: And are there any prerequisites when you configure Oracle Interconnect for Azure? Samvit: Yes, there are. Remember, on the Azure side, you must have a virtual network with subnets and a virtual network gateway and on the OCI side, you must have a VCN with subnets and an attached dynamic routing gateway. 04:50 Lois: Let’s talk about the networking components that are involved in each site of the connection. Can you run us through the comparison? Samvit: Now, if we talk about the virtual network component, on the OCI side, there is a Virtual Cloud Network and on the Azure side, there is a Virtual Network. From a virtual circuit standpoint, in OCI, there is the FastConnect virtual circuit… on the Azure side, there is the ExpressRoute circuit. When it comes to the gateway, on the OCI side, there is the dynamic routing gateway and on the Azure side, there is the virtual network gateway. Similarly, for routing, there are route tables in OCI and Microsoft Azure. From a security standpoint, in OCI, you can configure security lists as well as network security groups and on the Azure side, you have network security groups. 05:44 Nikita: What are the benefits of this partnership? Samvit: This partnership allows you to innovate using the best combination of Oracle's and Microsoft’s cloud services based on their features, performance, and pricing. So, in a way, you can combine the capabilities of both cloud vendors. 06:01 Nikita: So, a one-stop shop. Samvit: Exactly, Niki. This partnership also gives you a highly optimized, secure, and unified cross-cloud experience so you can use the best of services from Oracle Cloud Infrastructure and Microsoft Azure. And the best part is you continue to leverage any existing investment in Oracle and Microsoft technologies. 06:24 Lois: I wanted to ask you about the typical scenarios where Oracle Interconnect for Azure is supported. Samvit: There are many scenarios where this Interconnect is supported. Let me run you through a couple of them. You could connect an OCI Virtual Cloud Network to an Azure Virtual Network. That’s a scenario that is supported. You could connect peered OCI VCNs in the same region to Azure. You could connect peered OCI VCNs in different regions to Azure. You could also connect services in Oracle Services Network to Azure. 
06:59 Lois: And are there any scenarios where this interconnect is not supported? Samvit: When the scenario involves connecting an on-premises environment to Azure via OCI VCN and vice versa, that is not supported. 07:16 Unlock the power of AI Vector Search with our new course and certification. Get more accurate search results, handle complex datasets easily, and supercharge your data-driven decisions. From now through May 15, 2025, we are waiving the certification exam fee (valued at $245). Visit mylearn.oracle.com to enroll. 07:42 Nikita: Welcome back! I want to explore these scenarios in a little more detail, Samvit. Samvit: OK. Imagine you have OCI on one side and Azure on the other. In this scenario, we have a dynamic routing gateway in OCI and a virtual network gateway in Azure. This is a basic configuration. With Oracle FastConnect and Azure ExpressRoute, customers can create a private interconnection between their OCI and Azure environments. Now in another scenario, we have VCNs in OCI, and they're peered together using a dynamic routing gateway. With this local peering, the peered VCN can talk to Azure through Oracle Interconnect for Azure. Here’s another scenario. We have VCNs in different OCI regions: one VCN in OCI Region 1 and another in OCI Region 2, with Azure sitting alongside. They have established a remote peering connection, and each VCN has its own dynamic routing gateway. Here's the kicker—the peered VCN in this architecture can also converse with Azure using the interconnect. Now think about this scenario. We have the dynamic routing gateway, but we have also added a service gateway to the VCN in OCI. This service gateway allows your VCN to privately access specific Oracle services without exposing data to the public internet. No internet gateway or NAT gateway is required to reach those specific services. Now, traffic from the VCN to the Oracle Services Network travels over the Oracle network fabric and never traverses the internet. Using Oracle Interconnect for Azure, resources in Azure can also privately access resources in Oracle Services Network. 09:38 Nikita: What are the bandwidth and cost considerations? Samvit: Pricing is based solely on the port capabilities of OCI FastConnect and your ExpressRoute. One thing you need to understand is that the cost of FastConnect is the same across all OCI regions. And there are no separate ingress or egress data charges. The cost of Azure ExpressRoute varies across regions and Oracle recommends that you use the local setting, which has no separate ingress or egress charges. Azure ExpressRoute supports bandwidth of up to 10 Gbps. FastConnect is available in 1, 2, 5, or 10 Gbps. So, the recommendation here is to choose one of these matching bandwidth options under ExpressRoute. 10:27 Lois: Thank you, Samvit, for taking the time to talk to us about Oracle Interconnect for Azure. Samvit: Thank you for having me. Nikita: Remember, Oracle also offers an interconnect solution with Google Cloud, which is very similar to the one with Azure. It too provides a direct, high-performance, and secure network connection with Oracle Cloud Infrastructure. So, if you want to learn more about it, head over to mylearn.oracle.com and search for the Oracle Cloud Infrastructure Multicloud Architect Professional course. Lois: In our next episode, we’ll take a close look at the Oracle Database@Azure service. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 11:07 That’s all for this episode of the Oracle University Podcast. 
If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
/episode/index/show/oracleuniversitypodcast/id/35737355
What is Multicloud?
03/11/2025
What is Multicloud?
This week, hosts Lois Houston and Nikita Abraham are shining a light on multicloud, a game-changing strategy involving the use of multiple cloud service providers. Joined by Senior Manager of CSS OU Cloud Delivery Samvit Mishra, they discuss why multicloud is becoming essential for businesses, offering freedom from vendor lock-in and the ability to cherry-pick the best services. They also talk about Oracle's pioneering role in multicloud and its partnerships with Microsoft Azure, Google Cloud, and Amazon Web Services. Oracle Cloud Infrastructure Multicloud Architect Professional: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode. ----------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me today is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Today, we’re moving on to multicloud. In our next three episodes, we’ll be discussing what multicloud is and why there’s so much of a buzz around it. With us is Samvit Mishra, Senior Manager of CSS OU Cloud Delivery. Hi Samvit! Thanks for joining us today. 00:55 Samvit: Hi Niki! Hi Lois! Happy to be here. Lois: So Samvit, we know that Oracle has been an early adopter of multicloud and a pioneer in multicloud services. But for anyone who isn’t familiar with what multicloud is, can you explain what it means? Samvit: Absolutely, Lois. Multicloud is a very simple, basic concept. It is the coordinated use of cloud services from more than one cloud service provider. 01:21 Nikita: But why would someone want to use more than one cloud service provider? Samvit: There are many reasons why a customer might want to leverage two or more cloud service providers. First, it addresses the very real concern of mitigating or avoiding vendor lock-in. By using multiple providers, companies can avoid being tied down to one vendor and maintain their flexibility. 01:45 Lois: That’s like not putting all your eggs in one basket, so to speak. Samvit: Exactly. Another reason is that customers want the best of breed. What that means is basically leveraging or utilizing the best product from one cloud service provider and pairing it against the best product from another cloud service provider. Getting a solution out of the combined products…out of the coordinated use of those services. 02:14 Nikita: So, it sounds like multicloud is becoming the new normal. And as we were saying before, Oracle was a pioneer in this space. But why did we embrace multicloud so wholeheartedly? Samvit: We recognized that our customers were already moving in this direction. Independent studies from Flexera found that 89% of the subjects of the study used multicloud. And we conducted our own study and came to similar numbers. Over 90% of our customers use two or more cloud service providers. HashiCorp, the big infrastructure as code company, came to similar numbers as well, 94%. They basically asked companies if multicloud helped them advance their business goals. And 94% said yes. And all this is very recent data. 
03:04 Lois: Can you give us the backstory of Oracle’s entry into the multicloud space? Samvit: Sure. So back in 2019, Oracle and Microsoft Azure joined forces and announced the interconnect service between Oracle Cloud Infrastructure and Microsoft Azure. The interconnect was between Oracle’s FastConnect and Microsoft Azure’s ExpressRoute. This was a big step, as it allowed for a direct connection between the two providers without needing a third-party. And now we have several of our data centers interconnected already. So, out of the 48 regions, 12 of them are already interconnected. And more are coming. And you can very easily configure the interconnect. This interconnectivity guarantees low latency, high throughput, and predictable performance. And also, on the OCI side, there are no egress or ingress charges for your data. There's also a product called Oracle Database@Azure, where Oracle and Microsoft deliver Oracle Database services in Microsoft Azure data centers. 04:12 Lois: That’s exciting! And what are the benefits of this product? Samvit: The main advantage is the co-location. Being co-located with the Microsoft Azure data center offers you native integration between Azure and OCI resources. No manual configuration of a private interconnect between the two providers is needed. You're going to get microsecond latency between your applications and the Oracle Database. The OCI-native Exadata Database Service is available on Oracle Database@Azure. This enables you to get the highest level of Oracle Database performance, scalability, security, and availability. And your tech support can be provided either from Microsoft or from Oracle. 05:03 Unlock the power of AI Vector Search with our new course and certification. Get more accurate search results, handle complex datasets easily, and supercharge your data-driven decisions. From now through May 15, 2025, we are waiving the certification exam fee (valued at $245). Visit mylearn.oracle.com to enroll. 05:30 Nikita: Welcome back. Samvit, there have been some new multicloud milestones from OCI, right? Can you tell us about them? Samvit: That’s right, Niki. I am thrilled to share the latest news on Oracle’s multicloud partnerships. We now have agreements with Microsoft Azure, Google Cloud, and Amazon Web Services. So, as we were discussing earlier, with Azure, we have the Oracle Interconnect for Azure and Oracle Database@Azure. Now, with Google Cloud, we have the Oracle Interconnect for Google Cloud. And it is very similar to the Oracle Interconnect for Azure. With Google Cloud, we have physically interconnected data centers and they provide a sub-2 millisecond latency private interconnection. So, you can come in and provision virtual circuits going from Oracle FastConnect to Google Cloud Interconnect. And the best thing is that there are no egress or ingress charges for your data. The way it is structured is you have your Oracle Cloud Infrastructure on one side, with your virtual cloud network, your subnets, and your resources. And on the other side, you have your Google Cloud router with your virtual private cloud subnet and your resources interconnecting. You initiate the connectivity on the Google Cloud side, retrieve the service key and provide that service key to Oracle Cloud Infrastructure, and complete the interconnection on the OCI side. So, for example, our US East Ashburn interconnect will match with us-east4 on the Google Cloud side. 07:08 Lois: Now, wasn’t the other major announcement Oracle Database@Google Cloud? 
Tell us more about that, please. Samvit: With Oracle Database@Google Cloud, you can run your applications on Google Cloud and the database as well inside the Google Cloud platform. That's the Oracle Cloud Infrastructure database co-located in Google Cloud platform data centers. It allows you to run native integration between GCP and OCI resources with no manual configuration of private interconnect between these two cloud service providers. That means no FastConnect, no Interconnect because, again, the database is located in the Google Cloud data center. And you're going to get microsecond latency and the OCI native Exadata Database Service. So, you're going to gain the highest level of Oracle Database performance, scalability, security, and availability. 08:04 Lois: And how is the tech support managed? Samvit: The technical support is a collaboration between Google Cloud and Oracle Cloud Infrastructure. That means you can either have the technical support provided to completion by Google Cloud or by Oracle. One of us will provide you with an end-to-end solution. 08:22 Nikita: During CloudWorld last year, we also announced Oracle Database@AWS, right? Samvit: Yes, Niki. That’s where Oracle and Amazon Web Services deliver the Oracle Database service on Oracle Cloud Infrastructure in your AWS data center. This will provide you with native integration between AWS and OCI resources, with no manual configuration of private interconnect between AWS and OCI. And you're getting microsecond latency with the OCI-native Exadata Database Service. And again, as with Oracle Database@Google Cloud and Oracle Database@Azure, you're gaining the highest level of Oracle Database performance, scalability, security, and availability. And the technical support is provided by either AWS or Oracle all the way to completion. Now, Oracle Database@AWS is currently available in limited preview, with broader availability in the coming months as it expands to new regions to meet the needs of our customers. 09:28 Lois: That’s great. Now, how does Oracle fare when it comes to pricing, especially compared to our major cloud competitors? Samvit: Our pricing is pretty consistent. You’ll see that in all cases across the world, we have the less expensive solution for you and the highest performance as well. 09:45 Nikita: Let’s move on to some use cases, Samvit. How might a company use the multicloud setup? Samvit: Let’s start with the split-stack architecture between Oracle Cloud Infrastructure and Microsoft Azure. Like I was saying earlier, this partnership dates back to 2019. And basically, we eliminated the FastConnect partner from the middle. And this will provide you with high throughput, low latency, and very predictable performance, all of this on highly available links. These links are redundant, ensuring business continuity between OCI and Azure. And you can have your database on the OCI side and your application on Microsoft Azure side or the other way around. You can have SQL Server on Azure and the application running on Oracle Cloud Infrastructure. And this is very easy to configure. 10:34 Lois: It really sounds like Oracle is at the forefront of the multicloud revolution. Thanks so much, Samvit, for shedding light on this exciting topic. Samvit: It was my pleasure. Nikita: That's a wrap for today. To learn more about what we discussed, head over to mylearn.oracle.com and search for the Oracle Cloud Infrastructure Multicloud Architect Professional course. 
In our next episode, we’ll take a close look at Oracle Interconnect for Azure. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 11:05 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
/episode/index/show/oracleuniversitypodcast/id/35620030
Oracle Fusion Cloud Applications Foundations Training & Certifications
03/04/2025
Oracle Fusion Cloud Applications Foundations Training & Certifications
In this special episode of the Oracle University Podcast, hosts Lois Houston and Nikita Abraham dive into Oracle Fusion Cloud Applications and the new courses and certifications on offer. They are joined by Oracle Fusion Apps experts Patrick McBride and Bill Lawson who introduce the concept of Oracle Modern Best Practice (OMBP), explaining how it helps organizations maximize results by mapping Fusion Application features to daily business processes. They also discuss how the new courses educate learners on OMBP and its role in improving Fusion Cloud Apps implementations. OMBP: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi everyone! For the last two months, we’ve been focusing on all things MySQL. But today, we wanted to share some really exciting news about new courses and certifications on Oracle Fusion Cloud Applications that feature Oracle Modern Best Practice, or OMBP, and Oracle Cloud Success Navigator. 00:57 Nikita: And to tell us more about this, we have two very special guests joining us today. Patrick McBride is a Senior Director from the Fusion Application Development organization. He leads the Oracle Modern Best Practice Program office for Oracle. And Bill Lawson is a Senior Director for Cloud Applications Product Management here at Oracle University. We’ll first ask Patrick about Oracle Modern Best Practice and then move on to Bill for details about the new training and certification we’re offering. Patrick, Bill, thanks for being here today. Patrick: Hey, Niki and Lois, thanks for the invitation. Happy to be here. Bill: Hi Niki, Lois. 01:32 Lois: Patrick, let’s start with some basic information about what OMBP are. Can you tell us a little about why they were created? Patrick: Sure, love to. So, modern best practices are more than just a business process. They’re really about translating features and technology into actionable capabilities in our product. So, we've created these by curating industry leading best practices we've collected from our customers over the years. And ensure that the most modern technologies that we've built into the Fusion Application stack are represented inside of those business processes. Our goal is really to help you as customers improve your business operations by easily finding and applying those technologies to what you do every day. 02:18 Nikita: So, by understanding these modern best practice and the technology that enables it, you’re really unlocking the full potential of Fusion Apps. Patrick: Absolutely. So, the goal is that modern best practice make it really easy for customers, implementers, partners, to see the opportunity and take action. 02:38 Lois: That’s great. OK, so, let’s talk about implementations, Patrick. How do Oracle Modern Best Practice support customers throughout the lifecycle of an Oracle Fusion Cloud implementation? 
Patrick: What we found during many implementers’ journeys of taking our solution and trying to apply it with customers is that customers come in with a long list of capabilities that they’re asking us to replicate: what they’ve always done in the past. And what modern best practice is trying to do is help customers reimagine the art of the possible…what's possible with Fusion…by taking advantage of innovative features like AI, IoT, and all of the other solutions that we built in to help you automate your processes and get the most out of the solution using the latest and greatest technology. So, if you're an implementer, there are a number of ways a modern best practice can help during an implementation. First is that reimagine exercise, where you can help the customer see what's possible and how we can do it in a better way. I think more importantly though, as you go through your implementation, many customers aren't able to get everything done by the time they have to go live. They have a list of things they’ve deferred, and modern best practice really establishes itself as a road map for success: you can go back to it after go-live to see what opportunities are left to take advantage of, and you can use it to track the continuous innovation that Oracle delivers with every release and see what's changed with that business process and how you can get the most out of it. 04:08 Nikita: Thanks, Patrick. That’s a great primer on OMBP that I’m sure everyone will find very helpful. Patrick: Thanks, Niki. We want our customers to understand the value of modern best practices so they can really maximize their investment in Oracle technology today and in the future as we continue to innovate. 04:24 Lois: Right. And the way we’re doing that is through new training and certifications that are closely aligned with OMBP. Bill, what can you tell us about this? Bill: Yes, sure. So, the new Oracle Fusion Applications Foundations training program is designed to help partners and customers understand Oracle Modern Best Practice and how they improve the entire implementation journey with Fusion Cloud Applications. As a learner, you will understand how to adhere to these practices and how they promise a greater level of success and customer satisfaction. So, whether you’re designing, or implementing, or going live, you’ll be able to get it right on day one. So, like Patrick was saying, these OMBPs are reimagined, industry-standard business processes built into Fusion Applications. So, you'll also discover how technologies like AI, Mobile, and Analytics help you automate tasks and make smarter decisions. You’ll see how data flows between processes and get tips for successful go-lives. So, the training we’re offering includes product demonstrations, key metrics, and design considerations to give you a solid understanding of modern best practice. It also introduces you to Oracle Cloud Success Navigator and how it can be leveraged and relied upon as a trusted source to guide you through every step of your cloud journey, from planning, design, and implementation to user acceptance testing and post-go-live innovations with each quarterly release of Fusion Applications and its new features. And then, the training also prepares you for Oracle Cloud Applications Foundations certifications. 05:55 Nikita: Which applications does the training focus on, Bill? Bill: Sure, so the training focuses on four key pillars of Fusion Apps and the OMBP associated with them. 
For Human Capital Management, we cover Human Resources and Talent Management. For Enterprise Resource Planning, it’s all about Financials, Project Management, and Risk Management. In Supply Chain Management, you’ll look at Supply Chain, Manufacturing, Inventory, Procurement, and more. And for Customer Experience, we’ll focus on Marketing, Sales, and Service. 06:24 Lois: That’s great, Bill. Now, who is the training and certification for? Bill: That’s a great question. So, it’s really for anyone who wants to get the most out of Oracle Fusion Cloud Applications. It doesn’t matter if you’re an experienced professional or someone new to Fusion Apps, this is a great place to start. It’s even recommended for professionals with experience in implementing other applications, like on-premise products. The goal is to give you a solid foundation in Oracle Modern Best Practice and show you how to use them to improve your implementation approach. We want to make it easy for anyone, whether you’re an implementer, a global process owner, or an IT team employee, to identify every way Fusion Applications can improve your organization. So, if you’re new to Fusion Apps, you’ll get a comprehensive overview of Oracle Fusion Applications and how to use OMBP to improve business operations. If you're already certified in Oracle Cloud Applications and have years of experience, you'll still benefit from learning how OMBP fits into your work. If you’re an experienced Fusion consultant who is new to Oracle Modern Best Practice processes, this is a good place to begin and learn how to apply them and the latest technology enablers during implementations. And, lastly, if you’re an on-premise or you have non-Fusion consultant skills looking to upskill to Fusion, this is a great way to begin acquiring the knowledge and skills needed to transition to Fusion and migrate your existing expertise. 07:53 Raise your game with the Oracle Cloud Applications skills challenge. Get free training on Oracle Fusion Cloud Applications, Oracle Modern Best Practice, and Oracle Cloud Success Navigator. Pass the free Oracle Fusion Cloud Foundations Associate exam to earn a Foundations Associate certification. Plus, there’s a chance to win awards and prizes throughout the challenge! What are you waiting for? Join the challenge today by visiting oracle.com/education. 08:27 Nikita: Welcome back! Bill, how long is it going to take me to complete this training program? Bill: So, we wanted to make this program detailed enough so our learners find it valuable, obviously. But at the same time, we didn’t want to make it too long. So, each course is approximately 5 hours or more, and provides folks with all the requisite knowledge they need to get started with Oracle Modern Best Practice and Fusion Applications. 08:51 Lois: Bill, is there anything that I need to know before I take this course? Are there any prerequisites? Bill: No, Lois, there are no prerequisites. Like I was saying, whether you’re fresh out of college or a seasoned professional, this is a great place to start your journey into Fusion Apps and Oracle Modern Best Practice. 09:06 Nikita: That’s great, you know, that there are no barriers to starting. Now, Bill, what can you tell us about the certification that goes along with this new program? Bill: The best part, Niki, is that it’s free. In fact, the training is also free. 
We have four courses and corresponding Foundation Associate–level certifications for Human Capital Management, Enterprise Resource Planning, Supply Chain Management, and Customer Experience. So, completing the training prepares you for an hour-long exam with 25 questions. It’s a pretty straightforward way to validate your expertise in Oracle Modern Best Practice and Fusion Apps implementation considerations. 09:40 Nikita: Ok. Say I take this course and certification. What can I do next? Where should my learning journey take me? Bill: So, you’re building knowledge and expertise with Fusion Applications, correct? So, once you take this training and certification, I recommend that you identify a product area you want to specialize in. So, if you take the Foundations training for HCM, you can dive deeper into specialized paths focused on implementing Human Resources, Workforce Management, Talent Management, or Payroll applications, for example. The same goes for other product areas. If you finish the certification for Foundations in ERP, you may choose to specialize in Finance or Project Management and get your professional certifications there as your next step. So, once you have this foundational knowledge, moving on to advanced learning in these areas becomes much easier. We offer various learning paths with associated professional-level certifications to deepen your knowledge and expertise in Oracle Fusion Cloud Applications. So, you can learn more about these courses by visiting oracle.com/education/training/ to find out more of what Oracle University has to offer. 10:43 Lois: Right. I love that we have a clear path from foundational-level training to more advanced levels. So, as your skills grow, we’ve got the resources to help you move forward. Nikita: That’s right, Lois. Thanks for walking us through all this, Patrick and Bill. We really appreciate you taking the time to join us on the podcast. Bill: Yeah, it’s always a pleasure to join you on the podcast. Thank you very much. Patrick: Oh, thanks for having me, Lois. Happy to be here. Lois: Well, that’s all the time we have for today. If you have questions or suggestions about anything we discussed today, you can write to us at [email protected]. That’s [email protected]. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 11:29 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Monitoring MySQL and HeatWave
02/25/2025
In this episode, Lois Houston and Nikita Abraham chat with MySQL expert Perside Foster on the importance of keeping MySQL performing at its best. They discuss the essential tools for monitoring MySQL, tackling slow queries, and boosting overall performance. They also explore HeatWave, the powerful real-time analytics engine that brings machine learning and cross-cloud flexibility into MySQL. MySQL 8.4 Essentials: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ---------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me today is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hey everyone! In our last two episodes, we spoke about MySQL backups, exploring their critical role in data recovery, error correction, data migration, and more. Lois: Today, we’re switching gears to talk about monitoring MySQL instances. We’ll also explore the features and benefits of HeatWave with Perside Foster, a MySQL Principal Solution Engineer at Oracle. 01:02 Nikita: Hi, Perside! We’re thrilled to have you here for one last time this season. So, let’s start by discussing the importance of monitoring systems in general, especially when it comes to MySQL. Perside: Database administrators face a lot of challenges, and these sometimes appear in the form of questions that a DBA must answer. One of the most basic question is, why is the database slow? To address this, the next step is to determine which queries are taking the longest. Queries that take a long time might be because they are not correctly indexed. Then we get to some environmental queries or questions. How can we find out if our replicas are out of date? If lag is too much of a problem? Can I restore my last backup? Is the database storage likely to fill up any time soon? Can and should we consider adding more servers and scaling out the system? And when it comes to users and making sure they're behaving correctly, has the database structure changed? And if so, who did it and what did they do? And more generally, what security issues have arisen? How can I see what has happened and how can I fix it? Performance is always at the top of the list of things a DBA worries about. The underlying hardware will always be a factor but is one of the things a DBA has the least flexibility with changing over the short time. The database structure, choice of data types and the overall size of retained data in the active data set can be a problem. 03:01 Nikita: What are some common performance issues that database administrators encounter? Perside: The sort of SQL queries that the application runs can be an issue. 90% of performance problems come from the SQL index and schema group. 03:18 Lois: Perside, can you give us a checklist of the things we should monitor? Perside: Make sure your system is working. Monitor performance continually. Make sure replication is working. Check your backup. Keep an eye on disk space and how it grows over time. Check when long running queries block your application and identify those queries. 
Protect your database structure from unauthorized changes. Make sure the operating system itself is working fine and check that nothing unusual happened at that level. Stay aware of security vulnerabilities in your software and operating system and ensure that they are kept updated. Verify that your database memory usage is under control. 04:14 Lois: That’s a great list, Perside. Thanks for that. Now, what tools can we use to effectively monitor MySQL? Perside: The slow query log is a simple way to monitor long running queries. Two variables control which queries get logged. The first is long_query_time: if a query takes longer than this many seconds, it gets logged. And then there's min_examined_row_limit: a slow query is only logged if it examines at least this many rows. The slow query log doesn't ordinarily record administrative statements or queries that don't use indexes. Two variables control this, log_slow_admin_statements and log_queries_not_using_indexes. Once you have found a query that takes a long time to run, you can focus on optimizing the application, either by limiting this type of query or by optimizing it in some way. 05:23 Nikita: Perside, what tools can help us optimize slow queries and manage data more efficiently? Perside: To help you with processing the slow query log file, you can use the mysqldumpslow utility to summarize slow queries. Another important monitoring feature of MySQL is the performance schema. It's a system database that provides statistics of how MySQL executes at a low level. Unlike user databases, performance schema does not persist data to disk. It uses its own storage engine that is flushed every time we start MySQL. And it has almost no interaction with the storage media, making it very fast. This performance information belongs only to the specific instance, so it's not replicated to other systems. Also, performance schema does not grow infinitely large. Instead, each row is recorded in a fixed size ring buffer. This means that when it's full, it starts again at the beginning. The SYS schema is another system database that's strongly related to performance schema. 06:49 Nikita: And how can the SYS schema enhance our monitoring efforts in MySQL? Perside: It contains helper objects like views and stored procedures. They help simplify common monitoring tasks and can help monitor server health and diagnose performance issues. Some of the views provide insights into I/O hotspots, blocking and locking issues, statements that use a lot of resources, and various statistics on your busiest tables and indexes. 07:26 Lois: Ok… can you tell us about some of the features within the broader Oracle ecosystem that enhance our ability to monitor MySQL? Perside: As an Oracle customer, you also have access to Oracle Enterprise Manager. This tool supports a huge range of Oracle products. And for MySQL, it's used to monitor performance, system availability, your replication topology, InnoDB performance characteristics and locking, bad queries caught by the MySQL Enterprise firewall, and events that are raised by the MySQL Enterprise audit. 08:08 Nikita: What would you say are some of the standout features of Oracle Enterprise Manager? Perside: When you use MySQL in OCI, you have access to some really powerful features. HeatWave MySQL enables continuous monitoring of query statistics and performance. The health monitor is part of the MySQL server and gathers raw data about the performance of queries. You can see summaries of this information in the Performance Hub in the OCI Console.
For example, you can see average statement latency or top 100 statements executed. MySQL metrics lets you drill in with your own custom monitoring queries. This works well with existing OCI features that you might already know. The observability and management framework lets you filter by resource type and across several dimensions. And you can configure OCI alarms to be notified when some condition is reached. 09:20 Lois: Perside, could you tell us more about MySQL metrics? Perside: MySQL metrics uses the raw performance data gathered by the health monitor to measure the important characteristic of your servers. This includes CPU and storage usage and information relevant to your database connection and queries executed. With MySQL metrics, you can create your own custom monitoring queries that you can use to feed graphics. This gives you an up to the minute representation of all the performance characteristics that you're interested in. You can also create alarms that trigger on some performance condition. And you can be notified through the OCI alarms framework so that you can be aware instantly when you need to deal with some issue. 10:22 Are you keen to stay ahead in today's fast-paced world? We’ve got your back! Each quarter, Oracle rolls out game-changing updates to its Fusion Cloud Applications. And to make sure you’re always in the know, we offer New Features courses that give you an insider’s look at all of the latest advancements. Don't miss out! Head over to mylearn.oracle.com to get started. 10:47 Nikita: Welcome back! Now, let’s dive into the key features of HeatWave, the cloud service that integrates with MySQL. Can you tell us what HeatWave is all about? Perside: HeatWave is the cloud service for MySQL. MySQL is the world's leading database for web applications. And with HeatWave, you can run your online transaction processing or OLTP apps in the cloud. This gives you all the benefits of cloud deployments while keeping your MySQL-based web application running just like they would on your own premises. As well as OLTP applications, you need to run reports with Business Intelligence and Analytics Dashboards or Online Analytical Processing, or OLAP reports. The HeatWave cluster provides accelerated analytics queries without requiring extraction or transformation to a separate reporting system. This is achieved with an in-memory analytics accelerator, which is part of the HeatWave service. In addition, HeatWave enables you to create Machine Learning models to embed artificial intelligence right there in the database. The ML accelerator performs classification, regression, time-series forecasting, anomaly detection, and other functions provided by the various models that you can embed in your architecture. HeatWave can also work directly with storage outside the database. With HeatWave Lakehouse, you can run queries directly on data stored in object storage in a variety of formats without needing to import that data into your MySQL database. 12:50 Lois: With all of these exciting features in HeatWave, Perside, what core MySQL benefits can users continue to enjoy? Perside: The reason why you chose MySQL in the first place, it's still a relational database and with full transactional support, low latency, and high throughput for your online transaction processing app. It has encryption, compression, and high availability clustering. It also has the same large database support with up to 256 terabytes support. 
It has advanced security features, including authentication, data masking, and database firewall. But because it's part of the cloud service, it comes with automated patching, upgrades, and backup. And it is fully supported by the MySQL team. 13:50 Nikita: Ok… let’s get back to what the HeatWave service entails. Perside: The HeatWave service is a fully managed MySQL. Through the web-based console, you can deploy your instances and manage backups, enable high availability, resize your instances, create read replicas, and perform many common administration tasks without writing a single line of SQL. It brings with it the power of OCI and MySQL Enterprise Edition. As a managed service, many routine DBA tests are automated. This includes keeping the instances up to date with the latest version and patches. You can run analytics queries right there in the database without needing to extract and transform your databases, or load them in another dedicated analytics system. 14:52 Nikita: Can you share some common use cases for HeatWave? Perside: You have your typical OLTP workloads, just like you'd run on prem, but with the benefit of being managed in the cloud. Analytic queries are accelerated by HeatWave. So your reporting applications and dashboards are way faster. You can run both OLTP and analytics workloads from the same database, keeping your reports up to date without needing a separate reporting infrastructure. 15:25 Lois: I’ve heard a lot about HeatWave AutoML. Can you explain what that is? Perside: HeatWave AutoML enables in-database artificial intelligence and Machine Learning. Externally sourced data stores, such as sensor data exported to CSV, can be read directly from object store. And HeatWave generative AI enables chatbots and LLM content creation. 15:57 Lois: Perside, tell us about some of the key features and benefits of HeatWave. Perside: Autopilot is a suite of AI-powered tools to improve the performance and applicability of your HeatWave queries. Autopilot includes two features that help cut costs when you provision your service. There's auto provisioning and auto shape prediction. They analyze your existing use case and tell you exactly which shape you must provision for your nodes and how many nodes you need. Auto parallel loading is used when you import data into HeatWave. It splits the import automatically into an optimum number of parallel streams to speed up your import. And then there's auto data placement. It distributes your data across the HeatWave cluster node to improve your query retrieval performance. Auto encoding chooses the correct data storage type for your string data, cutting down storage and retrieval time. Auto error recovery automatically recovers a fail node and reloads data if that node becomes unresponsive. Auto scheduling prioritizes incoming queries intelligently. An auto change propagation brings data optimally from your DB system to the acceleration cluster. And then there's auto query time estimation and auto query plan improvement. They learn from your workload. They use those statistics to perform on node adaptive optimization. This optimization allows each query portion to be executed on every local node based on that node's actual data distribution at runtime. Finally, there's auto thread pooling. It adjusts the enterprise thread pool configuration to maximize concurrent throughput. It is workload-aware, and minimizes resource contention, which can be caused by too many waiting transactions. 
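To make the analytics acceleration Perside has been describing a little more concrete, here is a minimal sketch of how a table is offloaded to the HeatWave cluster. The schema and table names are hypothetical, and the exact workflow should be confirmed against the HeatWave documentation for your release.

    -- Declare RAPID (the HeatWave engine) as the table's secondary engine
    ALTER TABLE sales.orders SECONDARY_ENGINE = RAPID;

    -- Load the table into HeatWave cluster memory
    ALTER TABLE sales.orders SECONDARY_LOAD;

    -- Eligible analytic queries are now offloaded automatically;
    -- the EXPLAIN output indicates when the secondary engine is used
    EXPLAIN SELECT customer_id, SUM(order_total)
    FROM sales.orders
    GROUP BY customer_id;

Autopilot's auto parallel loading and auto data placement, mentioned above, come into play during that SECONDARY_LOAD step.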
18:24 Lois: How does HeatWave simplify analytics within MySQL and with external data sources? Perside: HeatWave in Oracle Cloud Infrastructure provides all the features you need for analytics, all in one system. Your classic OLTP application run on the MySQL database that you know and love, provision in a DB system. On-line analytical processing is done right there in the database without needing to extract and load it to another analytic system. With HeatWave Lakehouse, you can even run your analytics queries against external data stores without loading them to your DB system. And you can run your machine learning models and LLMs in the same HeatWave service using HeatWave AutoML and generative AI. HeatWave is not just available in Oracle Cloud Infrastructure. If you're tied to another cloud vendor, such as AWS or Azure, you can use HeatWave from your applications in those cloud too, and at a great price. 19:43 Nikita: That's awesome! Thank you, Perside, for joining us throughout this season on MySQL. These conversations have been so insightful. If you’re interested in learning more about the topics we discussed today, head over to mylearn.oracle.com and search for the MySQL 8.4: Essentials course. Lois: This wraps up our season on the essentials of MySQL. But before we go, we just want to remind you to write to us if you have any feedback, questions, or ideas for future episodes. Drop us an email at [email protected]. That’s [email protected]. Nikita: Until next time, this is Nikita Abraham… Lois: And Lois Houston, signing off! 20:33 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
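Earlier in this episode, Perside walked through the slow query log variables and the sys schema helper views. As a companion to that discussion, here is a minimal sketch; the threshold values are illustrative, not recommendations.

    -- Enable the slow query log and set the thresholds described in the episode
    SET GLOBAL slow_query_log = ON;
    SET GLOBAL long_query_time = 2;            -- log queries running longer than 2 seconds
    SET GLOBAL min_examined_row_limit = 10000; -- only log slow queries that examined at least 10,000 rows

    -- Optionally include administrative statements and unindexed queries
    SET GLOBAL log_slow_admin_statements = ON;
    SET GLOBAL log_queries_not_using_indexes = ON;

    -- Use a sys schema helper view to find the heaviest statements
    SELECT query, exec_count, total_latency, rows_examined_avg
    FROM sys.statement_analysis
    ORDER BY total_latency DESC
    LIMIT 10;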
MySQL Backup - Part 2
02/19/2025
Lois Houston and Nikita Abraham continue their conversation with MySQL expert Perside Foster, with a closer look at MySQL Enterprise Backup. They cover essential features like incremental backups for quick recovery, encryption for data security, and monitoring with MySQL Enterprise Monitor—all to help you manage backups smoothly and securely. MySQL 8.4 Essentials: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi there! Last week was the first of a two-part episode covering the different types of backups and why they're important. Today, we’ll look at how we can use MySQL Enterprise Backup for efficient and consistent backups. 00:52 Nikita: And of course, we’ve got Perside Foster with us again to walk us through all the details. Perside, could you give us an overview of MySQL Enterprise Backup? Perside: MySQL Enterprise Backup is a form of physical backup at its core, so it's much faster for large data sets than logical backups, such as the most commonly used MySQL Dump. Because it backs up the data files, it's non-locking and enables either complete system backup or partial backup, focusing only on specific databases. 01:29 Lois: And what are the benefits of using MySQL Enterprise Backup? Perside: You can back up to local storage or direct-to-common-cloud storage types. You can perform incremental backups, which can speed up your backup process greatly. Incremental backups enable point-in-time recovery. It's useful when you need to restore to a point in time before some application or human error occurred. Backups can be compressed to save archival storage requirements and encrypted for regulatory compliance and offline data security. 02:09 Nikita: So we know MySQL Enterprise Backup is an impressive tool, but could you talk more about some of the main features it supports for creating and managing backups? Specifically, which tools are integrated within MySQL Enterprise to support different backup scenarios? Perside: MySQL Enterprise Backup supports SBT, implemented by many common Tape storage systems. MySQL Enterprise Backup supports optimistic backup. This process deals with busy tables separately from the rest of the database. It can record changes that happen in the database during the backup for consistency. In a large data set, this can make a huge difference in performance. MySQL Enterprise Backup runs on all supported platforms. It's available when you have a MySQL Enterprise Edition license. And it comes with Enterprise Edition, but it also is available as a separate package. You can get the most recent version from eDelivery, where you can also get a trial version. If you need a previous release, you can get that from My Oracle Support. It's also available in all versions of MySQL, whether you run a Long-Term support version or an Innovation Release. For LTS releases, MySQL Enterprise Backup supports MySQL instances of the same LTS release. 
For Innovation releases, it supports the previous LTS release and any subsequent Innovation version within the same LTS family. 04:03 Nikita: How does MySQL Enterprise Monitor manage and track backup processes? Perside: MySQL Enterprise Monitor has a dashboard for monitoring MySQL Enterprise Backup. The dashboard monitors the health of backup process and usage throughout the entire Enterprise fleet, not just a single server. It supports drilling down into specific sub-operations within a backup job. You can see information about full backups, partial backups, and incremental backups. You can configure alerts that will notify you in the event of delays, failures, or backups that have not been performed in some configuration time period. 04:53 Lois: Ok…let’s get into the mechanics. I understand that MySQL Enterprise Backup uses binary logs as part of its backup process. Can you explain how these logs fit into the bigger picture of maintaining database integrity? Perside: MySQL Enterprise Backup is a utility designed specifically for backing up MySQL systems in the most efficient and flexible way. At its simplest, it performs a physical backup of the data files, so it is fast. However, it also records the changes that were made during the time it took to do the backup. So, the result is that you get a consistent backup of the data at the time the backup completed. This backup is not tied to the host system and can be moved to other hosts. It can be used for archiving and is fully supported as part of the MySQL Enterprise Edition. It is, however, tied to the specific version of MySQL from which the backup was taken. So, you cannot use it for upgrades where the destination server is an upgrade from the source. For example, if you take a backup from MySQL 5.7, you can't directly restore it to MySQL 8.0. As a part of MySQL Enterprise Edition, it's not part of the freely available Community Edition. 06:29 Lois: Perside, how do MySQL's binary logs track changes over time? And why is this so critical for backups? Perside: The binary logs record changes to the database. These changes are recorded in a sequential set of files numbered incrementally. MySQL logs changes either in statement-based form, where each log entry records the statement that gives rise to the change, or in row-based form where the actual change row data is recorded. If you select mixed format, then MySQL records statements for most operations and records row for changes where the statement might result in a different row value each time it's run, for example, where there's a generated value like autoincrement. The current log file grows as changes are recorded. When it reaches its maximum configured size, that log file is closed, and the next sequential file is created for new logs. You can make this happen automatically by using the FLUSH BINARY LOGS command. This does not delete any existing log files. 07:59 Nikita: But what happens if you want to delete the log files? Perside: If you want to delete all log files, you can do so manually with the PURGE BINARY LOGS command, either specifying a file or a date time. 08:14 Lois: When it comes to tracking transactions, MySQL provides a couple of methods, right? Can you explain the differences between Global Transaction Identifiers and the traditional log file sequence? 
Perside: Log file positioning uses one of two formats: either legacy, where you specify transactions with a log file and a sequence number, or global transaction identifiers, or GTIDs, where each transaction is identified with a universally unique identifier or UUID. When you apply a transaction to the source server, that is when the GTID is attached to the transaction. This makes it particularly useful in replication topologies so that each transaction is uniquely identified by both its server ID and the transaction sequence number. When such a transaction is replicated to other hosts, the transaction retains its original GTID so that you can track when that transaction has propagated to the replicas and has been applied. The global transaction identifier is unique across the entire network. 09:49 Have you mastered the basics of AI? Are you ready to take your skills to the next level? Unlock the potential of advanced AI with our OCI Generative AI Professional course and certification that covers topics like LLMs, the OCI Generative AI Service, and building Q&A chatbots for real-world applications. Head over to mylearn.oracle.com and find out more. 10:19 Nikita: Welcome back! Let’s move on to replication. How does MySQL’s legacy log format handle transactions, and what does that mean for replication timing across different servers? Perside: Legacy format binary logs are non-transactional. This means that a transaction made up of multiple modifications is logged as a sequence of changes. It's possible that different hosts in a replication network apply those changes at different times. Each server that uses legacy binary logging maintains the current applied log position as coordinates based on a combination of the binary log file and the position within that log file. 11:11 Nikita: Troubleshooting with legacy logs can be quite complex, right? So, how does the lack of unique transaction IDs make it more difficult to address replication issues? Perside: Because each server has its own log with its own transactions, these modifications could have entirely different coordinates, making it challenging to find the specific modification point if you need to do any deep dive troubleshooting, for example, if one replica fell partway through applying a transaction and you need to partially roll it back manually. On the other hand, when you enable GTIDs, the transaction applied on the source host has that globally unique identifier attached to the whole transaction as a sequence of unique IDs. When the second or subsequent servers apply those transactions, they have exactly the same identifier, making it both transaction-safe for MySQL and also easier to troubleshoot if you need to. 12:26 Lois: How can you use binary logs to perform a point-in-time recovery in MySQL? Perside: First, you restore the last full backup. Once you've restarted the restored server, find the current log position of that backup, either its GTID or its log sequence number. The SHOW BINARY LOG STATUS command shows this information. Then you can use the mysqlbinlog utility to replay events from the binary log files, specifying the start and stop position containing the range of log operations that you wish to apply. You can pipe the output of mysqlbinlog to the MySQL client if you want to execute the changes immediately, or you can redirect the output to a script file if you want to examine and perhaps edit the changes. 13:29 Nikita: And how do you save binary logs?
Perside: You can save binary logs to use in disaster recovery, for point-in-time restores, or for incremental backups. One way to do this is to flush the logs so that the log file closes and is ready for copying. And then copy it to a different server to protect against hardware media failures. You can also use the mysqlbinlog utility to create a copy of a set of binary log files in the same format, but to a different file or set of files. This can be useful if you want to run mysqlbinlog continuously, copying from the source server binary log to a new location, perhaps in network storage. If you do this, remember that mysqlbinlog does not run as a service or daemon, so you'll need to monitor it to make sure it's running continually. 14:39 Lois: Can you take us through how the MySQL Enterprise Backup process works? What does it do when performing a backup? Perside: First, it performs a physical file copy of necessary data and log files. This can be done while the server is fully operational, and it has minimal impact on performance. Once this initial copy is taken, it applies a low impact backup lock on the instance. If you have any tables that are not using InnoDB, the backup cannot guarantee transaction-safe consistency for those tables. It applies a read lock to those tables so that it can guarantee consistency. Then it briefly locks all logging activity to take a consistent view of the current coordinates of various logs. It releases the read lock on non-transactional tables. Using the log coordinates that were taken earlier in the process, it gathers all logs for transactions that have occurred since then. Bear in mind that the backup process takes place while the system is active. So, for a consistent backup, it must record not only the data files, but all changes that occurred during the backup. Then it releases the backup lock. The last piece of information recorded is any metadata for the backup itself, including its timing and contents in the final redo log. That completes the backup operation. 16:30 Nikita: And where are the files stored? Perside: The files contained in the backup are saved to the backup location, which can be on the local system or in network storage. The files contained in the backup location include files from the MySQL data directory. Some raw files include the InnoDB tablespace, plus any InnoDB file-per-table tablespace files, and InnoDB log files. Other files might include data files belonging to other storage engines, perhaps MyISAM files. The various log files and instance configuration files are also retained. 17:20 Lois: What steps do you follow to restore a MySQL Enterprise Backup, and how do you guarantee consistency, especially when dealing with incremental backups? Perside: To restore from a backup using MySQL Enterprise Backup, you must first remove any previous files from the data directory. The restore process will fail if you attempt to restore over an existing system or backup. Then you restore the database with appropriate options. If you only restore a single backup, you can use copy-back-and-apply-log to ensure that the restored system is in a consistent state. If you perform a full backup and subsequent incremental backups, you might need to restore multiple times using copy-back, and then use copy-back-and-apply-log only for the final consistent restore operation. The restored server might be on the same host or might be a different host with different configuration.
This means that you might have to change some configuration on the restored server, including the operating system ownership of the restored data directory and various MySQL configuration files. If you want to retain the MySQL configuration files from the source server to reproduce on a new server, you should copy those files separately. MySQL Enterprise Backup focuses on the data rather than the server configuration. It does, however, produce configuration files appropriate for the backup. These are similar to the MySQL configuration files, but only contain options relevant for the backup process itself. There's also variables that have been changed to non-default values and all global variable values. These files must be renamed and possibly edited before they are suitable to become configuration files in the newly restored server. For example, the mysqld-auto.cnf file contains a JSON-formatted set of persisted variables. The backup process stores this as the newly named backup mysqld-auto.cnf. If you want to use it in the restored server, you must rename it and place it in the appropriate location so that the restored server can read it. This also applies in part to the auto.cnf file, which contain identifying information for the server. If you are replacing the original server or restoring on the same host, then you can keep the original values. However, this information must be unique within a network. So, if you are restoring this backup to create a replica in a replication topology, you must not include that file and instead start MySQL without it so that it creates its own unique identifying information. 21:14 Nikita: Let’s discuss securing and optimizing backups. How does MySQL Enterprise Backup handle encryption and compression, and what are the critical considerations for each? Perside: You can encrypt backups so that they are secure while moving them around or archiving them. The encrypt option performs the encryption. And you can specify the encryption key either on the command line as a string or a key file that has been generated with some cryptographic algorithm. Encryption only applies to image files, not to backup directories. You can also compress backup with different levels of compression, with higher levels requiring more CPU, but resulting in greater savings in storage. Compression only works with InnoDB data files. If your organization has media management software for performing backups, perhaps to a tape array, then you can use the SBT interface supported in MySQL Enterprise Backup. 22:34 Lois: Before we wrap up, could you share how MySQL Enterprise Backup facilitates the management of backups across a multi-server environment? Perside: As an enterprise solution, it's easy to run MySQL Enterprise Backup in a multi-server environment. We've already mentioned backing up to cloud storage, but you can, of course, back up to a directory or image on network storage that can be mounted locally, perhaps with NFS or some other file system. The "with time" option enables multiple backups within the same backup directory, where each in its own subdirectory named with the timestamp. This is especially useful when you want to run the same backup script repeatedly. 23:32 Lois: Thank you for that detailed overview, Perside. This wraps up our discussion of the various backup types, their pros and cons, and how to select the right option for your needs. 
In our next session, we’ll explore the different MySQL monitoring strategies and look at the features as well as benefits of HeatWave. Nikita: And if you want to learn more about the topics we discussed today, head over to mylearn.oracle.com and take a look at the MySQL 8.4 Essentials course. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 24:06 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
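As a companion to Perside's walkthrough of binary logs and the backup lock in this episode, here is a minimal sketch of the related SQL. The log file name and date are hypothetical, and MySQL Enterprise Backup issues the equivalent of the backup lock for you; it is shown here only to illustrate what happens under the covers.

    -- Rotate to a new binary log file and list the files available for copying
    FLUSH BINARY LOGS;
    SHOW BINARY LOGS;

    -- Remove old log files, either up to a named file or by age
    PURGE BINARY LOGS TO 'binlog.000042';
    PURGE BINARY LOGS BEFORE '2025-02-01 00:00:00';

    -- Current coordinates (log file, position, and GTID set), for example after
    -- restoring a full backup and before replaying later events with mysqlbinlog
    SHOW BINARY LOG STATUS;

    -- The low-impact, instance-level backup lock mentioned in the episode;
    -- it blocks DDL but not ordinary reads and writes (requires BACKUP_ADMIN)
    LOCK INSTANCE FOR BACKUP;
    -- ... copy the files or take the snapshot ...
    UNLOCK INSTANCE;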
MySQL Backup - Part 1
02/11/2025
Join Lois Houston and Nikita Abraham as they kick off a two-part episode on MySQL backups with MySQL expert Perside Foster. In this conversation, they explore the critical role of backups in data recovery, error correction, data migration, and more. Perside breaks down the differences between logical and physical backups, discussing their pros and cons, and shares valuable insights on how to create a reliable backup strategy to safeguard your data. MySQL 8.4 Essentials: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead of Editorial Services. Nikita: Hi everyone! This is Episode 6 in our series on MySQL, and today we’re focusing on how to back up our MySQL instances. This is another two-parter and we’ve got Perside Foster back with us. 00:49 Lois: Perside is a MySQL Principal Solution Engineer at Oracle and she’s here to share her insights on backup strategies and tools. In this episode, we’ll be unpacking the types of backups available and discussing their pros and cons. Nikita: But first let’s start right at the beginning. Perside, why is it essential for us to back up our databases? 01:10 Perside: The whole point of a database is to store and retrieve your business data, your intellectual property. When you back up your data, you are able to do disaster recovery so that your business can continue after some catastrophic event. You can recover from error and revert to a previous known good version of the data. You can migrate effectively from one system to another, or you can create replicas for load balancing or parallel system. You can retain data for archival purposes. Also, you can move large chunks of data to other systems, for example, to create a historical reporting application. And then you can create test environments for applications that are in development and that need real world test data. 02:10 Lois: Yes, and creating a robust backup strategy takes planning, doesn’t it? Perside: As with any complex business critical process, there are challenges with coming up with a backup strategy that you can trust. This requires some careful planning. Any backup process needs to read the data. And in a production system, this will involve adding input/output operations to what might be an already busy system. The resources required might include memory or disk I/O operation and of course, you'll want to avoid downtime, so you might need to schedule the backup for a time when the system is not at peak usage. You'll also need to consider whether the backup is on network storage or some local storage so that you don't exceed limitations for those resources. It isn't enough just to schedule the backup. You'll also need to ensure that they succeed, which you can do with monitoring and consistency check. No backup is effective unless you can use it to restore your data, so you should also test your restore process regularly. 
If you have business requirements or regulatory commitments that control your data storage policies, you need to ensure your backups also align with those policies. Remember, every backup is a copy of your data at that moment in time. So it is subject to all of your data retention policies, just like your active data. 04:02 Nikita: Let’s talk backup types. Perside, can you break them down for us? Perside: The first category is logical backup. A logical backup creates a script of SQL statements that will re-create the data structure and rows of the live database. The script can be moved to another server as required. And because it's a script, it needs to be created by and executed on a running server. Because of this, the backup process takes up resources from the source server and is usually slower than a physical media backup. 04:45 Nikita: Ok… what’s the next type? Perside: The next category is physical backup. This is a backup of the actual data files in the server. Bear in mind that the file copy process takes time, and if the database server is active during that time, then the later parts of the copied data will be inconsistent with those parts copied earlier. Ideally, the file must be stable during the backup so that the database state at the start of the copy process is consistent with the state at the end. If there is inconsistency in the data file, then MySQL detects that when the server starts up and it performs a crash recovery. From MySQL’s perspective, there is no difference between a database backup copied from a running server and restarting a server after a crash. In each case, the data files were not saved in a consistent state and crash recovery can take a lot of time on large databases. 06:02 Lois: I see… how can MySQL Enterprise Backup help with this? Perside: MySQL Enterprise Backup has features that enable a consistent backup from a running server. If you create file system copies, either by copying the data files or by performing a file system snapshot, then you must either shut the server down before the copy or undergo crash recovery on the server that starts with those copied files. 06:35 Lois: And aside from logical and physical backups, are there other techniques to back up data? Perside: The binary log enables point-in-time recovery. You can enable replication in a couple of ways. If you start replication and then stop it at a particular time, the replica effectively contains a live backup of the data at the time that you stopped replication. You can also enable a defined replication lag so that the replica is always a known period of time behind the production database. You can also use transportable tablespaces, which are tables or sets of tables in a specific file that you can copy to another server.
Perside: We can use the different backup types to come up with an effective backup strategy based on how we intend to restore the data. A full backup is a complete copy of the database at some point in time. This can take a lot of time to complete and to restore. An incremental backup contains only the changes since the last backup, as recorded in the binary log files. To restore an incremental backup, you must have restored the previous full backup and any incremental backups taken since then. For example, you might have four incremental backups taken after the last full backup. Each incremental backup contains only the changes since the previous backup. If you want to restore to the point at which you took the fourth incremental backup, then you must restore the full backup and each incremental backup in turn. A differential backup contains all changes since the last full backup. It contains only those portions of the database that are different from the full backup. Over time, the differential backup takes longer because it contains more changes. However, it is easier to restore because if you want to restore to the point at which you took a particular differential backup, you must restore the last full backup and only the differential backup that you require. You can ignore the intermediate differential backups. 10:13 Lois: Can you drill into the different types of backups and explain how each technique is used in various situations? Perside: One of the physical backup techniques is taking a snapshot of the storage medium. The advantages of a snapshot include its quickness. A snapshot is quick to create and restore. It is well-suited to situations where you need to quickly revert to a previous version of the database. For example, in a development environment. A storage snapshot is often a feature of the underlying file system. Linux supports logical volume management or LVM, and many storage area networks or network-attached storage platforms have native snapshot features. You can also use a storage snapshot to supplement a more scheduled logical backup structure. This way, the snapshot enables quick reversion to a previous state, and the logical backup can be used for other purposes, such as archiving or disaster recovery. 11:28 Nikita: Are there any downsides to using snapshots? Perside: The first one involves issues with consistency. Because taking a snapshot is quick and does not cause a database performance hit, you might take the snapshot while the system is running. When you restore such a snapshot, MySQL must perform a crash recovery. If you want a consistent snapshot, you must shut down MySQL in advance. Another problem is that the snapshot is a copy of the file system and not of the database. So if you want to transfer it to another system, you must create a database backup from the storage. This adds steps and time. A snapshot records the state of the disk at a specific point in time. Initially, the snapshot is practically empty. When a data page changes, the original version of that page is written to the snapshot. Over time, the snapshot storage grows as more data pages are modified. So multiple snapshots result in multiple writes whenever a snapshot data page is changed. To avoid performance deterioration, you should remove or release snapshots when they are no longer in use. Also, because snapshots are tied to the storage medium, they're not suited to moving backups between systems. 13:03 Lois: How about logical backups? How do we create those?
Perside: The mysqldump utility has long been a standard way to create logical backups. It creates a script made up of the SQL statements that create the data and structure in a database or server. 13:21 Nikita: Perside, what are the advantages and disadvantages of mysqldump? Perside: It is an excellent solution for preserving the database structure or for backing up small databases. Logical backups naturally require that the server is running. And they use system resources to produce the SQL statements, so they are less suitable for very large databases. The output is a human-readable text file with SQL statements that you can edit as a text file. It can be managed by a source code management system. This allows you to maintain a known good version of the database structure, one that matches your application source code version, which can also include sample data. The mysqldump disadvantages are that it needs to run against an active server. So if your production server is busy, you must take action to ensure a consistent backup. This requires locking tables or using the --single-transaction option, which can result in application delays as the backup completes in a consistent way. Mysqldump does not track changes since the last backup, so it has no way of recording only those rows that have changed. This means it's not suited to perform differential or incremental backups. The scripts must be executed against a running server, so it is slower to restore than using a data dump or physical backup. Additionally, if the database structure has indexes or foreign keys, these constraints must be checked and updated as the data is imported. You can disable these checks during the import but must handle any risks that come from doing so. Because the backup is nothing more than an SQL script, it is easy to restore. You can simply use the MySQL client or any other client tool that can process scripts. 15:46 Nikita: Is there an alternative tool for logical backups? Perside: MySQL Shell is another utility that supports logical backup and restore. Unlike mysqldump, it dumps data in a form that can be processed in parallel, which makes it much faster to use for larger data sets. This enables it to export to or import from remote storage where it can stream data without requiring the whole file before starting the input. It can process multiple chunks of imported data in parallel, and you can monitor progress as it completes. You can also pause an import and resume it later, for example, in the event of a network outage. You can dump and restore table structures, including indexes and primary keys. The utilities in MySQL Shell are exposed through functions. The dumpInstance and dumpSchema utilities back up the whole server or specified schemas respectively. And loadDump is how you restore from such a dump. 17:07 Lois: Thanks for that rundown, Perside! This concludes our first part on MySQL backups. Next week, we’ll take a look at advanced backup methods and the unique features of MySQL Enterprise Backup. Nikita: And if you want to learn more about everything we discussed today, head over to mylearn.oracle.com and explore the MySQL 8.4 Essentials course. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 17:37 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
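One of the techniques Perside mentions in this episode is keeping a replica deliberately behind the source so that it acts as a rolling safety net. A minimal sketch of configuring such a delayed replica is below; the one-hour delay is only an example.

    -- On the replica: apply changes one hour after they were committed on the source
    STOP REPLICA SQL_THREAD;
    CHANGE REPLICATION SOURCE TO SOURCE_DELAY = 3600;
    START REPLICA SQL_THREAD;

    -- The SQL_Delay and SQL_Remaining_Delay fields report the configured and remaining delay
    SHOW REPLICA STATUS;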
MySQL Security - Part 2
02/04/2025
Picking up from Part 1, hosts Lois Houston and Nikita Abraham continue their deep dive into MySQL security with MySQL Solution Engineer Ravish Patel. In this episode, they focus on user authentication techniques and tools such as MySQL Enterprise Audit and MySQL Enterprise Firewall. MySQL 8.4 Essentials: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi everyone! Last week, we began exploring MySQL security, covering regulatory compliance and common security threats. Nikita: This week, we’re continuing the conversation by digging deeper into MySQL’s user authentication methods and taking a closer look at some powerful security tools in the MySQL Enterprise suite. 00:57 Lois: And we’re joined once again by Ravish Patel, a MySQL Solution Engineer here at Oracle. Welcome, Ravish! How does user authentication work in MySQL? Ravish: MySQL authenticates users by storing account details in a system database. These accounts are authenticated with three elements: a username and a hostname, commonly separated with an @ sign, along with a password. The account identifier has the username and host. The host identifier specifies where the user connects from. It specifies either a DNS hostname or an IP address. You can use a wildcard as part of the hostname or IP address if you want to allow this username to connect from a range of hosts. If the host value is just the percent sign wildcard, then that username can connect from any host. Similarly, if you create the user account with an empty host, then the user can connect from any host. 01:55 Lois: Ravish, can MySQL Enterprise Edition integrate with an organization’s existing accounts? Ravish: MySQL Enterprise authentication integrates with existing authentication mechanisms in your infrastructure. This enables centralized account management, policies, and authentication based on group membership and assigned corporate roles, and MySQL supports a wide range of authentication plugins. If your organization uses Linux, you might already be familiar with PAM, also known as Pluggable Authentication Modules. This is a standard interface in Linux and can be used to authenticate to MySQL. Kerberos is another widely used standard for authentication using a centralized service. The FIDO Alliance, short for Fast Identity Online, promotes an interface for passwordless authentication. This includes methods for authenticating with biometrics or USB security tokens. And MySQL even supports logging into centralized authentication services that use LDAP, including having a dedicated plugin to connect to Windows domains. 03:05 Nikita: So, once users are authenticated, how does MySQL handle user authorization? Ravish: The MySQL privilege system uses the GRANT keyword. This grants some privilege X on some object Y to some user Z, and optionally gives you permission to grant the same privilege to others.
These can be global administrative privileges that enable users to perform tasks at the server level, or they can be database-specific privileges that allow users to modify the structure or data within a database. 03:39 Lois: What about database privileges? Ravish: Database privileges can be fine-grained from the largest to the smallest. At the database level, you can permit users to create, alter, and delete whole databases. The same privileges apply at the table, view, index, and stored procedure levels. And in addition, you can control who can execute stored procedures and whether they do so with their own identity or with the privileges of the procedure's owner. For tables, you can control who can select, insert, update, and delete rows in those tables. You can even specify, at the column level, who can select, insert, and update data in those columns. Now, any privilege system carries with it the risk that you might forget an important password and lock yourself out. In MySQL, if you forget the password to the root account and don't have any other admin-level accounts, you will not be able to administer the MySQL server. 04:39 Nikita: Is there a way around this? Ravish: There is a way around this as long as you have physical access to the server that runs the MySQL process. If you launch the MySQL process with the --skip-grant-tables option, then MySQL will not load the privilege tables from the system database when it starts. This is clearly a dangerous thing to do, so MySQL also implicitly disables network access when you use that option to prevent users from connecting over the network. When you use this option, any client connection to MySQL succeeds and has root privileges. This means you should control who has shell access to the server during this time, and you should restart the server or re-enable the privilege system with the command FLUSH PRIVILEGES as soon as you have changed the root password. The privileges we have already discussed are built into MySQL and are always available. MySQL also makes use of dynamic privileges, which are privileges that are enabled at runtime and which can be granted once they are enabled. In addition, plugins and components can define privileges that relate to features of those plugins. For example, the Enterprise Firewall plugin defines the firewall admin privilege, and the audit admin privilege is defined by the Enterprise Audit plugin. 06:04 Are you working towards an Oracle Certification this year? Join us at one of our certification prep live events in the Oracle University Learning Community. Get insider tips from seasoned experts and learn from others who have already taken their certifications. Go to community.oracle.com/ou to jump-start your journey towards certification today! 06:28 Nikita: Welcome back! Ravish, I want to move on to MySQL Enterprise security tools. Could you start with MySQL Enterprise Audit? Ravish: MySQL Enterprise Audit is an extension available in Enterprise Edition that makes it easier to comply with regulations that require observability and control over who does what in your database servers. It provides visibility of connections, authentication, and individual operations. This is a necessary part of compliance with various regulations, including GDPR, NIS2, HIPAA, and so on. You can control who has access to the audited events so that the audits themselves are protected. As well as configuring what you audit, you can also configure rotation policies so that unmonitored audit logs don't fill up your storage space.
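As a rough sketch of the setup Ravish outlines, enabling the legacy audit plugin and attaching a simple filter on an Enterprise Edition server might look something like this. Treat the library name and filter definition as assumptions; the exact installation steps are covered in the MySQL Enterprise Audit documentation:

-- Load the audit plugin (Enterprise Edition; the library name can differ by platform)
INSTALL PLUGIN audit_log SONAME 'audit_log.so';

-- Define a filter that logs everything, then attach it to all accounts
SELECT audit_log_filter_set_filter('log_all', '{ "filter": { "log": true } }');
SELECT audit_log_filter_set_user('%', 'log_all');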
The configuration can be performed while the server is running with minimal effect on production applications. You don't need to restart the server to enable or disable auditing or to change the filtering options. You can output the audit logs in either XML or JSON format, depending on how you want to perform further searching and processing. If you need it, you can compress the logs to save space, and you can encrypt the logs to provide added protection for audited identities and data modifications. The extension is available either as a component or, if you prefer, as the legacy plugin. 07:53 Lois: But how does it all work? Ravish: Well, first, as a DBA, you'll enable the audit plugin and attach it to your running server. You can then configure filters to audit your connections and queries and record who does what, when they do it, and so on. Then once the system is up and running, it audits whenever a user authenticates, accesses data, or even when they perform schema changes. The logs are recorded in whatever format you have configured. You can then monitor the audited events at will with MySQL tools such as Workbench or with any software that can view and manipulate XML or JSON files. You can even configure Enterprise Audit to export the logs to an external Audit Vault, enabling collection and archiving of audit information from all over your enterprise. In general, you won't audit every action on every server. You can configure filters to control what specific information ends up in the logs. 08:50 Nikita: Why is this sort of filtering necessary, Ravish? Ravish: As a DBA, this enables you to create a custom-designed audit process to monitor things that you're really interested in. Rules can be general or very fine-grained, which enables you to reduce the overall log size, reduce the performance impact on the database server and underlying storage, and make it easier to process the log file once you've gathered data. Filters are configured in an easy-to-use JSON format. 09:18 Nikita: So what information is audited? Ravish: You can see who did what, when they did it, what commands they used, and whether they succeeded. You can also see where they connected from, which can be useful when identifying man-in-the-middle attacks or stolen credentials. The log also records any available client information, including software versions and information about the operating system and much more. 09:42 Lois: Can you tell us about MySQL Enterprise Firewall, which I understand is a specific tool to learn and protect the SQL statements that MySQL executes? Ravish: MySQL Enterprise Firewall can be enabled on MySQL Enterprise Edition with a plugin. It uses an allow list to set policies for acceptable queries. You can apply this allow list to either specific accounts or groups. Queries are protected in real time. Every query that executes is checked by the server to make sure that it conforms to query structures that are defined in the allow list. This makes it very useful for blocking SQL injection attacks. Only transactions that match well-formed queries in the allow list are permitted. So any attempt to inject other types of SQL statements is blocked. Not only does it block such statements, but it also sends an alert to the MySQL error log in real time. This gives you visibility into any security gaps in your applications. The Enterprise Firewall has a learning mode during which you can train the firewall to identify the correct sort of query.
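As a sketch of what that training workflow might look like in practice, using the firewall's account-profile stored procedures (the account name here is illustrative):

-- Put one account's firewall profile into recording (learning) mode
CALL mysql.sp_set_firewall_mode('appuser@192.0.2.%', 'RECORDING');

-- Run the known-good application workload, then switch to enforcement
CALL mysql.sp_set_firewall_mode('appuser@192.0.2.%', 'PROTECTING');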
This makes it easy to create the allow list based on a known good workload that you can create during development before your application goes live. 10:59 Lois: Does MySQL Enterprise Firewall operate seamlessly and transparently with applications? Ravish: Your application simply submits queries as normal and the firewall monitors incoming queries with no application changes required. When you use the Enterprise Firewall, you don't need to change your application. It can submit statements as normal to the MySQL server. This adds an extra layer of protection in your applications without requiring any additional application code so that you can protect against malicious SQL injection attacks. This not only applies to your application, but also to any client that a configured user runs. 11:37 Nikita: How does this firewall system work? Ravish: When the application submits a SQL statement, the firewall verifies that the statement is in a form that matches the policy defined in the allow list before it passes to the server for execution. It blocks any statement that is in a form that's outside of policy. In many cases, a badly formed query can only be executed if there is some bug in the application's data validation. You can use the firewall’s detection and alerting features to let you know when it blocks such a query, which will help you quickly detect such bugs, even while the firewall continues to block the malicious queries. 12:14 Lois: Can you take us through some of the encryption and masking features available in MySQL Enterprise Edition? Ravish: Transparent data encryption is a great way to protect against disclosure through a physical security breach. If someone gains access to the database files on the file system through a vulnerability of the operating system, or even if you've had a laptop stolen, your data will still be protected. This is called Data at Rest Encryption. It protects not only the data rows in tablespaces, but also other locations that store some version of the data, such as undo logs, redo logs, binary logs, and relay logs. It uses strong encryption based on the AES-256 algorithm. Once we enable transparent data encryption, it is, of course, transparent to the client software, applications, and users. Applications continue to submit SQL statements, and the encryption and decryption happen on the fly. The application code does not need to change. All data types, table structure, and database names remain the same. It's even transparent to the DBAs. The same data types, table structure, and so on are still how the DBA interacts with the system while creating indexes, views, and procedures. In fact, DBAs don't even need to be in possession of any encryption keys to perform their admin tasks. It is entirely transparent. 13:32 Nikita: What kind of management is required for encryption? Ravish: There is, of course, some key management required at the outset. You must keep the keys safe and put policies in place so that you store and rotate keys effectively, and ensure that you can recover those keys in the event of some disaster. This key management integrates with common standards, including KMIP and KMS. 13:53 Lois: Before we close, I want to ask you about the role of data masking in MySQL. Ravish: Data masking is when we replace some part of the private information with a placeholder. You can mask portions of a string based on the string position using the letter X or some other character.
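For illustration, position-based masking of the kind just described might look like this, assuming the MySQL Enterprise Data Masking functions are installed; the input values are made up:

-- Mask everything except the first and last two characters (the interior becomes X)
SELECT mask_inner('555-123-4567', 2, 2);

-- Mask only the outer characters, leaving the middle visible
SELECT mask_outer('555-123-4567', 2, 2);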
You can also create a table that contains a dictionary of suitable replacement words and use that dictionary to mask values in your data. There are specific functions that work with known formats of data, for example, social security numbers as used in the United States, national insurance numbers from the United Kingdom, and Canadian social insurance numbers. You can also mask various account numbers, such as primary account numbers like credit cards or IBAN numbers as used in the European banking system. There are also functions to generate random values, which can be useful in test databases. This might be a random number within some range, or an email address, or a compliant credit card number, or social security number. You can also create random information using the dictionary table that contains suitable example values. 14:58 Nikita: Thank you, Ravish, for taking us through MySQL security. We really cannot overstate the importance of this, especially in today’s data-driven world. Lois: That’s right, Niki. Cyber threats are increasingly sophisticated these days. You really have to be on your toes when it comes to security. If you’re interested in learning more about this, the MySQL 8.4 Essentials course on mylearn.oracle.com is a great next step. Nikita: We’d also love to hear your thoughts on our podcast so please feel free to share your comments, suggestions, or questions by emailing us at [email protected]. That’s [email protected]. In our next episode, we’ll journey into the world of MySQL backups. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 15:51 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
MySQL Security - Part 1
01/28/2025
MySQL Security - Part 1
Security takes center stage in this episode as Lois Houston and Nikita Abraham are joined by MySQL Solution Engineer Ravish Patel. Together, they explore MySQL’s security features, addressing key topics like regulatory compliance. Ravish also shares insights on protecting data through encryption, activity monitoring, and access control to guard against threats like SQL injection and malware. MySQL 8.4 Essentials: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me today is Nikita Abraham, Team Lead of Editorial Services. Nikita: Hey everyone! In our last episode, we took a look at MySQL database design. Today is the first of a two-part episode on MySQL security. Lois: In Part 1, we’ll discuss how MySQL supports regulatory compliance and how to spot and handle common security risks. 00:55 Nikita: Joining us today is Ravish Patel, a MySQL Solution Engineer at Oracle. Hi Ravish! Let’s start by talking about how MySQL supports regulatory compliance. 01:06 Ravish: Some of the most important international regulations that we have surrounding data and organizations include the GDPR, HIPAA, Sarbanes-Oxley, the UK Data Protection Act, and NIS2. Although each regulatory framework differs in the details, in general, you must be able to comply with certain key requirements, all of which are enabled by MySQL. First, you must be able to monitor user activity on the system, which includes keeping track of when new users are created, when the schema changes, and when backups are taken and used. You must protect data, for example, by ensuring that databases that are stored on disk are encrypted at rest and ensuring that only authorized users have privileges to access and modify the data. You must have the appropriate retention policies in place for your data, ensuring that backups are held securely and used only for the purpose intended. You must be able to audit access to the data so that you can trace which users gained access to records or when they were modified. All of these facilities are available in MySQL, either as part of the core Community Edition features or made available through Enterprise features. 02:21 Lois: What kind of risks might we encounter, Ravish, and how can we address them? Ravish: As your system grows in complexity, you're likely going to have more risks associated with it. Some of those risks are associated with the human factors that come with any computer system. These might be errors that are introduced when people perform work on the system, either administrative work on the environment or database, or work that developers and testers perform when working on a changing system. You might even have malicious users trying to exploit the system, or good-faith users or support staff who make changes without proper consideration or protection from knock-on effects. At the foundation are the necessary components of the system, each of which might be vulnerable to human error or malicious actors.
Every piece of the system exposes possible risks, whether that's the application presented to users, the underlying database, the operating system or network that it works on, or processes such as backups that place copies of your data in other locations. More complex environments add more risks. High availability architectures multiply the number of active systems. Consolidating multiple application databases on a single server exposes every database to multiple vectors for bugs and human error. Older, less well-supported applications might pose more maintenance challenges. Engaging external contractors might reduce your control over authorized users. And working in the cloud can increase your network footprint and your reliance on external vendors. 03:53 Nikita: What are the risks that specifically impact the database? Ravish: The database server configuration might not be optimal, and it can be changed by any user with the appropriate access. To mitigate this risk, you might enable version control of the configuration files and ensure that only certain users are authorized. Application and administrator accounts might have more data privileges than required, which adds the risk of human error or malicious behavior. To mitigate this, you should ensure that users are only granted necessary permissions. In particular, structural modifications and administrative tasks might be more widely accessible than desired. Not every developer needs full administrative rights on a database. And certainly, an application should not have such privileges. You should limit administrative privileges only to those users who need that authorization. 04:45 Nikita: Okay, quick question, Ravish. How do authentication and password security fit into this picture? Ravish: Authentication is often a weak point. And password security is one of the most common issues in large applications. Ensure that you have strong password policies in place. And consider using authentication mechanisms that don't solely rely on passwords, such as pass-through authentication or multifactor authentication. 05:11 Lois: So, it sounds like auditing operations are a critical part of this process, right? Ravish: When something bad happens, you can only repair it or learn from it if you know exactly what has happened and how. You should ensure that you audit key operations so you can recover from error or malicious actions. If a developer laptop is lost or stolen or someone gains access to an underlying operating system, then your data might become vulnerable. You can mitigate this by encrypting your data in place. This also applies to backups and, where possible, securing the connection between your application and the database to encrypt data in flight. 05:54 Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure? You’ll find training on everything from multicloud, database, networking, and security to artificial intelligence and machine learning, all free for our subscribers. So, what are you waiting for? Pick a topic, head over to mylearn.oracle.com and get started. 06:18 Nikita: Welcome back! Before the break, we touched on the importance of auditing. Now, Ravish, what role does encryption play in securing these operations? Ravish: Encryption is only useful if the keys are secure. Make sure to keep your encryption assets secure, perhaps by using a key vault. Every backup that you take contains a copy of your data. If these backups are not kept securely, then you are at risk, just as if your database wasn't secure.
So keep your backups encrypted. 06:47 Lois: From what we’ve covered so far, it’s clear that monitoring is essential for database security. Is that right? Ravish: Without monitoring, you can’t track what happens on an ongoing basis. For example, you will not be aware of a denial-of-service attack until the application slows down or becomes unavailable. If you implement monitoring, you can identify a compromised user account or unusual query traffic as it happens. A poorly coded application might enable queries that do more than they should. A database firewall can be configured to permit only queries that conform to a specific pattern. 07:24 Nikita: There are so many potential types of attacks out there, right? Could you tell us about some specific ones, like SQL injection and buffer overflow attacks? Ravish: A SQL injection attack is a particular form of attack that modifies a SQL command to inject a different command from the one that was intended by the developer. You can configure an allow list in a database firewall to block such queries and perform comprehensive input validation inside the application so that such queries cannot be inserted. A buffer overflow attack attempts to input more data than can fit in the appropriate memory location. These are usually possible when there is an unpatched bug in the application or even in the database or operating system software. Validation and the database firewall can catch this sort of attack before it even hits the database. And frequent patching of the platforms can mitigate risks that come from unpatched bugs. Malicious acts from inside the organization might also be possible. So good access control and authorization can prevent this. And monitoring and auditing can detect it if it occurs. 08:33 Lois: What about brute force attacks? How do they work? Ravish: A brute force attack is when someone tries passwords repeatedly until they find the correct one. MySQL can lock out an account if there have been too many incorrect attempts. Someone who has access to the physical network on which the application and database communicate can monitor or eavesdrop on that network. However, if you encrypt the communications in flight, perhaps by using TLS or SSL connections, then that communication cannot be monitored. 09:04 Nikita: How do the more common threats like malware, Trojan horses, and ransomware impact database security? Ravish: Malware, ransomware, and Trojan horses can be a problem if they get to the server platforms or if client systems are compromised and have too many permissions. If the account that is compromised has only limited access and if the database is encrypted in place, then you can minimize the associated risks even if such an event occurs. There are also several risks directly associated with people who want to do harm. So it's vital to protect personal information from any kind of disclosure, particularly sensitive information, such as credit card numbers. Encryption and access control can protect against this. 09:49 Lois: And then there are denial-of-service and spoofing attacks as well, right? How can we prevent those? Ravish: A denial-of-service attack prevents users from accessing the system. You can prevent any single user from performing too many queries by setting per-user resource limits. And you can limit the total number of connections as well. Sometimes, a user might gain access to a privileged level that is not appropriate.
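To tie a couple of those mitigations to concrete syntax, account locking against brute-force attempts and per-user resource limits against denial of service might be configured like this. The account name and thresholds are illustrative:

-- Lock an account for one day after three consecutive failed logins
CREATE USER 'appuser'@'%' IDENTIFIED BY 'StrongPassword#1'
  FAILED_LOGIN_ATTEMPTS 3 PASSWORD_LOCK_TIME 1;

-- Cap per-user activity to blunt denial-of-service attempts
ALTER USER 'appuser'@'%'
  WITH MAX_QUERIES_PER_HOUR 10000 MAX_USER_CONNECTIONS 20;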
Password protection, multifactor authentication, and proper access control will protect against that kind of privilege escalation. And auditing will help you discover if it has occurred. A spoofing attack is when an attacker intercepts and uses information to authenticate as another user. This can be mitigated with strong access control and password policies. An attacker might attempt to modify or delete data or even auditing information. Again, this can be mitigated with tighter access controls and caught with monitoring and auditing. If the attack is successful, you can recover from it easily if you have a strong backup strategy in place. 10:50 Nikita: Ravish, are there any overarching best practices for keeping a database secure? Ravish: The MySQL installation itself should be kept up-to-date. This is easiest if you install from a package manager on Windows or Linux. Your authentication systems should be kept strong with password policies or additional authentication systems that supplement or replace passwords entirely. Authorization should be kept tightly controlled by minimizing the number of active accounts and ensuring that those accounts have only the minimal privileges. You should control and monitor changes on the system. You can limit such changes with the database firewall and with tight access controls, and observe changes with monitoring, auditing, and logging. Data encryption is also necessary to protect data from disclosure. MySQL supports encryption in place with Transparent Data Encryption, also known as TDE, and a variety of encryption functions and features. And you can encrypt data in flight with SSL or TLS. And of course, it's not just about the database itself but how it's used in the wider enterprise. You should ensure that replicas are secure and that your disaster recovery procedures do not open you up to additional risks. And keep your backups encrypted. 12:06 Lois: Is there anything else we should keep in mind as part of these best practices? Ravish: The database environment is also worth paying attention to. The operating system and network should be as secure as you can keep them. You should keep your platform software patched so that you are protected from known exploits caused by bugs. If your operating system has hardening guidelines, you should always follow those. And the Center for Internet Security maintains a set of benchmarks with configuration recommendations for many products designed to protect against threats. 12:38 Nikita: And that’s a wrap on Part 1! Thank you, Ravish, for guiding us through MySQL’s role in ensuring compliance and telling us about the various types of attacks. If you want to dive deeper into these topics, head over to mylearn.oracle.com to explore the MySQL 8.4 Essentials course. Lois: In our next episode, we’ll continue to explore how user authentication works in MySQL and look at a few interesting MySQL Enterprise security tools that are available. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 13:12 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
MySQL Database Design
01/21/2025
MySQL Database Design
Explore the essentials of MySQL database design with Lois Houston and Nikita Abraham, who team up with MySQL expert Perside Foster to discuss key storage concepts, transaction support in InnoDB, and ACID compliance. You’ll also get tips on choosing the right data types, optimizing queries with indexing, and boosting performance with partitioning. MySQL 8.4 Essentials: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast. I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me today is Nikita Abraham, Team Lead of Editorial Services. Nikita: Hi everyone! Last week, we looked at installing MySQL and in today’s episode, we’re going to focus on MySQL database design. Lois: That’s right, Niki. Database design is the backbone of any MySQL environment. In this episode, we’ll walk you through how to structure your data to ensure smooth performance and scalability right from the start. 00:58 Nikita: And to help us with this, we have Perside Foster joining us again. Perside is a MySQL Principal Solution Engineer at Oracle. Hi Perside, let’s start with how MySQL handles data storage on the file system. Can you walk us through the architecture? Perside: In the MySQL architecture, the storage engine layer is part of the server process. Logically speaking, it comes between the parts of the server responsible for inputting, parsing, and optimizing SQL and the underlying file systems. The standard storage engine in MySQL is called InnoDB. But other storage engines are also available. InnoDB supports many of the features that are required by a production database system. Other storage engines have different sets of features. For example, MyISAM is a basic, fast storage engine but has fewer reliability features. NDB Cluster is a scalable distributed storage engine. It runs on multiple nodes and uses additional software to manage the cluster. 02:21 Lois: Hi Perside! Going back to InnoDB, what kind of features does InnoDB offer? Perside: The storage engine supports many concurrent users. It also keeps their changes separate from each other. One way it achieves this is by supporting transactions. Transactions allow users to make changes that can be rolled back if necessary and prevent other users from seeing those changes until they are committed or saved persistently. The storage engine also enables referential integrity. This is to make sure that data in a dependent table refers only to valid source data. For example, you cannot insert an order for a customer that does not exist. It stores raw data on disk in a B-tree structure and uses fast algorithms to insert rows in the correct place. This is done so that the data can be retrieved quickly. It uses a similar method to store indexes. This allows you to run queries based on a sort order that is different from the row's natural order. InnoDB has its own buffer pool. This is a memory cache that stores recently accessed data. And as a result, queries on active data are much faster than queries that read from the disk.
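For reference, you can see which storage engines a server offers and how large the InnoDB buffer pool is with statements like these; the output will vary by installation:

-- List available storage engines and show which one is the default
SHOW ENGINES;

-- Check the size of the InnoDB buffer pool, in bytes
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';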
InnoDB also has performance features such as multithreading and bulk insert optimization. 04:13 Lois: So, would you say InnoDB is generally the best option? Perside: When you install MySQL, the standard storage engine is InnoDB. This is generally the best choice for production workloads that need both reliability and high performance. It supports transaction syntax, such as commit and rollback, and is fully ACID compliant. 04:41 Nikita: To clarify, ACID stands for Atomicity, Consistency, Isolation, and Durability. But could you explain what that means for anyone who might be new to the term? Perside: Atomic means that your transaction can contain multiple statements, but the transaction as a whole is treated as one change that succeeds or fails. Consistent means that transactions move the system from one consistent state to another. Isolated means that changes made during a transaction are isolated from other users until that transaction completes. And durable means that the server ensures that the transaction is persisted or written to disk once it completes. 05:38 Lois: Thanks for breaking that down for us, Perside. Could you tell us about the data encryption and security features supported by InnoDB? Perside: InnoDB supports data encryption, which keeps your data secure on the disk. It also supports compression, which saves space at the cost of some extra CPU usage. You can configure an InnoDB cluster of multiple MySQL server nodes across multiple hosts to enable high availability. Transaction support is a key part of any reliable database, particularly when multiple concurrent users can change data. By default, each statement commits automatically so that you don't have to type commit every time you update a row. You can open a transaction with the statement START TRANSACTION or BEGIN, which is synonymous. 06:42 Nikita: Perside, what exactly do the terms "schema" and "database" mean in the context of MySQL, and how do they relate to the storage structure of tables and system-level information? Perside: Schema and database both refer to collections of tables and other objects. On some platforms, a schema might contain databases. In MySQL, the word schema is a synonym for database. In InnoDB and some other storage engines, each database maps to a directory on the file system, typically in the data directory. Each table has its row data stored in a file. In InnoDB, this file is the InnoDB tablespace, although you can choose to store tables in other tablespaces. MySQL uses some databases to store or present system-level information. The mysql and information_schema databases are used to store and present structural information about the server, including authentication settings and table metadata. You can query performance metrics from the performance_schema and sys databases. If you have configured a highly available InnoDB cluster, you can examine its configuration from the MySQL InnoDB cluster metadata database. 08:21 Lois: What kind of data types does MySQL support? Perside: MySQL supports a number of data types with special characteristics. BLOB stands for Binary Large Object. Columns that specify this type can contain large chunks of binary data, for example, JPG pictures or MP3 audio files. You can further specify the amount of storage required by specifying the subtype, for example, TINYBLOB or LONGBLOB. Similarly, you can store large amounts of text data in TEXT, TINYTEXT, and so on.
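As a small sketch of how some of these types appear in a table definition (the table and column names are made up for illustration):

CREATE TABLE documents (
  id        INT PRIMARY KEY AUTO_INCREMENT,
  title     VARCHAR(200),
  thumbnail TINYBLOB,   -- small binary data, such as a thumbnail image
  body      LONGTEXT,   -- large text content
  audio     LONGBLOB    -- large binary data, such as an MP3 file
);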
These types, BLOB and TEXT, share the same characteristic: they are not stored in the same location as other data from the same row. This is to improve performance because many queries against the table do not query BLOB or TEXT data contained within the table. MySQL supports geographic or spatial data and queries on that data. These include ways to represent points, lines, polygons, and collections of such elements. The JSON data type enables you to use MySQL as a document store. A column of this type can contain complete JSON documents in each row. And MySQL has several functions that enable querying and searching for values within such documents. 10:11 Adopting a multicloud strategy is a big step towards future-proofing your business and we’re here to help you navigate this complex landscape. With our suite of courses, you'll gain insights into network connectivity, security protocols, and the considerations of working across different cloud platforms. Start your journey to multicloud today by visiting mylearn.oracle.com. 10:38 Nikita: Welcome back. Perside, how do indexes improve the performance of MySQL queries? Perside: Indexes make it easier for MySQL to find specific rows. This doesn't just speed up queries, but also ensures that newly inserted rows are placed in the best position in the data file so that future queries will find them quickly. 11:03 Nikita: And how do these indexes work exactly? Perside: Indexes work by storing the raw data or a subset of the raw data in some defined order. An index can be ordered on some non-unique value, such as a person's name. Or you can create an index on some value that must be unique within the table, such as an ID. The primary index, sometimes called a clustered index, is the complete table data stored on a unique value called a Primary Key. 11:38 Lois: Ok. And what types of indices are supported by InnoDB? Perside: InnoDB supports multiple index types. Raw data in most secondary indexes is stored in a B-tree structure. This stores data in specific buckets based on the index key using fixed-size data pages. HASH indexes are supported by some storage engines, including the memory storage engine. InnoDB has an adaptive HASH feature, which kicks in automatically for small tables and workloads that benefit from it. Spatial data can be indexed using the RTREE structure. 12:25 Nikita: What are some best practices we should follow when working with indexes in MySQL? Perside: First, you should create a Primary Key for each table. This value is unique for each row and is used to order the row data. InnoDB doesn't require that tables have an explicit Primary Key, but if you don't set one, it creates a hidden Primary Key. Each secondary index is a portion of the data ordered by some other column. And internally, each index entry uses the Primary Key as a lookup back to the rest of the row. If your Primary Key is large or complex, this increases the storage requirement of each index. And every time you modify a row, MySQL must update every affected index in the background. The more indexes you have on a table, the slower every insert operation will be. This means that you should only create indexes that improve query performance for your specific workload. The sys schema and MySQL Enterprise Monitor have features to identify indexes that are unused. Use prefix and compound keys to keep your indexes smaller. A prefix key contains only the first part of a string.
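Here is a brief sketch of the prefix and compound keys Perside describes here and expands on below; the table and column names are illustrative:

-- Prefix key: index only the first 10 characters of a long string column
CREATE INDEX idx_last_name_prefix ON customers (last_name(10));

-- Compound key: index two columns together
CREATE INDEX idx_full_name ON customers (last_name, first_name);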
A prefix key can be particularly useful when you have large amounts of text in an index key and want to index based on only the first few characters. A compound key contains multiple columns, for example, last name and first name. This also speeds up queries where you're looking for only those values because the secondary index can fulfill the query without requiring a lookup back to the primary index. 14:35 Lois: Before we let you go, can you explain what table partitioning is? Perside: Table partitioning is enabled by using a plugin. When you partition a table, you divide its content according to certain rules. You might store portions of the table based on the range of values in a column. For example, storing all sales for 2024 in a single partition. A partition based on a list enables you to store rows with specific values in the partition column. When you partition by hash or key, you distribute rows somewhat evenly between partitions. This means that you can distribute a large table across multiple disks, or you can place more frequently accessed data on faster storage. EXPLAIN works with partitioning. Simply prefix any query that uses partitioned data with EXPLAIN, and the output shows information about how the optimizer will use the partitions. Partitioning is one of the features that is only fully supported in Enterprise Edition. 15:57 Lois: Perside, thank you so much for joining us today. In our next episode, we’ll dive deep into MySQL security. Nikita: And if you want to learn more about what we discussed today, visit mylearn.oracle.com and search for the MySQL 8.4: Essentials course. Until next week, this is Nikita Abraham… Lois: And Lois Houston signing off! 16:18 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Installing MySQL
01/14/2025
Installing MySQL
In this episode, Lois Houston and Nikita Abraham discuss the basics of MySQL installation with MySQL expert Perside Foster. Perside covers every key step, from preparing your environment and selecting the right software, to installing MySQL, setting up secure initial user accounts, configuring the system, and managing updates efficiently. MySQL 8.4 Essentials: Oracle University Learning Community: LinkedIn: X: Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Nikita: Welcome back to another episode of the Oracle University Podcast. I’m Nikita Abraham, Team Lead of Editorial Services with Oracle University, and I’m joined by Lois Houston, Director of Innovation Programs. Lois: Hi everyone! In our last episode, we spoke about the Oracle MySQL ecosystem and its various components. We also discussed licensing, security, and some key tools. What's on the agenda for today, Niki? 00:52 Nikita: Well Lois, today, we’re going beyond tools and features to talk about installing MySQL. Whether you're setting up MySQL for the first time or looking to understand its internal structure a little better, this episode will be a valuable guide. Lois: And we’re lucky to have Perside Foster back with us. Perside is a MySQL Principal Solution Engineer at Oracle. Hi Perside! Say I wanted to get started and install MySQL. What factors should I keep in mind before I do that? 01:23 Perside: The first thing to consider is the environment for the database server. MySQL is supported on many different Linux distributions. You can also run it on Windows or Apple macOS. You can run MySQL on a variety of host platforms. You can use dedicated servers in a server room or virtual machines in a data center. Developers might prefer to deploy on Docker or Kubernetes containers. And don't forget, you can deploy HeatWave, the MySQL cloud version, in many different clouds. MySQL has great multithreading capability. It also has support for Non-Uniform Memory Access, or NUMA. This is particularly important if you run large systems with hundreds of concurrent connections. MySQL's storage engine, InnoDB, makes effective use of your available memory. It stores your active data in a buffer pool. This greatly improves access time compared to reading straight from disk. Of course, SSDs and other solid state media are much faster than hard disks. But don't forget, MySQL can make full use of that performance benefit too. Redundancy is very important for the MySQL server. Hardware with redundant power supplies, storage media, and network connections can make all the difference to your uptime. Without redundancy, a single point of failure will bring down the server if it fails. 03:26 Nikita: Got it. Perside, from where can I download the different editions of MySQL? Perside: Our most popular software is the MySQL Community Edition. It is available at no cost from mysql.com for many platforms. This version is why MySQL is the most popular database for web applications. And it is also open source. MySQL Enterprise Edition is the commercial edition. It is fully supported by Oracle. You can get it from support.oracle.com as an Oracle customer.
If you want to try out the enterprise features but are not yet a customer, you can get the latest version of MySQL as a trial edition from edelivery.oracle.com. Because MySQL is open source, you can get the source code from either mysql.com or GitHub. Most people don't need the source. But any developer who wants to modify the code or even contribute back to the project is welcome to do so. 04:43 Lois: Perside, can you walk us through MySQL’s release model? Perside: This is divided into LTS and Innovation releases, each with a different target audience. LTS stands for long-term support. MySQL 8.4 is an LTS release and will be supported for several years. LTS releases are feature-stable. When you install an LTS release, you can apply future bug fixes and security patches without changing any behavior in the product. The bug fixes and security patches are designed to be backward compatible. This means you can upgrade easily from previous releases. LTS releases come every two years. This allows you to maintain a stable system without having to change your underlying application too frequently. You will not be forced to upgrade after two years. You can continue to enjoy support for an LTS release for up to eight years. Along with LTS releases, we also have Innovation releases. These contain the latest leading-edge features that are developed even in the middle of an LTS cycle. You can upgrade from LTS to Innovation and back again, depending on which features you require in your application. Innovation releases have a much more rapid cadence. You can get the latest features every quarter. This means Innovation releases are supported only for their specific release. So, if you're on the Innovation track, you must upgrade more frequently. All editions of MySQL are shipped as both LTS and Innovation releases. This includes the self-managed editions and also HeatWave in the cloud. You can treat both LTS and Innovation releases as production-ready. This means they are generally available releases. Innovation does not mean beta-quality software. You get the same quality support from Oracle whether you're using LTS or Innovation software. The MySQL client software and other tools will operate with both LTS and Innovation releases. 07:43 Nikita: What are connectors in the context of MySQL? Perside: Connectors are the language-specific software components that connect your application to MySQL. You should use the latest version of connectors. Connectors are also production-ready, generally available software. They will work with any version of MySQL that is supported at the time of the connector's release. 08:12 Nikita: How does MySQL integrate with Docker and other container platforms? Perside: You might already be familiar with the Docker Store. It is used for getting containerized images of software. As an Oracle customer, you might be familiar with My Oracle Support. It provides support and updates in the form of patches for all supported Oracle software. MySQL works well with virtualization and container platforms, including Docker. You can get Community Edition images from Docker Hub. If you are an Enterprise Edition customer, you can get images from the Docker Store, from My Oracle Support, or from the Oracle Container Registry. 09:04 Lois: What resources are available for someone who wants to know more about MySQL? Perside: MySQL has detailed documentation. You should familiarize yourself with the documentation as you prepare to install MySQL.
The reference manual for both Community and Enterprise editions is available at the Developer Zone at dev.mysql.com. Oracle customers also have access to the knowledge base at support.oracle.com. It contains support information on use cases and reference architectures. The product team regularly posts announcements and technical articles to several blogs. These blogs often contain pre-release announcements of upcoming features to help you prepare for your next project. Also, you'll find deep dives into technical topics and complex problems that MySQL solves. This includes some problems specific to highly available architecture. We also feature individual blogs from high-profile members of our team. These include the MySQL Community evangelist lefred. He posts about upcoming events and interesting features. Also, Dimitri Kravchuk offers blogs that provide deep dives into performance. 10:53 Nikita: Ok, now that I have all this information and am prepped and ready, how do I actually install MySQL on my operating system? What’s the process like? Perside: You can install MySQL on various operating systems, depending on your needs. These might include several distributions of Linux or UNIX, Windows, macOS, Oracle Linux based on the Unbreakable Enterprise Kernel, Solaris, and FreeBSD. As always, the MySQL documentation provides full details on supported operating systems. It also provides the specific installation steps for each operating system. Plus, it tells you how to perform the initial configuration and further administrative steps. If you're installing on Windows, you have a couple of options. First, the MySQL Installer utility is the easiest way to install MySQL. It installs MySQL and performs the initial configuration based on options that you choose at installation time. It includes not only the MySQL server, but also the most important connectors, the MySQL Shell client, the MySQL Workbench client with its user interface, and common utilities for troubleshooting and administration. It also installs several sample databases and models, and documentation. It's the easiest way to install MySQL because it uses an installation wizard. It lets you select your installation target location, what components to install, and other options. 12:47 Lois: But what if I want to have more control? Perside: For more control over your installation, you can install MySQL from the binary zip archive. This does not include samples or supporting tools and connectors, but only contains the application’s binaries, which you can install anywhere you want. This means that the initial configuration is not performed by selecting options through a wizard. Instead, you must configure the Windows service and MySQL configuration file yourself. Linux installation is more varied. This is because of the different distributions and also because of the flexibility they offer. On many distributions of Linux, you can use the package manager native to that distribution. For example, you can use the yum package manager on Oracle Linux to install RPM files. You can also use a binary archive to install only the files. Deciding which method to use comes down to several factors: how much you know about MySQL's files and configuration, and about the operating system on which you're going to do the installation; any applicable standards or operating procedures within your own company's IT infrastructure; how much control you need over this installation; and how flexible a method you need.
For example, the RPM package for Oracle Linux installs the files in specific locations, with a specific service and a dedicated MySQL user account. 14:54 Transform the way you work with Oracle Database 23ai! This cutting-edge technology brings the power of AI directly to your data, making it easier to build powerful applications and manage critical workloads. Want to learn more about Database 23ai? Visit mylearn.oracle.com to pick from our range of courses and enroll today! 15:18 Nikita: Welcome back! Is there a way for me to extend the functionality of MySQL beyond its default capabilities? Perside: Much of MySQL's behavior is standard and always exists when you install the server. However, you can configure some additional behaviors by extending MySQL with plugins or components. Plugins operate closely with the server and, by calling APIs exposed by the server, they add features by providing extra functions or variables. Not only do they add variables, they can also interact with the server's global variables and functions. That makes them work as if they are dynamically loadable parts of the server itself. Components also extend functionality, but they are separate from the server and extend its functionality through a service-based architecture. You can also extend MySQL in other ways: by creating stored procedures, triggers, and functions with standard SQL and MySQL extensions to that language, or by creating external dynamically loaded user-defined functions. 16:49 Lois: Perside, can we talk about the initial user accounts? Perside: A MySQL account identifier is more than just a username and password. It consists of three elements: two that identify the account, and one that is used for authentication. The three elements are the username, which is used to log in from the client; the hostname element, which identifies a computer or set of computers; and the password, which must be provided to gain access to MySQL. The hostname is a part of the account identifier that controls where the user can log in. It is typically a DNS computer name or an IP address. You can use a wildcard, which is the percent sign, to allow the named user to log in from any connected host, or you can use the wildcard as part of an IP address to allow login from a limited range of IP addresses. 17:58 Nikita: So, what happens when I install MySQL on my computer? Perside: When you first install MySQL on your computer, it installs several system accounts. The only user account that you can log in to is the administrative account. That's called the root account. Depending on the installation method that you use, you'll either see the initial root password on the console as you install the server, or you can read it from the log file. For security reasons, the password of a new account, such as the root account, must be changed. MySQL prevents you from executing any other operation with that account until you have changed the password. 18:46 Lois: What are the system requirements for installing and running MySQL? Perside: The MySQL service must run as a system-level user. Each operating system has its own method for creating such a user. All operating systems follow the same general principles. However, when using the MySQL installer on Windows or the RPM package installation on Oracle Linux, each installation process creates and configures the system-level user. 19:22 Lois: Perside, since MySQL is always evolving, how do I upgrade it when newer versions become available?
Perside: When you upgrade MySQL, you have to bring the server down so that the upgrade process can replace all of the relevant binary executable files and, if necessary, update the data and configuration to suit the new software. The safest thing to do is to back up your whole MySQL environment. This includes not only your data files but also binaries and configuration files, as well as logical elements, including triggers, stored procedures, user configuration, and anything else that's required to rebuild your system. The upgrade process gives you two main options. An in-place upgrade uses your existing data directory. After you shut down your MySQL server process, you either replace the package or binaries with new versions, or you install the new binary executables in a new location and point your symbolic links to this new location. The server process detects that the data directory belongs to an earlier version and performs any required upgrade checks. 20:46 Lois: Thank you, Perside, for taking us through the practical aspects of using MySQL. If you want to learn about the MySQL architecture, visit mylearn.oracle.com and search for the MySQL 8.4: Essentials course. Nikita: Before you go, we wanted to take a minute to thank you for taking the Oracle University Podcast survey that we put out at the end of last year. Your insights were invaluable and will help shape our future episodes. Lois: And if you missed taking the survey but have feedback to share, you can write to us at [email protected]. That’s [email protected]. We’d love to hear from you. Join us next week for a discussion on MySQL database design. Until then, this is Lois Houston… Nikita: And Nikita Abraham signing off! 21:45 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Introduction to MySQL
01/07/2025
Introduction to MySQL
Join hosts Lois Houston and Nikita Abraham as they kick off a new season exploring the world of MySQL 8.4. Together with Perside Foster, a MySQL Principal Solution Engineer, they break down the fundamentals of MySQL, its wide range of applications, and why it’s so popular among developers and database administrators. This episode also covers key topics like licensing options, support services, and the various tools, features, and plugins available in MySQL Enterprise Edition. ------------------------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Happy New Year, everyone! Thank you for joining us as we begin a new season of the podcast, this time focused on the basics of MySQL 8.4. If you’re a database administrator or want to become one, this is definitely for you. It’s also great for developers working with data-driven apps or IT professionals handling MySQL installs, configurations, and support. 01:03 Lois: That’s right, Niki. Throughout the season, we'll be delving into MySQL Enterprise Edition and covering a range of topics, including installation, security, backups, and even MySQL HeatWave on Oracle Cloud. Nikita: Today, we're going to discuss the Oracle MySQL ecosystem and its various components. We’ll start by covering the fundamentals of MySQL and the different licenses that are available. Then, we’ll explore the key tools and features to boost data security and performance. Plus, we’ll talk a little bit about MySQL HeatWave, which is the cloud version of MySQL. 01:39 Lois: To take us through all of this, we’ve got Perside Foster with us today. Perside is a MySQL Principal Solution Engineer at Oracle. Hi Perside! For anyone new to MySQL, can you explain what it is and why it's so widely used? Perside: MySQL is a relational database management system that organizes data into structured tables, rows, and columns for efficient programming and data management. MySQL is transactional by nature. When storing and managing data, actions such as selecting, inserting, updating, or deleting are required. MySQL groups these actions into a transaction. The transaction is saved only if every part completes successfully. 02:29 Lois: Now, how does MySQL work under the hood? Perside: MySQL is a high-performance database that uses its default storage engine, known as InnoDB. InnoDB helps MySQL handle complex operations and large data volumes smoothly. 02:49 Nikita: For the unversed, what are some day-to-day applications of MySQL? How is it used in the real world? Perside: MySQL works well with online transaction processing workloads. It handles transactions quickly and manages large volumes of transactions at once. OLTP, with its low latency and high throughput, makes MySQL ideal for high-speed environments like banking or online shopping. MySQL not only stores data but also replicates it from a main server to several replicas. 03:31 Nikita: That's impressive! And what are the benefits of using MySQL? Perside: It improves data availability and load balancing, which is crucial for businesses that need up-to-date information.
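Going back to the transaction idea Perside mentioned a moment ago, here is a minimal sketch of how a transaction groups statements so that they succeed or fail together; the table and values are illustrative:

START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;  -- both updates are saved together; ROLLBACK would discard them both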
MySQL is the most popular database on the web. 04:00 Lois: And why is that? What makes it so popular? What sets it apart from the other database management systems? Perside: First, it is a relational database management system that supports SQL. It also works as a document store, enabling the creation of both SQL and NoSQL applications without the need for separate NoSQL databases. Additionally, MySQL offers advanced security features to protect data integrity and privacy. It also uses tablespaces for better disk space management. This gives database administrators total control over their data storage. MySQL is simple, solid in its reliability, and secure by design. It is easy to use and ideal for both beginners and professionals. MySQL is proven at scale by efficiently handling large data volumes and high transaction rates. MySQL is also open source. This means anyone can download and use it for free. Users can modify the MySQL software to meet their needs. However, it is governed by the GNU General Public License, or GPL. GPL outlines specific rules for its use. MySQL offers two major editions. For developers and small teams, the Community Edition is available for free and includes all of the core features needed. For large enterprises, the Commercial Edition provides advanced features, management tools, and dedicated technical support. 05:58 Nikita: Ok. Let’s shift focus to licensing. Who is it useful for? Perside: MySQL licensing is essential for independent software vendors. They're called ISVs. And original equipment manufacturers, they're called OEMs. This is because these companies often incorporate MySQL code into their software products or hardware systems to boost the functionality and performance of their products. MySQL licensing is equally important for value-added resellers. We call those VARs. And also, it's important for other distributors. These groups bundle MySQL with other commercially licensed software to sell as part of their product offering. The GPL v2 license might suit open source projects that distribute their products under that license. 07:02 Lois: But what if some independent software vendors, original equipment manufacturers, or value-added resellers don’t want to create open source products? What if they don’t want their source code to be publicly available and want to keep it private? What happens then? Perside: This is why Oracle provides a commercial licensing option. This license allows businesses to use MySQL in their products without having to disclose their source code as required by GPL v2. 07:33 Nikita: I want to bring up the robust support services that are available for MySQL Enterprise. What can we expect in terms of support, Perside? Perside: MySQL Enterprise Support provides direct access to the MySQL Support team. This team consists of experienced MySQL developers, who are experts in databases. They understand the issues and challenges their customers face because they, too, have personally tackled these issues and challenges. This support service operates globally and is available in 29 languages. So no matter where customers are located, Oracle Support provides assistance, most likely in their preferred language. MySQL Enterprise Support offers regular updates and hot fixes to ensure that MySQL customer systems stay current with the latest improvements and security patches. MySQL Support is available 24 hours a day, 7 days a week.
This ensures that whenever there is an issue, Oracle Support can provide the needed help without any delay. There are no restrictions on how many times customers can receive help from the team because MySQL Enterprise Support allows for unlimited incidents. MySQL Enterprise Support goes beyond simply fixing issues. It also offers guidance and advice. Whether customers require assistance with performance tuning or troubleshooting, the team is there to support them every step of the way. 09:27 Lois: Perside, can you walk us through the various tools and advanced features that are available within MySQL? Maybe we could start with MySQL Shell. Perside: MySQL Shell is an integrated client tool used for all MySQL database operations and administrative functions. It's a top choice among MySQL users for its versatility and powerful features. MySQL Shell offers multi-language support for JavaScript, Python, and SQL. These naturally scriptable languages make coding flexible and efficient. They also allow developers to use their preferred programming language for everything, from automating database tasks to writing complex queries. MySQL Shell supports both document and relational models. Whether your project needs the flexibility of NoSQL’s document-oriented structures or the structured relationships of traditional SQL tables, MySQL Shell manages these different data types without any problems. Another key feature of MySQL Shell is its full access to both development and administrative APIs. This ability makes it easy to automate complex database operations and do custom development directly from MySQL Shell. MySQL Shell excels at DBA operations. It has extensive tools for database configuration, maintenance, and monitoring. These tools not only improve the efficiency of managing databases, but they also reduce the possibility of human error, making MySQL databases more reliable and easier to manage. 11:37 Nikita: What about the MySQL Server tool? I know that it is the core of the MySQL ecosystem and is available in both the community and commercial editions. But how does it enhance the MySQL experience? Perside: It connects with various devices, applications, and third-party tools to enhance its functionality. The server manages both SQL for structured data and NoSQL for schemaless applications. It has many key components. The parser, which interprets SQL commands. The optimizer, which ensures efficient query execution. And then the query cache and buffer pools, which reduce disk usage and speed up access. InnoDB, the default storage engine, maintains data integrity and supports robust transaction and recovery mechanisms. MySQL is designed for scalability and reliability. With features like replication and clustering, it distributes data, manages more users, and ensures consistent uptime. 13:00 Nikita: What role does MySQL Enterprise Edition play in MySQL server’s capabilities? Perside: MySQL Enterprise Edition improves MySQL server by adding a suite of commercial extensions. These exclusive tools and services are designed for enterprise-level deployments and challenging environments. These tools and services include secure online backup. It keeps your data safe with efficient backup solutions. Real-time monitoring provides insight into database performance and health. The seamless integration connects easily with existing infrastructure, improving data flow and operations. Then you have the 24/7 expert support. It offers round-the-clock assistance to optimize and troubleshoot your databases.
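Perside notes above that MySQL Shell handles both the document and the relational model in a single session. The same X DevAPI that MySQL Shell exposes in its Python mode is available as the mysqlx module (shipped with mysql-connector-python, or as the separate mysqlx-connector-python package), so here is a minimal, hedged sketch of that duality. The connection details, schema, collection, and table names are assumptions for illustration.

```python
# Minimal sketch of document + relational access through one X DevAPI session.
# Requires the X Protocol port (33060 by default); names below are placeholders.
import mysqlx

session = mysqlx.get_session(
    {"host": "localhost", "port": 33060, "user": "appuser", "password": "secret"}
)

session.sql("CREATE DATABASE IF NOT EXISTS demo").execute()
schema = session.get_schema("demo")

# Document model: schemaless JSON documents in a collection
# (create_collection raises an error if the collection already exists).
products = schema.create_collection("products")
products.add({"name": "keyboard", "price": 35, "tags": ["usb", "mechanical"]}).execute()

# Relational model: ordinary SQL tables through the same session.
session.sql(
    "CREATE TABLE IF NOT EXISTS demo.orders (id INT PRIMARY KEY, total DECIMAL(8,2))"
).execute()
session.sql("REPLACE INTO demo.orders VALUES (1, 35.00)").execute()

for doc in products.find("price < 50").execute().fetch_all():
    print(doc)

session.close()
```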
14:04 Lois: That's an extensive list of features. Now, can you explain what MySQL Enterprise plugins are? I know they’re specialized extensions that boost the capabilities of MySQL server, tools, and services, but I’d love to know a little more about how they work. Perside: Each plugin serves a specific purpose. The firewall plugin protects against SQL injection by allowing only pre-approved queries. The audit plugin logs database activities, tracking who accesses databases and what they do. The encryption plugin secures data at rest, protecting it from unauthorized access. Then we have the authentication plugin, which integrates with systems like LDAP and Active Directory for access control. Finally, the thread pool plugin optimizes performance in high-load situations by effectively controlling how many execution threads are used and how long they run. These plugins and tools are included in the MySQL Enterprise Edition suite. 15:32 Join the Oracle University Learning Community and tap into a vibrant network of over 1 million members, including Oracle experts and fellow learners. This dynamic community is the perfect place to grow your skills, connect with like-minded learners, and celebrate your successes. As a MyLearn subscriber, you have access to engage with your fellow learners and participate in activities in the community. Visit community.oracle.com/ou to check things out today! 16:03 Nikita: Welcome back! We’ve been going through the various MySQL tools, and another important one is MySQL Enterprise Backup, right? Perside: MySQL Enterprise Backup is a powerful tool that offers online, non-blocking backup and recovery. It makes sure databases remain available and perform optimally during the backup process. It also includes advanced features, such as incremental and differential backups. Additionally, MySQL Enterprise Backup supports compression to reduce backup size and encryption to keep data secure. One of the standard capabilities of MySQL Enterprise Backup is its seamless integration with media management software, or MMS. This integration simplifies the process of managing and storing backups, ensuring that data is easily accessible and secure. Then we have MySQL Workbench Enterprise. It enhances database development and design with robust tools for creating and managing your diagrams and ensuring proper documentation. It simplifies data migration with powerful tools that make it easy to move databases between platforms. For database administration, MySQL Workbench Enterprise offers efficient tools for monitoring, performance tuning, user management, and backup and recovery. MySQL Enterprise Monitor is another tool. It provides real-time MySQL performance and availability monitoring. It helps track a database's health and performance. It visually finds and fixes problem queries. This is to make it easy to identify and address performance issues. It offers MySQL best-practice advisors to guide users in maintaining optimal performance and security. Lastly, MySQL Enterprise Monitor is proactive and provides forecasting. 18:40 Lois: Oh, that’s really going to help users stay ahead of potential issues. That’s fantastic! What about the Oracle Enterprise Manager Plugin for MySQL? Perside: This one offers availability and performance monitoring to make sure MySQL databases are running smoothly and efficiently. It provides configuration monitoring. This is to help keep track of the database settings and configuration. Finally, it collects all available metrics to provide comprehensive insight into database operations.
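MySQL Enterprise Monitor and the Oracle Enterprise Manager plugin collect and visualize these health metrics for you. For a sense of the raw signals they build on, here is a small sketch that samples a few standard server status counters directly with mysql-connector-python; the counter names are real MySQL status variables, while the connection details are placeholders.

```python
# Minimal sketch: sample a few MySQL health counters by hand.
# Monitoring products do this (and much more) automatically;
# host, user, and password below are placeholders.
import time
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="monitor", password="secret")
cur = conn.cursor()

WATCHED = ("Threads_connected", "Threads_running", "Slow_queries", "Uptime")

for _ in range(3):  # three quick samples, five seconds apart
    cur.execute(
        "SELECT VARIABLE_NAME, VARIABLE_VALUE "
        "FROM performance_schema.global_status "
        "WHERE VARIABLE_NAME IN (%s, %s, %s, %s)",
        WATCHED,
    )
    print({name: value for name, value in cur.fetchall()})
    time.sleep(5)

conn.close()
```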
19:19 Lois: Are there any tools designed to handle higher loads and improve security? Perside: MySQL Enterprise Thread Pool improves scalability as concurrent connections grow. It makes sure the database can handle increased loads efficiently. MySQL Enterprise Authentication is another tool. This one integrates MySQL with existing security infrastructures. It provides robust security solutions. It supports Linux PAM, LDAP, Windows, Kerberos, and even FIDO for passwordless authentication. 20:02 Nikita: Do any tools offer benefits like customized logging, data protection, and database security? Perside: MySQL Enterprise Audit provides out-of-the-box logging of connections, logins, and queries in XML or JSON format. It also offers simple to fine-grained policies for filtering and log rotation. This is to ensure comprehensive and customizable logging. MySQL Enterprise Firewall detects and blocks out-of-policy database transactions. This is to protect your data from unauthorized access and activities. We also have MySQL Enterprise Asymmetric Encryption. It uses MySQL encryption libraries for key management, signing, and verifying data. It ensures data stays secure during handling. MySQL Transparent Data Encryption, another tool, provides data-at-rest encryption within the database. The master key is stored outside of the database in a KMIP 1.1-compliant key vault. That is to improve database security. Finally, MySQL Enterprise Masking offers masking capabilities, including string masking and dictionary replacement. This ensures sensitive data is protected by obscuring it. It also provides random data generators, such as range-based, payment card, email, and social security number generators. These tools help create realistic but anonymized data for testing and development. 22:12 Lois: Can you tell us about HeatWave, the MySQL cloud service? We’re going to have a whole episode dedicated to it soon, but just a quick introduction for now would be great. Perside: MySQL HeatWave offers a fully managed MySQL service. It provides deployment, backup and restore, high availability, resizing, and read replicas, all the features you need for efficient database management. This service is a powerful union of Oracle Cloud Infrastructure and MySQL Enterprise Edition 8. It combines robust performance with top-tier infrastructure. With MySQL HeatWave, your systems are always up to date with the latest security fixes, ensuring your data is always protected. Plus, it supports both OLTP and analytics/ML use cases, making it a versatile solution for diverse database needs. 23:22 Nikita: So to wrap up, what are your key takeaways when it comes to MySQL? Perside: When you use MySQL, here is the bottom line. MySQL Enterprise Edition delivers unmatched performance at scale. It provides advanced monitoring and tuning capabilities to ensure efficient database operation, even under heavy loads. Plus, it provides insurance and immediate help when needed, allowing you to depend on expert support whenever an issue arises. Regarding total cost of ownership, TCO, this edition significantly reduces the risk of downtime and enhances productivity. This leads to significant cost savings and improved operational efficiency. On the matter of risk, MySQL Enterprise Edition addresses security and regulatory compliance. This is to make sure your data meets all necessary standards.
Additionally, it provides direct contact with the MySQL team for expert guidance. In terms of DevOps agility, it supports automated scaling and management, as well as flexible real-time backups, making it ideal for agile development environments. Finally, concerning customer satisfaction, it enhances application performance and uptime, ensuring your customers have a reliable and smooth experience. 25:18 Lois: Thank you so much, Perside. This is really insightful information. To learn more about all the support services that are available, visit support.oracle.com. This is the central hub for all MySQL Enterprise Support resources. Nikita: Yeah, and if you want to know about the key commercial products offered by MySQL, visit mylearn.oracle.com and search for the MySQL 8.4: Essentials course. Join us next week for a discussion on installing MySQL. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 25:53 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Best of 2024: Autonomous Database on Serverless Infrastructure
12/17/2024
Best of 2024: Autonomous Database on Serverless Infrastructure
Want to quickly provision your autonomous database? Then look no further than Oracle Autonomous Database Serverless, one of the two deployment choices offered by Oracle Autonomous Database. Autonomous Database Serverless delegates all operational decisions to Oracle, providing you with a completely autonomous experience. Join hosts Lois Houston and Nikita Abraham, along with Oracle Database experts, as they discuss how serverless infrastructure eliminates the need to configure any hardware or install any software because Autonomous Database handles provisioning the database, backing it up, patching and upgrading it, and growing or shrinking it for you. Survey: Oracle MyLearn: Oracle University Learning Community: LinkedIn: X (formerly Twitter): Special thanks to Arijit Ghosh, David Wright, Rajeev Grover, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started. 00:26 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! We hope you’ve been enjoying these last few weeks as we’ve been revisiting our most popular episodes of the year. Lois: Today’s episode is the last one in this series and is a throwback to a conversation on Autonomous Databases on Serverless Infrastructure with three experts in the field: Hannah Nguyen, Sean Stacey, and Kay Malcolm. Hannah is a Staff Cloud Engineer, Sean is the Director of Platform Technology Solutions, and Kay is Vice President of Database Product Management. For this episode, we’ll be sharing portions of our conversations with them. 01:14 Nikita: We began by asking Hannah how Oracle Cloud handles the process of provisioning an Autonomous Database. So, let’s jump right in! Hannah: The Oracle Cloud automates the process of provisioning an Autonomous Database, and it automatically provisions for you a highly scalable, highly secure, and a highly available database very simply out of the box. 01:35 Lois: Hannah, what are the components and architecture involved when provisioning an Autonomous Database in Oracle Cloud? Hannah: Provisioning the database involves very few steps. But it's important to understand the components that are part of the provisioned environment. When provisioning a database, the number of CPUs in increments of 1 for serverless, storage in increments of 1 terabyte, and backup are automatically provisioned and enabled in the database. In the background, an Oracle 19c pluggable database is being added to the container database that manages all the user's Autonomous Databases. Because this Autonomous Database runs on Exadata systems, Real Application Clusters is also provisioned in the background to support the on-demand CPU scalability of the service. This is transparent to the user and administrator of the service. But be aware it is there. 02:28 Nikita: Ok…So, what sort of flexibility does the Autonomous Database provide when it comes to managing resource usage and costs, you know… especially in terms of starting, stopping, and scaling instances? Hannah: The Autonomous Database allows you to start your instance very rapidly on demand. 
It also allows you to stop your instance on demand to conserve resources and to pause billing. Do be aware that when you do pause billing, you will not be charged for any CPU cycles because your instance will be stopped. However, you'll still incur monthly charges for your storage. In addition to allowing you to start and stop your instance on demand, it's also possible to scale your database instance on demand. All of this can be done very easily using the Database Cloud Console. 03:15 Lois: What about scaling in the Autonomous Database? Hannah: So you can scale up your OCPUs without touching your storage and scale it back down, and you can do the same with your storage. In addition to that, you can also set up autoscaling. So the database, whenever it detects the need, will automatically scale up to three times the base level number of OCPUs that you have allocated or provisioned for the Autonomous Database. 03:38 Nikita: Is autoscaling available for all tiers? Hannah: Autoscaling is not available for an always free database, but it is enabled by default for other tiered environments. Changing the setting does not require downtime. So this can also be set dynamically. One of the advantages of autoscaling is cost because you're billed based on the average number of OCPUs consumed during an hour. 04:01 Lois: Thanks, Hannah! Now, let’s bring Sean into the conversation. Hey Sean, I want to talk about moving an autonomous database resource. When or why would I need to move an autonomous database resource from one compartment to another? Sean: There may be a business requirement where you need to move an autonomous database resource, a serverless resource, from one compartment to another. Perhaps there's a different subnet that you would like to move that autonomous database to, or perhaps there are some business applications available in that other compartment that you would like your autonomous database to take advantage of. 04:36 Nikita: And how simple is this process of moving an autonomous database from one compartment to another? What happens to the backups during this transition? Sean: The way you can do this is simply to take an autonomous database and move it from compartment A to compartment B. And when you do so, the backups, or the automatic backups that are associated with that autonomous database, will be moved with that autonomous database as well. 05:00 Lois: Is there anything that I need to keep in mind when I’m moving an autonomous database between compartments? Sean: A couple of things to be aware of when doing this: first of all, you must have the appropriate privileges in both the source compartment and the target compartment in order to move that autonomous database. In addition to that, once the autonomous database is moved to this new compartment, any policies defined in that compartment to govern authorization and privileges will be applied immediately to the autonomous database that has been moved into that new compartment.
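Hannah mentions that starting, stopping, and scaling can all be driven from the Database Cloud Console, and the same operations are exposed through the API. As an aside before the cloning discussion, here is a minimal sketch of scaling compute and enabling autoscaling with the OCI Python SDK; it assumes the oci package is installed, a configured ~/.oci/config profile, and a placeholder database OCID.

```python
# Minimal sketch: scale an Autonomous Database and enable autoscaling
# through the OCI Python SDK. The OCID below is a placeholder.
import oci

config = oci.config.from_file()                 # reads ~/.oci/config
db_client = oci.database.DatabaseClient(config)

adb_ocid = "ocid1.autonomousdatabase.oc1..example"

details = oci.database.models.UpdateAutonomousDatabaseDetails(
    cpu_core_count=4,              # scale compute without touching storage
    is_auto_scaling_enabled=True,  # allow bursts up to 3x the base CPU count
)
response = db_client.update_autonomous_database(adb_ocid, details)
print(response.data.lifecycle_state)
```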
05:38 Nikita: Sean, I want to ask you about cloning in Autonomous Database. What are the different types of clones that can be created? Sean: It's possible to create a new Autonomous Database as a clone of an existing Autonomous Database. This can be done as a full copy of that existing Autonomous Database, or it can be done as a metadata copy, where the objects and tables are cloned, but they are empty. So there are no rows in the tables. And this clone can be taken from a live running Autonomous Database or even from a backup. So you can take a backup and clone that to a completely new database. 06:13 Lois: But why would you clone in the first place? What are the benefits of this? Sean: When creating this clone, it can be created in a completely new compartment from where the source Autonomous Database was originally located. So it's a nice way of moving one database to another compartment to allow developers or another community of users to have access to that environment. 06:36 Nikita: I know that along with having a full clone, you can also have a refreshable clone. Can you tell us more about that? Who is responsible for this? Sean: It's possible to create a refreshable clone from an Autonomous Database. And this is one that would be kept in sync with that source database for up to so many days. The task of keeping that refreshable clone in sync with that source database rests upon the shoulders of the administrator. The administrator is the person who is responsible for performing that sync operation. Now, actually performing the operation is very simple, it's point and click. And it's an automated process from the database console. Also be aware that refreshable clones can trail the source Autonomous Database by up to seven days. After that period of time, if the refreshable clone has not been refreshed or kept in sync with that source database, it will become a standalone, read-only copy of that original source database. 07:38 Nikita: Ok Sean, so if you had to give us the key takeaways on cloning an Autonomous Database, what would they be? Sean: It's very easy, and there's a lot of flexibility when it comes to cloning an Autonomous Database. We have different models: you can clone from a live running database instance with zero impact on your workload, or from a backup. It can be a full copy, or it can be a metadata copy, as well as a refreshable, read-only clone of a source database.
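Sean summarizes the cloning options above. For reference, here is a hedged sketch of creating a full clone through the OCI Python SDK; the clone-details class and field names reflect the SDK as I understand it and should be checked against the current documentation, and every OCID, name, and password is a placeholder.

```python
# Minimal sketch: create a FULL clone of an existing Autonomous Database
# with the OCI Python SDK. Swap clone_type to "METADATA" for an empty,
# structure-only copy. All OCIDs and credentials below are placeholders.
import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

clone_details = oci.database.models.CreateAutonomousDatabaseCloneDetails(
    compartment_id="ocid1.compartment.oc1..example",   # may differ from the source compartment
    source_id="ocid1.autonomousdatabase.oc1..source",  # the database being cloned
    clone_type="FULL",                                  # or "METADATA"
    db_name="devclone",
    admin_password="ChangeMe_1234#",
    cpu_core_count=2,
    data_storage_size_in_tbs=1,
)
response = db_client.create_autonomous_database(clone_details)
print(response.data.id, response.data.lifecycle_state)
```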
08:12 Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure? You’ll find training on everything from cloud computing, database, and security to artificial intelligence and machine learning, all of which is available free to subscribers. So, get going! Pick a course of your choice, get certified, join the Oracle University Learning Community, and network with your peers. If you’re already an Oracle MyLearn user, go to MyLearn to begin your journey. If you have not yet accessed Oracle MyLearn, visit mylearn.oracle.com and create an account to get started. 08:50 Nikita: Welcome back! Thank you, Sean, and hi Kay! I want to ask you about events and notifications in Autonomous Database. Where do they really come in handy? Kay: Events can be used for a variety of notifications, including admin password expiration, ADB services going down, and wallet expiration warnings. There's this service, and it's called the notifications service. It's part of OCI. And this service provides you with the ability to broadcast messages to distributed components using a publish and subscribe model. These notifications can be used to notify you when event rules or alarms are triggered or simply to directly publish a message. In addition to this, there's also something that's called a topic. This is a communication channel for sending messages to subscribers in the topic. You can manage these topics and their subscriptions really easily. It's not hard to do at all. 09:52 Lois: Kay, I want to ask you about backing up Autonomous Databases. How does Autonomous Database handle backups? Kay: Autonomous Database automatically backs up your database for you. The retention period for backups is 60 days. You can restore and recover your database to any point in time during this retention period. You can initiate recovery for your Autonomous Database by using the cloud console or an API call. Autonomous Database automatically restores and recovers your database to the point in time that you specify. In addition to a point-in-time recovery, we can also perform a restore from a specific backup set. 10:37 Lois: Kay, you spoke about automatic backups, but what about manual backups? Kay: You can do manual backups using the cloud console, for example, if you want to take a backup, say, before a major change to make restoring and recovery faster. These manual backups are put in your cloud object storage bucket. 10:58 Nikita: Are there any special instructions that we need to follow when configuring a manual backup? Kay: The manual backup configuration tasks are a one-time operation. Once this is configured, you can go ahead and trigger your manual backup any time you wish after that. When creating the object storage bucket for the manual backups, it is really important-- so I don't want you to forget-- that the name format for the bucket and the object storage follows this naming convention. It should be backup underscore database name. And it's not the display name here when I say database name. In addition to that, the object name has to be all lowercase. So three rules: backup underscore database name, the specific database name is not the display name, and it has to be in lowercase. Once you've created your object storage bucket to meet these rules, you then go ahead and set a database property, Default_backup_bucket. This points to the object storage URL and it's using the Swift protocol. Once you've got your object storage bucket mapped to the object storage location, you then need to go ahead and create a database credential inside your database. You may have already had this in place for other purposes, like maybe you were loading data or you were using Data Pump, et cetera. If you don't, you would need to create this specifically for your manual backups. Once you've done so, you can then go ahead and set your property to that default credential that you created. So once you follow these steps as I pointed out, you only have to do it one time. Once it's configured, you can go ahead and use it from now on for your manual backups.
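Kay's three naming rules and the property and credential steps boil down to a handful of statements. Here is a hedged sketch of that one-time setup using the python-oracledb driver; the region, namespace, bucket, credential name, and auth token are placeholders, and the exact property names should be verified against the current Autonomous Database documentation.

```python
# Minimal sketch of the one-time manual backup configuration Kay walks through.
# All names, URLs, and secrets below are placeholders; verify property and
# procedure names against the current Autonomous Database documentation.
import oracledb

conn = oracledb.connect(user="ADMIN", password="AdminPassword_123",
                        dsn="mydb_high")   # TNS alias from the downloaded wallet
cur = conn.cursor()

# Rule recap: the bucket is named backup_<database name> (not the display name),
# all lowercase, e.g. backup_mydb. The property points at it via the Swift URL.
cur.execute("""
    ALTER DATABASE PROPERTY SET default_backup_bucket =
      'https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/mynamespace/backup_mydb'
""")

# Credential the database uses to write to Object Storage (an auth token, not the console password).
cur.execute("""
    BEGIN
      DBMS_CLOUD.CREATE_CREDENTIAL(
        credential_name => 'BACKUP_CRED',
        username        => 'oci_user@example.com',
        password        => 'auth-token-placeholder');
    END;
""")

# Point the default credential property at the credential just created.
cur.execute("ALTER DATABASE PROPERTY SET default_credential = 'ADMIN.BACKUP_CRED'")
conn.close()
```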
13:00 Lois: Kay, the last topic I want to talk about before we let you go is Autonomous Data Guard. Can you tell us about it? Kay: Autonomous Data Guard monitors the primary database, in other words, the database that you're using right now. 13:14 Lois: So, if ADB goes down… Kay: Then the standby instance will automatically become the primary instance. There's no manual intervention required. So failover from the primary database to that standby database I mentioned is completely seamless, and it doesn't require any additional wallets to be downloaded or any new URLs to access APEX or Oracle Machine Learning. Even Oracle REST Data Services. All the URLs and all the wallets, everything that you need to authenticate and connect to your database, they all remain the same for you if you have to fail over to your standby database. 13:58 Lois: And what happens after a failover occurs? Kay: After performing a failover, a new standby for your primary will automatically be provisioned. So in other words, in performing a failover, your standby becomes your new primary, and a new standby is then created for that primary. I know, it's kind of interesting. So currently, the standby database is created in the same region as the primary database. For better resilience, if your database is provisioned on AD1, or Availability Domain 1, the secondary, or standby, would be provisioned on a different availability domain. 14:49 Nikita: But there’s also the possibility of manual failover, right? What are the differences between automatic and manual failover scenarios? When would you recommend using each? Kay: So in the case of the automatic failover scenario following a disastrous situation, if the primary ADB becomes completely unavailable, the switchover button will turn into a failover button. Because remember, this is a disaster. Automatic failover is automatically triggered. There's no user action required. So if you're asleep and something happens, you're protected. There's no user action required, but automatic failover is allowed to succeed only when no data loss will occur. For manual failover scenarios, in the rare case when an automatic failover is unsuccessful, the switchover button will become a failover button and the user can trigger a manual failover should they wish to do so. The system automatically recovers as much data as possible, minimizing any potential data loss. But you could see anywhere from a few seconds to a few minutes of data loss. Now, you should only perform a manual failover in a true disaster scenario, accepting that a few minutes of potential data loss could occur, to ensure that your database is back online as soon as possible. 16:23 Lois: We hope you’ve enjoyed revisiting some of our most popular episodes over these past few weeks. We always appreciate your feedback and suggestions, so remember to take that quick survey we’ve put out. You’ll find it in the show notes for today’s episode. Thanks a lot for your support. We’re taking a break for the next two weeks and will be back with a brand-new season of the Oracle University Podcast in January. Happy holidays, everyone! Nikita: Happy holidays! Until next time, this is Nikita Abraham... Lois: And Lois Houston, signing off! 16:56 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.