Before this year’s Oracle OpenWorld, I shared on this blog some of my experiences and tips about the expansive conference, which stretches from San Francisco’s Moscone Center to the palm trees of Union Square (and requires running shoes to cover all of the venues). But despite the expanse and the expense of it all, bringing customers from many countries together in one place lets Oracle broadcast a unified message to its entire user base – and do so in just four days.
After all, Oracle must cover a broad portfolio of products and services: the flagship Oracle Database 12c, dozens of new vertical apps, and Oracle’s enterprise applications (e.g., PeopleSoft, JD Edwards and Oracle Financials) – along with the cloud-delivered data services based on those solutions.
Transformation in the Data Center
This year, the theme is clear: Oracle is focusing on transformation in the data center – on the process of evaluating which workloads will stay on-prem and which will go off-prem. Oracle is positioning its products and services to match these deployments, whether they are delivered via cloud services or kept within the customer’s data center and enterprise network.
At the Oracle OpenWorld 2014 conference, Oracle executives presented their case for workload modernization – and for on-prem/off-prem deployments for enterprise applications and databases that leverage cloud computing and offsite hosting. The megatrends driving change in data centers worldwide are clear: Big Data/Analytics, Cloud Computing, Social Media and Mobility are, in many cases, pushing the capacity of today’s data centers to their limits.
The wave of change is in the air, via data center transformation, although customers will take many paths to update their enterprise workloads and data centers.
And the change itself may provide new opportunities for Oracle competitors as the infrastructure is updated. That’s why Oracle is moving quickly to address the demands these megatrends present.
Now, for the Keynote Videos
On Monday morning, Sept. 29, 2014, Oracle CEO Mark Hurd interviewed big-company CIOs on-stage at Oracle OpenWorld. Among them: CIOs from Walgreens, General Electric, Intel and Procter & Gamble. All agreed that the data center is being transformed by new technologies for Cloud, Big Data, Analytics, Social Media and Mobility.
But all of them said that the sheer number of applications and databases that must adapt to the new workload mix is daunting – and that it will take many years to update them all to meet new business goals.
On Tuesday afternoon, Sept. 30, Oracle executive chairman and CTO Larry Ellison announced a number of automation tools, cloud services and app-dev clouds—all of which could be leveraged to speed up the process of workload modernization.
Automation is a key feature of these tools for modernization – including automation of a re-hosting process to move workloads from the onsite data center (on-prem) to a hosting site or cloud provider of IaaS, PaaS and SaaS (off-prem).
The central theme of carrying forward the business logic, while changing out the technology wrapped around those workloads, was consistently noted in keynotes by Ellison, Oracle Systems Executive Vice President John Fowler and Oracle Executive Vice President of Product Development Thomas Kurian – and by others in breakout sessions. That synchronization of messages reinforced the theme.
The Waves of Change
Oracle plans to address this wave of change in many ways: by providing software tools for workload modernization on-prem, inside enterprise data centers – and by providing the Oracle Cloud and cloud-related services for hosting applications and databases off-prem. This is the result of several years of planning and development – much of which has been hinted at in previous OOW conferences.
As a longtime Oracle watcher, I think it’s interesting to note that Ellison, as Oracle CEO, started to preview this on-prem/off-prem thesis at OOW 2013, and even before that in other speeches. As with other Oracle initiatives (e.g., announcing the Oracle Cloud, developing Oracle Linux), the idea was hinted at during its development phases—then brought to market in a big way, with a big announcement. This pattern clearly shows the importance of developing, then leveraging, net-new building blocks in Oracle’s long-term roadmap.
Flash in the Mix
What does all of this mean for flash technology? Flash was widely acknowledged as part of these solutions, both for the enterprise data center and for the cloud data center. But it was seen as one of several storage tiers in the data center—including DRAM, flash storage, hybrid arrays, and hard-disk drives—where workloads would access their data.
On Wednesday, Oct. 1, Fowler mentioned both flash and DRAM in his technology keynote – and presented charts showing the storage hierarchy that Oracle will support across its product portfolio, including the company’s engineered systems, which combine hardware and software.
Certainly, Oracle is already leveraging flash in its high-performing engineered systems. For example, flash storage is inside Oracle’s Exadata engineered system and in several of its branded appliances, including Oracle’s ZFS storage appliance. For these systems, flash is a powerful accelerant for enterprise workloads, supporting fast data transfer for large datasets and high IOPS rates for data-intensive workloads.
What SanDisk® Showed at Oracle OpenWorld
Customers who run Oracle workloads benefit from the inclusion of flash – either in their existing systems or in future system deployments.
At Oracle OpenWorld, SanDisk®’s booth featured eight demo stations that showed a variety of Oracle workloads running on flash-enabled servers. The booth showed the following Oracle workloads: Oracle Data Warehouse, Oracle Backup and Recovery, Oracle NoSQL, Oracle MySQL, and an Oracle ZFS appliance on SPARC-based hardware running at a remote site. Each workload ran on servers or appliances built on SanDisk technology – including Fusion-io appliances (e.g., ION Accelerator) and Fusion-io’s AMP and Atomic Writes software.
For business managers, the operational cost savings are the key takeaway from the demo showcases. Importantly, the solid-state drive (SSD) deployments shown at the booth demo stations typically used fewer drives, took up less data center space, and required less power and cooling than similar deployments based on hard-disk drives (HDDs). For example, one of the demos leveraged just four SSDs to do the same work as a set of 16 HDDs – and the SSDs performed that work faster than the HDD-based system.
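As a rough illustration of that footprint math, the short sketch below compares the two deployments. The per-drive wattage figures are illustrative assumptions for the sketch, not measured numbers from the booth demos – only the drive counts (4 SSDs vs. 16 HDDs) come from the demo itself.

```python
import math

# Back-of-envelope comparison of a 4-SSD deployment vs. a 16-HDD deployment.
# Per-drive wattages and shelf capacity are assumed values for illustration.

def deployment_footprint(drive_count, watts_per_drive, drives_per_shelf=24):
    """Return (total watts, number of drive shelves needed)."""
    watts = drive_count * watts_per_drive
    shelves = math.ceil(drive_count / drives_per_shelf)
    return watts, shelves

# Assumptions: ~6 W per enterprise SSD, ~10 W per spinning HDD.
ssd_watts, ssd_shelves = deployment_footprint(4, watts_per_drive=6)
hdd_watts, hdd_shelves = deployment_footprint(16, watts_per_drive=10)

print(f"SSD deployment: 4 drives, {ssd_watts} W, {ssd_shelves} shelf")
print(f"HDD deployment: 16 drives, {hdd_watts} W, {hdd_shelves} shelf")
```

Under these assumed figures, the SSD deployment draws well under a fifth of the HDD deployment’s power before cooling is even counted – which is the general shape of the savings the demos highlighted.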
If you’re interested in reading more, please download the following two whitepapers:
- Boost Oracle Data Warehouse Performance Using SanDisk SSDs
- Scale Up and Out with Oracle MySQL and SanDisk SSDs
What began with tips-and-techniques for tuning databases and applications back in 1997 has become a discussion about platforms—including delivery of data services over Oracle’s cloud infrastructure (e.g., software as a service (SaaS), platform as a service (PaaS), infrastructure as a service (IaaS) and disaster recovery as a service (DRaaS)).
Oracle has tapped into the energy of these times—when the big forces, or megatrends, along with flash technology, are changing the server/storage/networking infrastructure of the enterprise data center. The OOW conference theme of data center transformation syncs up with what many customers are seeing in their own sites, as user expectations for faster processing, and better business results, are rising.
Now, it’s up to customers to talk with Oracle and its ecosystem partners to find the path that best suits them, given the state of their business and their budgets. Cost to deploy is clearly top-of-mind for IT managers and business managers alike; that’s one reason cloud-based solutions are becoming more widely adopted by enterprises. Customers must make their deployment decisions based on their “comfort zones” for the rate of change in the data center – and on their continuing need to maintain availability and security in a data center that’s often being redesigned from the ground up.