Oracle's new in-memory feature 'unfocused, hardware-centric and expensive' says Exasol CEO
Rivals respond to Oracle's announcement that new in-memory capability is to be released in July
Oracle officially launched its in-memory option for Oracle Database 12c last week, having announced the feature at OpenWorld in September 2013. During that event CEO Larry Ellison boasted that, for certain workloads, processing could be "100 times faster".
At the time the announcement raised a few eyebrows. Wasn't this the same Larry Ellison who had described in-memory databases as "wacko" just a few short years before? Wasn't this the same Larry Ellison who accused executives at rival SAP of being on drugs when they described the firm's in-memory database HANA as an alternative to Oracle?
More seasoned observers were not so surprised. Ellison is something of a specialist in eating his own words. After all, this was the same Larry Ellison who described cloud computing as "complete gibberish" and "insane" before jumping on the bandwagon himself once the trend became unstoppable.
"They'll take whatever's new, they'll think about it, they'll let people go ahead as far as they get, and then at one point they'll say ‘Fine, we'll do it, too', and when they do that, they drop all the cost against it," Constellation Research analyst Ray Wang told Computing, describing Oracle's approach as "commoditising innovation".
The in-memory feature will ship in July as part of release 12.1.0.2 of Oracle Database 12c. Because the capability is integrated with the existing database, it should raise fewer compatibility issues with current applications, something Oracle says will make it easier to adopt than SAP HANA, where recoding of applications is likely to be necessary.
However, Aaron Auld, CEO of Germany-based Exasol, which has been producing in-memory databases with a focus on analytics since 2000, says that trying to pile all functionality into one database makes it a jack of all trades and a master of none.
"It is great that Larry Ellison has finally accepted the in-memory paradigm as the way to the future. At the same time Oracle is still trying to sell us on the hybrid concept of one database for both OLTP and analytics. Oracle is still in denial that in-memory is not a constraint, but rather very resilient, very multimode capable and highly scalable," Auld told Computing.
Meanwhile, SAP has disputed Oracle's assertion that HANA requires most applications to be recoded.
"Many third-party applications and BI tools run transparently with SAP HANA (1,200+ start-ups), as well as custom SQL and MDX-based application," the company said in a fact sheet on its HANA blog.
Another feature Oracle is keen to push is tiered storage: data can be held across memory, flash and disk. The hottest data, that which is most likely to be queried, can be kept in memory, while colder data is relegated to flash or disk, an architecture that could speed up common analytical tasks. In HANA, by contrast, all data has to be held in memory.
Again, SAP claims that Oracle's assertions of superiority are false and that HANA already "offers transparent data tiering with Smart Data Access".
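Whatever the vendors' competing claims, tiering of this kind comes down to a placement policy: track how often each piece of data is accessed and keep only the most frequently queried items in the fastest tier. The Python sketch below illustrates the general idea only; the TieredStore class, its tier names and its thresholds are invented for this example and bear no relation to Oracle's or SAP's actual implementations.

```python
from collections import Counter

class TieredStore:
    """A toy hot/warm/cold placement policy, for illustration only.

    Items qualify for faster tiers as their access counts grow:
    memory (hottest) -> flash (warm) -> disk (cold). The tier names
    and thresholds are invented for this sketch.
    """

    # Hypothetical thresholds: minimum accesses to qualify for each
    # tier, checked fastest-first.
    TIERS = [("memory", 100), ("flash", 10), ("disk", 0)]

    def __init__(self):
        self.accesses = Counter()  # per-key access counts
        self.data = {}             # key -> value backing store

    def write(self, key, value):
        self.data[key] = value

    def read(self, key):
        """Record an access and return the stored value."""
        self.accesses[key] += 1
        return self.data[key]

    def tier_for(self, key):
        """Return the fastest tier whose threshold this key's count meets."""
        count = self.accesses[key]
        for tier, threshold in self.TIERS:
            if count >= threshold:
                return tier
        return "disk"  # unreachable given the 0 threshold; kept for safety

store = TieredStore()
store.write("sales_current_quarter", {"revenue": 1_000_000})
store.write("sales_archive_2009", {"revenue": 250_000})
for _ in range(150):
    store.read("sales_current_quarter")       # queried constantly -> hot
store.read("sales_archive_2009")              # queried once -> stays cold
print(store.tier_for("sales_current_quarter"))  # memory
print(store.tier_for("sales_archive_2009"))     # disk
```

Real databases make the same decision with far more sophistication, but the trade-off is the one both vendors are arguing over: fast tiers are expensive and scarce, so something has to decide what lives there.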
Oracle has not yet revealed pricing for the in-memory capability, but Auld suggests that customers will be hit by hidden hardware costs. Rather than running on cheap commodity hardware, as Exasol's and other vendors' offerings do, he believes the solution will need to run on Oracle's own hardware to achieve the claimed performance benefits.
"Oracle's response is a solution which is plainly hardware overkill. According to the TPC-H [the performance benchmark] Oracle already has an astronomical hardware price tag and this will surely push that dubious record to new heights. The result is an unfocused, hardware-centric and expensive solution that is easily outperformed by mature in-memory products on the market today," Auld said.