Category: Database Development Technologies
2010-06-17 19:17:07
The following is a comparison, often requested of Oracle, of the Oracle Enterprise Edition database vs. IBM DB2 UDB Enterprise Server Edition and NCR Teradata for business intelligence and data warehousing. This document provides comparisons in the following areas: scalability, performance, parallelism, optimizer, operational impact, manageability, physical design, and availability. These topics are often used for "marketing oriented" messages by the various vendors (including Oracle). However, we are providing information we believe to be factual in this paper, based on our knowledge of Oracle as well as stated intentions, product features, and proof points from our competitors. We encourage our customers to always thoroughly perform their own analyses to validate vendor claims and analysts' opinions.
Scalability
Oracle has long supported a variety of hardware platforms and operating systems, giving customers choice and negotiating leverage. Oracle supports the most scalable 64-bit Symmetric Multiprocessing (SMP) systems and Non-Uniform Memory Access (NUMA) systems today and has supported systems of similar architecture since the early to mid 1990s. These systems support a single database instance and single operating system, yet are proven to scale to tens of Terabytes of data. IBM DB2 UDB began supporting 64-bit SMP and NUMA systems in 2000. NCR Teradata currently supports 32-bit Intel based platforms, with a stated intention of supporting 64-bit Itanium in the future. (Note: Oracle already has a version of Oracle9i and / or Oracle Database 10g available for HP-UX, Linux, and Windows on Itanium.)
Massively Parallel Processing (MPP) systems, supported by Oracle, NCR, and IBM, require more care and feeding due to the multiple operating systems and database instances to manage. MPP is the only large-scalability solution offered by NCR. The IBM SP is the MPP platform targeted by IBM DB2 UDB ESE with Partitioning (formerly EEE). The Teradata and IBM solutions rely on hash partitioning to provide a theoretically even distribution of data across the MPP complex. This introduces MPP management and tuning considerations, since data skew must be minimized during real business queries; these considerations are largely mitigated by the combined shared nothing / shared disk approach of Oracle Real Application Clusters. Oracle Database 10g now has automated striping and mirroring, independent of partitioning or clustering, via the Automatic Storage Manager (ASM).
Thus, Oracle can scale on MPP or clusters, but Oracle also has a long history of scaling on easier to manage single platforms. Oracle’s customers have deployed multi-Terabyte data warehouses on a variety of 64-bit SMP platforms and NUMA platforms and clusters and MPP systems.
NCR customers have deployed large-scale data warehouses successfully only on NCR’s own 32-bit platforms. Many NCR customers go through expensive non-competitive hardware upgrades or, when faced with the cost, entertain proposals from other vendors to try to get reasonable pricing (but at a cost of their own time and effort). IBM’s VLDB examples are also overwhelmingly on IBM platforms, though IBM DB2 is offered on some other hardware platforms (HP, Sun, Linux, Windows).
A few examples of Oracle multi-Terabyte (of data) customers include:
· Amazon.com with 13 TB of data on HP Superdome.
· Acxiom with 6 TB of data on HP AlphaServer cluster.
· AtosEuronext with 1.5 TB of data on RedHat Linux cluster.
· Best Buy with 1.5 TB of data on Sun E-10000.
· Colgate Palmolive with 2.6 TB of data in SAP BW on IBM p690.
· Financial Institution with 5 TB of data on multi-node 206 CPU IBM SP.
· France Telecom with 21+ TB of data on HP V-2500.
· Telecom Italia Mobile with 9.6 TB of data on HP AlphaServer cluster.
Performance
Performance comparisons are extremely difficult to obtain objectively, even in competitive benchmarks, due to differences in hardware configurations, optimization goals, and skills of the individuals involved. TPC-H is the industry benchmark for decision support, and results are posted at 1 TB, 3 TB, and 10 TB of raw data for Oracle, IBM, and NCR Teradata. Oracle's goal here was to work with hardware partners and furnish results that were reasonably deployable in configurations we would feel comfortable recommending to customers. The IBM and NCR approach was to show off expensive hardware configurations of their own platforms. As a result, it is difficult to conclude more than that all three can scale to this benchmark, and that commodity platforms have some advantage when it comes to pricing.
Real customers again provide a more realistic view of what is possible. Richard Winter surveys very large decision support implementations and publishes that survey periodically. In the area of "decision support" for size and workload, Oracle and NCR dominate the lists. Winter has also researched several very large Oracle sites and has noted, for example, that France Telecom has a peak of around 600 concurrent queries.
Philosophies for achieving performance have largely been dependent on what the companies are selling. Oracle sells only software, so its optimization effort has been concentrated on improving the database cost-based optimizer (an effort more than 10 years in the making) and other features. IBM has also focused on improving its cost-based optimizer, and many of the features it has added somewhat mimic Oracle features. NCR Teradata has primarily improved performance through hardware upgrades (the usual solution to performance problems is to sell more nodes), through manual aggregation, or by partnering with MicroStrategy, which creates its own temporary aggregate tables.
Oracle's optimizer enables the use of fewer aggregates where ad-hoc queries occur, and Oracle also offers other space-saving features such as static bitmap indexes. NCR's bitmap support is much more limited and has not typically been leveraged for deployments. Oracle has also implemented sophisticated table compression techniques since Oracle9i Release 2 that have reduced disk requirements for data at some companies by a factor of two or more.
Oracle also focuses on minimizing the memory resources required. For example, a feature that first appeared in Oracle9i allocates memory at query time, resulting not only in more scalable performance but also, typically, in lower memory consumption.
Parallelism
At the heart of Oracle's scalable, parallel decision support strategy lies a dynamic parallelism approach. This approach allows a straightforward, transparent path to parallelism without needing static table partitions. As a result, data management tasks are significantly reduced and hardware is utilized to its full potential. Where Oracle's Partitioning Option is desired for data maintenance, Oracle does support parallelism across partitions, including parallel DML. Oracle also supports parallelism within each partition for queries and inserts.
By comparison, both IBM (DB2 UDB EEE) and NCR Teradata initially chose to use partitioning as the means to enable parallelism. NCR finally delivered range partitioning capability in Version 5, and IBM proposes a "UNION ALL" workaround. Interestingly, IBM chose not to take this approach for DB2 on OS/390 (z/OS), where the approach is shared disk for the Sysplex and where range partitioning is used for management.
An advanced parallel-aware query optimizer enhances Oracle's parallel approach. This cost-based optimizer takes parallel attributes into account when determining optimal execution plans. Combined with partition-level optimization, this solution allows for greater VLDB manageability, easier administration, and higher availability.
An additional set of Dynamic Parallel Query optimizations is used in cluster and MPP environments to adapt to and fully exploit so-called "shared nothing" machine architectures (including the use of function shipping techniques).
Scalable, parallel Oracle data management features include:
· Tablespace creation
· Data loading
· Index creation and rebuilds
· Table creation (e.g. summary creation)
· Partition maintenance operations (e.g. move, split)
· DML (insert, update, delete)
· Integrity constraints
· Statistics gathering for the cost-based optimizer
· Query processing
· Table scans
· Nested loops
· Sort merge join
· Group by
· NOT IN subqueries (anti-join)
· User-defined functions
· Index scans
· Select distinct, UNION and UNION ALL
· Hash join, order by, aggregation
· Bitmap star joins
· Partition-wise joins
· Backup / restore
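As a rough sketch of how this dynamic parallelism is typically invoked, parallelism can be requested at the statement level without any static partitioning. The table and column names below are purely illustrative, not from any deployment described in this paper:

```sql
-- Illustrative only: hypothetical SALES table.
ALTER SESSION ENABLE PARALLEL DML;

-- Parallel creation of a summary table (parallel CREATE TABLE AS SELECT)
CREATE TABLE sales_summary PARALLEL NOLOGGING AS
  SELECT product_id, SUM(amount) AS total_amount
  FROM   sales
  GROUP BY product_id;

-- Parallel query via a hint, with no static partitioning required
SELECT /*+ PARALLEL(s, 8) */ COUNT(*)
FROM   sales s;
```

The same statements run unchanged on an SMP, cluster, or MPP configuration; the degree of parallelism is a runtime attribute rather than a consequence of physical layout.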
Optimizer
As noted in the previous section regarding performance, Oracle's cost-based optimizer dates back to Oracle7 and is about 10 years old. Over this time period, Oracle has continued to add optimization techniques and improve the intelligence of the optimizer, such that hints have become much less frequently needed. Oracle has announced plans to desupport the rule-based optimizer as of Oracle Database 10g.
Business analysts typically want to ask questions such as "How many products sold in a certain geography over a certain period of time?" The standard approach to modeling the database to match such queries is the star schema: a large transaction or fact table surrounded by multiple dimension or look-up tables. Oracle's cost-based optimizer has recognized the star schema since Oracle7. Oracle 7.3 introduced Cartesian product joins to solve such queries, and Oracle8 added a parallel bitmap star join technique. Interestingly, IBM also made solving the star schema a focus of its cost-based optimizer. NCR Teradata has often claimed that third normal form is the right modeling approach for any query, but their customers usually end up building data marts around the Teradata system to solve these types of queries, particularly if On-line Analytical Processing (OLAP) is required. A number of Teradata customers use Oracle Express and the Oracle database exactly for this purpose.
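A minimal illustration of the kind of star-schema query discussed above; all table and column names here are hypothetical:

```sql
-- Hypothetical star schema: the SALES fact table joined to three
-- dimension (look-up) tables, with selective filters on the dimensions.
SELECT t.quarter, g.region, SUM(s.amount) AS total_sales
FROM   sales s, times t, geography g, products p
WHERE  s.time_id     = t.time_id
AND    s.geo_id      = g.geo_id
AND    s.product_id  = p.product_id
AND    p.category    = 'Oral Care'
AND    t.fiscal_year = 2003
GROUP BY t.quarter, g.region;
```

With the bitmap star join technique described above, the optimizer can answer such a query by combining bitmap indexes on the fact table's foreign key columns rather than scanning the entire fact table.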
The optimizer in Oracle8i added recognition of "materialized views", a hierarchy of summary tables in the RDBMS to which queries can be transparently redirected at a summary level, resulting in much better query performance. Oracle implemented this capability for fact and dimension tables. IBM took a similar approach with DB2 UDB, but supports only fact tables (since Version 7). NCR does not have similar SQL rewrite capabilities, though they sometimes refer to their join index capabilities as materialized views. Note that Oracle also has a "Summary Advisor", in Enterprise Manager and in the Oracle Discoverer tool that is part of the Oracle Application Server, to recommend where summary tables might be warranted.
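A hedged sketch of the materialized view capability described above (the table and view names are hypothetical, and refresh options vary by version):

```sql
-- Hypothetical summary as a materialized view. With QUERY REWRITE
-- enabled, a query that aggregates the SALES detail table by time and
-- product can be transparently redirected to this smaller summary.
CREATE MATERIALIZED VIEW sales_summary_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  ENABLE QUERY REWRITE
AS
  SELECT time_id, product_id, SUM(amount) AS total_amount
  FROM   sales
  GROUP BY time_id, product_id;
```

The redirection is the key point: end users and tools keep querying the detail table, and the optimizer decides when the summary can answer the question.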
The costs computed by Oracle's cost-based optimizer for queries can now be leveraged in Oracle's Database Resource Manager. Cost limits can be assigned to groups of users (or "query consumers"), preventing poorly thought-out queries from interfering with the queries of other users who also desire performance. IBM and NCR have taken a tools-based approach, external to the database, to solve this problem.
It is worth noting that Oracle also fully embeds an OLAP Option and a Data Mining Option in the database. This enables very sophisticated analysis where the data resides, instead of within a business intelligence tool, and can greatly improve performance. The OLAP Option is accessible via SQL or a Java API. The Data Mining Option is accessible via a Java API.
Operational Impact
In many organizations, one Oracle DBA typically manages several databases. For large-scale decision support, the following implementations provide an Oracle proof point regarding how few DBAs are often actually needed:
· Acxiom, 16 TB database, 2 DBAs
· Acxiom, 6 TB database (RAC), 2 DBAs
· Amazon.com, 16 TB database, 2 DBAs
· France Telecom, 29+ TB database, 2 DBAs
· Telecom Italia Mobile, 12 TB database (OPS), 3 DBAs
· WestPac, 2.3 TB database, 2 DBAs
The number of DBAs is usually more a function of the use of the automated tools that are available and provided. Installations leveraging Oracle Enterprise Manager and its monitoring capabilities have usually automated much of the day-to-day management. IBM DB2 UDB is taking a similar approach with its Control Center. NCR Teradata tends to be more manual in terms of management, but this cost is often hidden by on-site NCR Teradata Systems Administrators and Field Engineers and only becomes apparent when maintenance costs are explored.
Another consideration regarding impact on operations is the availability of skills for the technologies being considered. Since Oracle appears in far more data warehousing / decision support implementations (most analysts say about 30-50 percent), far more Oracle-skilled consultants and designers exist. Oracle customers often leverage their own internal skills for such implementations. Those looking for consulting assistance can find it at Oracle, at Big Four consulting companies, and at many specialized second-tier consulting companies. As a result, Oracle project managers can draw on a wide choice of skilled resources at a variety of price levels. For example, a recent search on "Monster.com" showed over 5,000 entries for individuals with Oracle skills, but a much smaller number listing "DB2 UDB" or Teradata (many of whom also listed Oracle). You might consider doing your own search for Oracle, IBM DB2 UDB, and NCR Teradata skills prior to making a decision.
Manageability
Oracle typically requires a similar number of DBAs to NCR Teradata and IBM DB2 UDB (see previous section) for management of very large decision support systems / data warehouses. As noted previously, administration of the highly tunable Oracle database is eased through the use of Oracle Enterprise Manager, which provides an HTML-based console for managing multiple Oracle servers. The interface is used not only for basic management of database instances and security, but also for setting up advanced features such as partitioning, server-managed backup, and replication. Packs for Oracle Enterprise Manager include:
Oracle Database and Application Server Diagnostics Packs: Monitor, diagnose, and maintain the health of databases, operating systems, and Application Servers. Both historical and real-time analyses are used to automatically avoid problems before they occur. The packs provide powerful capacity planning features that enable users to easily plan and track future system resource requirements.
Oracle Tuning Pack: Optimizes system performance by identifying and tuning major database and application bottlenecks such as inefficient SQL, poor data structures, and improper use of system resources. The pack proactively discovers tuning opportunities and automatically generates the analysis and required changes to tune the system.
Oracle Change Management Pack: Helps eliminate errors and loss of data when upgrading databases to support new applications. The pack analyzes the impact and complex dependencies associated with application change and automatically performs database upgrades. Users initiate change safely with easy-to-use wizards that teach the systematic steps necessary to upgrade.
Oracle Database and Application Server Configuration Management Packs: Help track software and hardware configurations and provide a patch management mechanism.
Oracle's focus is on building an increasingly self-tuning and self-managing environment. For example, Oracle's advanced optimizer enables sophisticated ad-hoc queries without extensive use of aggregates. Oracle's star schema optimization enables on-line analytical processing tools to work well against Oracle tables. Built-in analysis extensions to SQL (based on the ANSI SQL analytic standard) enable analysis in the database engine. While IBM has provided similar capability, Oracle believes it has a significant edge over both IBM and NCR Teradata in optimizer sophistication and flexibility. Oracle's database also has many self-tuning capabilities to minimize DBA effort, such as automatic row-level locking to maintain data integrity, automatic degree of parallelism at query time based on system load, and automatic summary table refresh.
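A small illustration of the ANSI analytic extensions mentioned above, computed inside the database engine against a hypothetical schema:

```sql
-- Hypothetical SALES table: a rank over all rows and a per-region
-- total, computed in a single pass by the engine rather than in a
-- business intelligence tool.
SELECT region,
       product_id,
       amount,
       RANK() OVER (ORDER BY amount DESC)     AS sales_rank,
       SUM(amount) OVER (PARTITION BY region) AS region_total
FROM   sales;
```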
A feature first introduced in Oracle9i is automatic management of rollback segments utilizing an UNDO tablespace. The length of time to keep undo information can be specified, and a "Flashback Query" is possible. A Flashback Query is submitted to return results from the database as it appeared in the past. Flashback Query is particularly valuable if user error has caused a loss of current valid data. These features are further automated in Oracle Database 10g through the introduction of a Segment Advisor and new single-command Flashback capabilities.
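A sketch of what such a Flashback Query might look like, using the AS OF form (the table and predicate are hypothetical):

```sql
-- Hypothetical Flashback Query: return the ORDERS rows as they
-- appeared one hour ago, e.g. to recover data lost through user error.
SELECT *
FROM   orders AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR)
WHERE  customer_id = 101;
```

The query can only reach as far back as the retention period configured for the UNDO tablespace.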
For managing different usage requirements, Oracle8i introduced a Database Resource Manager to set levels of service and allocate the percentage of CPU time and degree of parallelism that users or sets of users can utilize. Oracle9i added proactive query governing, automatic queuing (based on administrator-defined limits on the number of active sessions), and dynamic re-prioritization to other resource groups.
Further aiding the management of the database in complex query environments are features that are increasingly making the database "self-tuning". Oracle8i introduced an adaptive degree of parallelism to automatically set the degree of parallelism for a query at the time the query is submitted. Oracle9i introduced automatic memory tuning to dynamically allocate runtime memory based on a query's requirements.
Oracle Database 10g features many new ease-of-management capabilities. Statistics are automatically captured in a repository and analyzed using the Automatic Database Diagnostics Monitor (ADDM). Advisors present recommendations created by ADDM. An example is the SQL Tuning Advisor, which can recommend better plans that are then registered with the optimizer for future execution when the "bad" SQL is resubmitted.
NCR Teradata has yet to respond, other than to try to position deployment of data in third normal form schemas as the magic answer. IBM has recognized a need to provide a self-managing and self-tuning solution similar to Oracle's for DB2 UDB and has announced a "SMART" initiative. IBM's approach so far is largely tools-based and looks a lot like many of the capabilities and options in Oracle's Enterprise Manager. The message IBM is delivering for the future is that DB2 UDB will also become increasingly self-managing and self-tuning.
Oracle also realizes that many companies consider availability a key success criterion for effective database management. This paper addresses availability considerations in a subsequent section, including leveraging Oracle's range and list partitioning.
Another aspect of management is metamodel, or metadata, management. This is often a challenge due to the lack of common metadata definitions across the various tools. Oracle supports OMG's Common Warehouse Metamodel (CWM) and the use of a common repository. Oracle Warehouse Builder enables viewing of metamodels stored within the Warehouse Builder repository for determining data lineage and performing impact analysis. IBM is also a key member behind OMG's CWM and has implemented a similar strategy. NCR Teradata was on the review board, but its implementation direction is unclear. NCR's most important tools partner, MicroStrategy, has so far indicated it has no plans to take part in CWM.
Both Oracle and IBM DB2 UDB are positioning to manage any data at any scale for any application. Structured data represents only part of the critical data in an organization. Spreadsheets, word processing documents, video clips, newspaper articles, press releases, and geographic descriptions are only a few of the unstructured objects that may need to be integrated and managed in a central repository. Oracle is actively responding to these needs with a host of advanced server functions supporting these additional data types. These offerings include:
· interMedia
  · Image extends the ability to store and retrieve images
  · Audio extends the ability to store and retrieve audio clips
  · Video extends the ability to store and retrieve video clips
  · Locator extends the ability to retrieve data linked to coordinates
· Text extends the ability to retrieve documents and the gist of documents
· Native XML storage
· Spatial Data Option
  · Provides a means to link data to coordinates
  · Typically leveraged by partner vendors of Geographic Information Systems (GIS)
IBM’s approach has been to offer “extenders” to handle many of these data types. NCR Teradata has largely ignored this area, though even NCR recognizes the importance of XML and at least has a partner identified to help it with these data types.
Physical Design
Physical topologies for business intelligence and data warehousing vary from implementation to implementation. Some companies, particularly those doing predictable reporting, use a single platform (enterprise data warehouse) and rely on summarizations or aggregates where necessary. Some also deploy data marts where summarizations are relevant for only a single business area, where performance needs to be guaranteed for that business area, or where the workload is significantly different (e.g. sophisticated OLAP or data mining). Operational data stores also come into the picture where the primary need is consolidated reporting.
Oracle continues to see customers of all major enterprise data warehouse databases (Oracle, IBM DB2 UDB, NCR Teradata) also deploying data marts. However, as the sophistication of the decision support and data warehousing database technology grows (particularly for Oracle!), data marts frequently co-exist in the same database with traditional enterprise data warehouses and operational data stores. This is particularly evident in talking with many of Oracle's largest customers, where the deployment model is neither pure third normal form nor pure star schema. Instead, many are deploying a hybrid approach. (For example, a poll of 20 of Oracle's most sophisticated data warehouse customers a couple of years ago indicated only 4 were using pure third normal form, and most were taking the hybrid approach.) So, companies are increasingly deploying multiple logical models in a single physical database and consolidating.
NCR Teradata has claimed, at various times, that Teradata solutions do not require data marts, thus further simplifying operational management. This is because many Teradata systems are used for consolidated (and planned) reporting, not true ad-hoc query and analysis. Also, users of many Teradata systems use a tool named MicroStrategy that creates temporary tables (or virtual data marts) on the same platform. In fact, Oracle customers using MicroStrategy also often do not deploy separate data marts (however, such a solution is also not without tradeoffs). We believe a careful analyst would find that many NCR Teradata sites do deploy data marts, sometimes with Oracle! These are particularly common where the site is doing more sophisticated analysis. Many of the tools themselves expect a star schema, and performance and usability suffer if that model is not created. France Telecom had data marts around their Teradata system; the Teradata system has since been repositioned as a data mart deployed off of an Oracle enterprise data warehouse.
Based on discussions at public forums, we believe that IBM DB2 and DB2 UDB representatives and consultants are also positioning deployment and physical design options similar to the Oracle positioning presented here, though IBM will sometimes initially present a more federated approach that relies heavily on integration and integration consulting services. We believe that Oracle's proven flexibility in large-scale deployments of all types gives Oracle a decided advantage. We discuss this further in the "Future Considerations" section below.
Availability
As data warehouses and decision support systems are increasingly leveraged for tactical as well as strategic planning, highly available systems become a necessity. For global companies, the required availability becomes 24x7x365. Amazon.com is one such example, with 5 on-line bookstores around the world and a multi-TB data warehouse / data store meeting not only the query requirements of business analysts, but also feeding information to other operational systems.
One key Oracle technology leveraged by Amazon.com and many other Oracle based customers with very large data warehouses is the Oracle Partitioning Option. Oracle’s Partitioning enables data partitions based on business value ranges for administrative flexibility. As an example of administrative flexibility, consider the common data warehousing requirement for “rolling window” operations - adding new data and removing old data based on time. Oracle’s partitioning allows you to add a new partition, load and index it in parallel and optionally remove the oldest partition, all without any impact on existing data and with uninterrupted access to other existing data. The combination of dynamic parallelism and independent data partitioning gives Oracle customers the ability to use partitioning for efficient data management without dictating and limiting parallelism. Thus, the administrative burden of managing partitions that can become unbalanced due to unanticipated growth is avoided. Hash partitioning is also supported and can be implemented in order to spread data evenly based on a hash algorithm. Oracle believes it is best used within range partitions (composite partitioning) maintaining manageability while possibly improving performance.
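The rolling-window maintenance described above can be sketched as follows; the table and partition names are hypothetical:

```sql
-- Hypothetical rolling window on a range-partitioned fact table:
-- add the newest month, then remove the oldest, without interrupting
-- access to the remaining partitions.
ALTER TABLE sales ADD PARTITION sales_2004_01
  VALUES LESS THAN (TO_DATE('2004-02-01', 'YYYY-MM-DD'));

ALTER TABLE sales DROP PARTITION sales_2003_01;
```

The new partition can be loaded and indexed in parallel before queries are directed at it, so the window rolls forward as a pair of metadata operations rather than a bulk delete.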
Oracle9i introduced "List" partitioning, allowing the DBA to create discrete domains, such as geographies or product categories, using partitioning. Oracle9i Release 2 added a composite partitioning method named "Range-List", providing list groupings within ranges.
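A minimal sketch of list partitioning; the table, columns, and region codes are hypothetical:

```sql
-- Hypothetical list-partitioned table: each partition holds a discrete
-- domain of region codes rather than a range of values.
CREATE TABLE customers (
  customer_id NUMBER,
  region      VARCHAR2(2)
)
PARTITION BY LIST (region) (
  PARTITION p_emea VALUES ('UK', 'FR', 'DE'),
  PARTITION p_amer VALUES ('US', 'CA', 'MX')
);
```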
As noted previously, only IBM DB2 on OS/390 (z/OS) built somewhat similar partitioning capability for management. Other databases, such as IBM DB2 UDB ESE with Partitioning and NCR Teradata, took a shortcut in implementing parallelism by basing it upon partitioning. Both are now trying to retrofit the management capability, and Teradata has introduced some range partitioning capability.
While the Partitioning Option is very useful for implementing 24x7x365 data warehousing, many companies also add other components to ensure high availability. Often, clusters of two (or more) systems or nodes provide a backup system for fail over. To enable very fast or even transparent fail over for users, many utilize the Oracle Real Application Clusters (RAC) software, a replacement for the Oracle Parallel Server that existed prior to Oracle9i. This software can also be used to provide a single scalable database across a large number of systems or nodes in a "grid" strategy.
Both IBM DB2 UDB ESE and NCR Teradata have designed highly available fail over solutions. However, the shared nothing approach of UDB with Partitioning and Teradata means that when a single node fails, the surviving node will process about twice its original workload (assuming the data is evenly spread for that particular query). Oracle believes that such a performance impact is not desirable.
Future Considerations
We also think it important to provide an indication of where we think the future may lead. Most analysts believe the future of business intelligence and data warehousing holds the promise of more demand for near real-time information. Oracle customers have used Oracle's facilities for replication, Advanced Queuing (AQ), and / or transportable tablespaces (tablespaces copied from one database to another without import / export) to accomplish near real-time data movement. Oracle9i bundles queuing and log-based replication features together as "Streams", while simplifying management through Enterprise Manager. Oracle Database 10g adds transportable tablespaces supported across heterogeneous systems running Oracle and a new high-speed Data Pump. IBM DB2 UDB shows indications of being able to meet requirements for near real-time data movement as well, though IBM's replication may not be quite as advanced as Oracle's, and its queuing mechanism, MQSeries, is a separate middleware product. NCR Teradata has been forced to partner to meet this requirement (hence an agreement with TIBCO).
As the data feeds get more and more frequent, the database begins to handle updates in a manner not unlike OLTP databases. Oracle believes it is uniquely able to fulfill this need as Oracle already has many extremely large OLTP customers using the same database engine today. Oracle’s architecture is such that reads never block writes and writes never block reads. IBM DB2 UDB was originally targeted more toward decision support and data warehousing with the intent not to compete with DB2 on IBM mainframes and AS/400s. IBM is now working to improve OLTP characteristics of DB2 UDB, though IBM’s lock escalation will be a challenge. In our opinion, NCR Teradata will be the most challenged to make this transition as NCR has consistently positioned the need to have a special purpose data warehouse / decision support engine that is not the same as the OLTP engine.
Eventually, it may be possible to do all business intelligence using a single platform, and the OLTP and decision support / data warehousing distinctions will disappear. This will require more advances in commodity hardware (and declining prices of key components such as CPU and memory) and software (with even more sophisticated self-managing and self-tuning). Oracle Database 10g with Grid computing support might eventually be used in such a fashion. Such an evolution would be consistent with past technology changes.
Summary of Key Differences
In order to better summarize how we see Oracle Enterprise Edition vs. IBM DB2 UDB ESE (on UNIX and Windows) and NCR Teradata, we’ve prepared a couple of tables comparing key features and indicators using information publicly available from all three vendors. The first table includes comparisons related to scalability and performance:
Feature | Oracle | IBM DB2 UDB | NCR Teradata
Static bitmap indexes | Yes | No | Limited
Materialized views | Yes, fact & dimension with SQL rewrite, bitmap join indexes | Yes, fact with SQL rewrite | Join indexes only, no automatic SQL rewrite
ANSI SQL analytic functions | Yes | Yes | Some analytics (standards compliance unclear)
OLAP in database | Yes, in RDBMS with SQL or Java API | Partial, with DB2 OLAP | No, typically partner
Data mining | Yes, in RDBMS with Java API | Yes, with extender | Yes
NUMA, 64-bit platform support | Yes, 64-bit since early 1990s, NUMA since mid-1990s | Yes, since 2000 | No, 64-bit in future
MPP, cluster support | Yes, clusters since 1980s, MPP since 1991 | Yes, since 1995 | Yes, MPP; limited Windows cluster support since 2001
Multi-TB customers | Yes, on HP Alpha, HP, IBM, Linux, Sun | Yes, on IBM | Yes, on NCR
The following table includes comparisons related to management, availability, and loading:
Feature | Oracle | IBM DB2 UDB | NCR Teradata
Range, List partitioning | Yes | No, UNION ALL workaround | Range since V2R5
Resource manager | Yes, CPU utilization, query cost, queues & reprioritization in RDBMS | External tool | Priority scheduler
Summary advisor | Yes | No | No
Sub-minute fail over | Yes, with RAC | No | Doubles disk
Extended security (ACL, row label…) | Yes | No | No
Traditional ETL support | Yes, in RDBMS & Warehouse Builder | Yes, in Data Warehouse Center | High-speed load optional; Informatica
Queuing | Yes, AQ / Streams in database | Yes, MQSeries middleware | Partner solution from TIBCO
Extended data types | Yes, included in RDBMS | Yes, with extenders | Limited partner solutions
XML in database | Yes | Yes | Partner solution
OMG CWM metamodel standard | Yes | Yes | No