you tell me. Honestly - you are the only one that can answer this. (and
by the way, "11 million rows" is meaningless - that gives NO clue as to
the size; we could be talking 100MB or 100GB or a terabyte - but we don't
know.... everyone needs to stop saying "X rows" and start saying "Y
bytes")
You now know the physics behind partitioning.
You now know what you can hope to achieve:
o increased availability - by spreading the data out over many
partitions - if you suffer a media failure, only a small subset of the
data might become unavailable.
is this relevant to you, does this apply in your case, do you need
this, is this something you need to design into your system, is this
your major number one goal?
if not, move onto the next point
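just to make the availability point concrete - a sketch of what that can look like (the table, column and tablespace names here are made up for illustration, they are not from your question):

```sql
-- spread the data over several tablespaces; if one datafile is lost,
-- only that partition's slice of the data is unavailable
create table sales
( sale_id    number,
  sale_date  date,
  amount     number
)
partition by range (sale_date)
( partition p2022 values less than
    (to_date('01-jan-2023','dd-mon-yyyy')) tablespace ts_2022,
  partition p2023 values less than
    (to_date('01-jan-2024','dd-mon-yyyy')) tablespace ts_2023,
  partition p2024 values less than
    (to_date('01-jan-2025','dd-mon-yyyy')) tablespace ts_2024
);
```

note that this only buys you availability if the partitions really do live in different tablespaces on different media - range partitioning into a single tablespace gives you none of this.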
o ease of administration - by making the things you manage smaller.
is this relevant to you, do you purge data by date, would a range
partition on date make sense, do you need to reorganize data/indexes
for whatever reason, would the fact that the big table is now a series
of small tables be relevant to you, would it make your life better?
if not, move onto the next point
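if you do purge by date, this is where partitioning shines - a big, slow, redo-generating DELETE becomes a small DDL statement. a sketch (same assumed sales table as above, names are illustrative):

```sql
-- instead of:
--   delete from sales
--    where sale_date < to_date('01-jan-2023','dd-mon-yyyy');
-- you simply age out the whole partition:
alter table sales drop partition p2022 update global indexes;
```

the "update global indexes" clause keeps any global indexes usable during the drop, at the cost of doing the index maintenance right then; if you have only local indexes, you can leave it off.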
o improved DML performance. For a warehouse - partition pruning. For OLTP - possibly reduced contention.
are you a warehouse that could make use of this? are you a
transactional system suffering from contention on a single segment -
be that a hot right-hand side of an index populated by a DATE or a
SEQUENCE, or a table with massive concurrent inserts?
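to make those two cases concrete - a sketch (table and column names are assumptions for illustration):

```sql
-- warehouse case: partition pruning. against the range-partitioned
-- sales table above, the optimizer can touch ONLY the p2024 partition:
select sum(amount)
  from sales
 where sale_date >= to_date('01-jan-2024','dd-mon-yyyy');

-- OLTP case: hash partitioning spreads concurrent inserts on a
-- sequence-populated key over many segments:
create table orders
( order_id  number,        -- populated by a sequence
  payload   varchar2(100)
)
partition by hash (order_id) partitions 8;

-- with a LOCAL index, the one hot right-hand side of the index
-- becomes 8 separate, much cooler, insertion points:
create index orders_pk_idx on orders (order_id) local;
```

note the OLTP benefit comes from the local index structure, not the table partitioning by itself - a global index on order_id would still have a single hot right-hand side.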