
Category: LINUX

2010-11-22 14:50:07

~s0mbre/blog/devel/fs/index.html

Take it with a grain of salt.



I will write here my personal opinion on filesystems I have either worked with, or have read design notes or at least some documentation for. It may be completely wrong; feel free to comment and I will fix whatever is wrong.

Lustre.
This is a distributed filesystem heavily based on the ext3 codebase, with the following issues:

  • since it is heavily based on ext3, it has all of ext3's problems with large files, fragmentation and other issues
  • absence of any redundancy at either the filesystem layer or the block layer; it requires external storage (like RAID arrays) to handle failures
  • very strong developers (a lot of the former reiser3/4 team)
  • I was asked to work with it :)
GPFS.
A really strong filesystem, originally created on top of an earlier IBM filesystem. The following issues deserve additional highlighting:
  • requires expensive hardware to build the shared storage
  • supports filesystem replication to two disks for both data and metadata
  • really interesting locking techniques used to minimize lock contention
  • in-kernel filesystem, but with a large number of userspace management daemons
  • proprietary
GoogleFS (second version).
I already wrote about it.

Global File System, also known as GFS.
It was briefly described here already.

GlusterFS.
This is a FUSE-based parallel filesystem with the following features:
  • userspace filesystem (see the sketch after this list)
  • the filesystem works on top of a usual in-kernel filesystem and thus can suffer from its problems
  • requires shared storage to handle failures (the developers say that additional redundancy and data striping are a bad idea), although they did implement file replication
  • very bad documentation of the filesystem design, but lots of advertising words
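
Since most of the criticism above follows from the userspace design, here is a minimal sketch of what a FUSE filesystem layered on top of an ordinary in-kernel filesystem looks like. This is not GlusterFS code: the backing path and the pt_* names are mine, it is read-only, and a real translator stack adds networking, striping and replication on top of the same idea.

    #define FUSE_USE_VERSION 26
    #include <fuse.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <limits.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/stat.h>

    /* Hypothetical directory on an ordinary in-kernel filesystem
     * (ext3, XFS, ...) that this userspace filesystem re-exports. */
    static const char *backing = "/var/lib/backing";

    static void fullpath(char out[PATH_MAX], const char *path)
    {
            snprintf(out, PATH_MAX, "%s%s", backing, path);
    }

    static int pt_getattr(const char *path, struct stat *st)
    {
            char p[PATH_MAX];
            fullpath(p, path);
            return lstat(p, st) == -1 ? -errno : 0;
    }

    static int pt_open(const char *path, struct fuse_file_info *fi)
    {
            char p[PATH_MAX];
            fullpath(p, path);
            int fd = open(p, fi->flags);
            if (fd == -1)
                    return -errno;
            fi->fh = fd;
            return 0;
    }

    static int pt_read(const char *path, char *buf, size_t size, off_t off,
                       struct fuse_file_info *fi)
    {
            /* The kernel forwards the read() syscall to this process,
             * which calls back down into the in-kernel filesystem. */
            ssize_t n = pread(fi->fh, buf, size, off);
            return n == -1 ? -errno : (int)n;
    }

    static int pt_release(const char *path, struct fuse_file_info *fi)
    {
            close((int)fi->fh);
            return 0;
    }

    static struct fuse_operations pt_ops = {
            .getattr = pt_getattr,
            .open    = pt_open,
            .read    = pt_read,
            .release = pt_release,
    };

    int main(int argc, char *argv[])
    {
            return fuse_main(argc, argv, &pt_ops, NULL);
    }

Every read makes a round trip from the kernel into this process and back into the kernel filesystem underneath, which is exactly where both the flexibility and the performance worries of this design come from.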
OCFS.
  • shared-disk filesystem which locks access at the block layer
  • scales badly
  • the second version of the filesystem is in the mainline Linux kernel tree
OCFS2 (second version).
  • uses the in-kernel journalling code (and suffers from its problems)
  • uses the in-kernel distributed lock manager (it was actually created for OCFS2 and is not usable by anyone else; it requires somewhat ugly userspace support)
  • no documentation
  • 32-bit filesystem
  • no word about redundancy support; it is likely not supported
CXFS.
SGI clustered XFS filesystem.
  • no redundancy
  • single master server
  • Fibre Channel LUNs only (?)
CODA filesystem.
Development started at CMU before I had even seen a computer, but right now its development is very inactive. The only thing I know about CODA is its really interesting offline capabilities, which are not supported by any other existing network/distributed filesystem.

.
Google-like filesystem with similar limitations:
  • designed for write-once/read-many workloads only
  • good performance for large files only
  • mostly sequential access
  • single metadata server (see the sketch after this list)
  • no redundancy
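
To make the single-metadata-server limitation concrete, here is a toy model of the read path in this class of designs; every name and number in it is invented for illustration, and it is not code from any real implementation. The point is simply that each read begins with a lookup on one central server, so that server caps scalability and is a single point of failure.

    #include <stdint.h>
    #include <stdio.h>

    #define CHUNK_SIZE (64ULL * 1024 * 1024)    /* assumed large chunks */
    #define DATA_NODES 4

    /* The single metadata server: the one authoritative
     * (file, chunk) -> data node map that every client must consult. */
    static int metadata_lookup(const char *file, uint64_t chunk_index)
    {
            uint32_t h = 2166136261u;           /* FNV-1a of the file name */
            for (const char *p = file; *p; p++)
                    h = (h ^ (uint8_t)*p) * 16777619u;
            return (int)((h + chunk_index) % DATA_NODES);
    }

    static void read_file(const char *file, uint64_t offset, uint64_t len)
    {
            while (len > 0) {
                    uint64_t chunk = offset / CHUNK_SIZE;
                    uint64_t skip  = offset % CHUNK_SIZE;
                    uint64_t n     = CHUNK_SIZE - skip;
                    if (n > len)
                            n = len;

                    /* Step 1: metadata RPC to the single server (bottleneck). */
                    int node = metadata_lookup(file, chunk);

                    /* Step 2: bulk sequential read from the chosen data node. */
                    printf("chunk %llu: read %llu bytes from node %d\n",
                           (unsigned long long)chunk,
                           (unsigned long long)n, node);

                    offset += n;
                    len    -= n;
            }
    }

    int main(void)
    {
            /* Read 1.5 chunks of a large, sequentially written file. */
            read_file("/logs/clickstream.0", 0, 3 * CHUNK_SIZE / 2);
            return 0;
    }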
.
  • very small amount of documentation
  • appears to have only a single metadata server plus a caching metadata server
  • userspace filesystem
Ceph.
  • uses an interesting pseudo-random distribution of data among the data nodes (see the placement sketch after this list)
  • allows multiple replicas of the same data
  • has a metadata cluster, but metadata is only partitioned between nodes without additional redundancy
  • good documentation of some design notes and very bad documentation of others
  • userspace storage
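
The attraction of pseudo-random placement is that any client can compute where data and its replicas live instead of asking a directory service. Below is a minimal sketch of one such scheme, rendezvous (highest-random-weight) hashing; it is a stand-in for illustration, not the filesystem's actual placement algorithm, and the node names, object id and hash are arbitrary.

    #include <stdint.h>
    #include <stdio.h>

    /* FNV-1a, seeded so each (object, node) pair gets an independent
     * pseudo-random weight. */
    static uint64_t fnv1a(const char *s, uint64_t h)
    {
            while (*s) {
                    h ^= (uint8_t)*s++;
                    h *= 1099511628211ULL;
            }
            return h;
    }

    #define NODES    5
    #define REPLICAS 2

    int main(void)
    {
            const char *nodes[NODES] = {
                    "node-a", "node-b", "node-c", "node-d", "node-e"
            };
            const char *object = "inode-12345/block-7"; /* made-up object id */

            /* Weight every node for this object. Placement is computed,
             * never looked up, so all clients agree without any central
             * placement table. */
            uint64_t seed = fnv1a(object, 14695981039346656037ULL);
            uint64_t w[NODES];
            for (int i = 0; i < NODES; i++)
                    w[i] = fnv1a(nodes[i], seed);

            /* The REPLICAS highest-weighted nodes hold the replicas. */
            for (int r = 0; r < REPLICAS; r++) {
                    int best = 0;
                    for (int i = 1; i < NODES; i++)
                            if (w[i] > w[best])
                                    best = i;
                    printf("replica %d -> %s\n", r, nodes[best]);
                    w[best] = 0;    /* exclude this node from the next round */
            }
            return 0;
    }

A nice property of this family of schemes is that adding or removing one node only moves the objects that hash highest on that node, rather than reshuffling everything.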
Various AFS clones.
This is a long-dead filesystem.

Parallel NFS.
Although it has not yet been released by any vendor, its specification allows a proprietary protocol to be embedded in the communication core, which means that non-vendor-certified products will not be able to work with this technology.

.
This is a userspace-based filesystem without automatic resync on failure.


P.S. I repeat that this opinion may be damn wrong.