
Category: LINUX

2009-04-15 15:42:49

Now Linux 2.6.27 is running on ZPF with UBI/UBIFS enabled.

I've provided the changeset to Zheng, and you should get the latest kernel with ZPF support soon.

 

To build a kernel image for ZPF, copy $(Linux 2.6.27)/arch/arm/configs/ochaya1050_defconfig to .config, then run make menuconfig and make uImage.

 

Following are the steps to enable UBI/UBIFS support:

1. Compile UBI/UBIFS as modules

2. After "make uImage" finishes, run "make modules" and "make modules_install INSTALL_MOD_PATH=$(YOUR_NFSROOT_PATH)"
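The build and module steps above can be sketched as a small script. KDIR and NFSROOT are placeholder paths I've assumed, not from the original mail; with DRY=1 (the default here) it only prints the commands instead of running them:

```shell
#!/bin/sh
# Sketch of the kernel + UBI/UBIFS module build flow described above.
build_and_install() {
    KDIR=${KDIR:-linux-2.6.27}          # kernel source tree (placeholder)
    NFSROOT=${NFSROOT:-/srv/nfsroot}    # NFS root path (placeholder)
    DRY=${DRY:-1}                       # DRY=1 prints commands instead of running them
    run() { echo "+ $*"; [ "$DRY" = 1 ] || "$@"; }

    run cp "$KDIR/arch/arm/configs/ochaya1050_defconfig" "$KDIR/.config"
    run make -C "$KDIR" menuconfig      # enable UBI/UBIFS as modules here
    run make -C "$KDIR" uImage
    run make -C "$KDIR" modules
    run make -C "$KDIR" modules_install INSTALL_MOD_PATH="$NFSROOT"
}
build_and_install
```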

 

Following are the steps to bring up UBIFS with the on-board NAND:

1. Bring up ZPF

2. Check the MTD partition info

root@ochaya1050:~# cat /proc/mtd

dev:    size   erasesize  name

mtd0: 00020000 00010000 "u-boot"

mtd1: 00020000 00010000 "params"

mtd2: 00200000 00010000 "kernel"

mtd3: 005c0000 00010000 "root"

mtd4: 20000000 00020000 "test"

3. Initialize the NAND partition

root@ochaya1050:~# flash_eraseall /dev/mtd4

Erasing 128 Kibyte @ b00000 --  2 % complete.

Skipping bad block at 0x00b20000

Erasing 128 Kibyte @ 98c0000 -- 29 % complete.

Skipping bad block at 0x098e0000

Erasing 128 Kibyte @ e9c0000 -- 45 % complete.

Skipping bad block at 0x0e9e0000

Erasing 128 Kibyte @ eb80000 -- 45 % complete.

Skipping bad block at 0x0eba0000

Erasing 128 Kibyte @ 151e0000 -- 65 % complete.

Skipping bad block at 0x15200000

Erasing 128 Kibyte @ 16460000 -- 69 % complete.

Skipping bad block at 0x16480000

Erasing 128 Kibyte @ 19300000 -- 78 % complete.

Skipping bad block at 0x19320000

Erasing 128 Kibyte @ 19440000 -- 78 % complete.

Skipping bad block at 0x19460000

Erasing 128 Kibyte @ 1ffe0000 -- 99 % complete.

4. Load the UBI module

root@ochaya1050:~# modprobe ubi mtd=4

UBI: attaching mtd4 to ubi0

UBI: physical eraseblock size:   131072 bytes (128 KiB)

UBI: logical eraseblock size:    126976 bytes

UBI: smallest flash I/O unit:    2048

UBI: VID header offset:          2048 (aligned 2048)

UBI: data offset:                4096

UBI: empty MTD device detected

UBI: create volume table (copy #1)

UBI: create volume table (copy #2)

UBI: attached mtd4 to ubi0

UBI: MTD device name:            "test"

UBI: MTD device size:            512 MiB

UBI: number of good PEBs:        4088

UBI: number of bad PEBs:         8

UBI: max. allowed volumes:       128

UBI: wear-leveling threshold:    4096

UBI: number of internal volumes: 1

UBI: number of user volumes:     0

UBI: available PEBs:             4044

UBI: total number of reserved PEBs: 44

UBI: number of PEBs reserved for bad PEB handling: 40

UBI: max/mean erase counter: 0/0

UBI: background thread "ubi_bgt0d" started, PID 915
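The PEB accounting in this log is internally consistent. The 44 reserved PEBs break down, by my reading of UBI's defaults, as 40 for the bad-PEB pool plus 2 volume-table copies plus 2 for wear-leveling and atomic LEB changes; this is an assumption worth checking against the UBI source:

```shell
# Sanity-check the UBI PEB numbers reported above.
awk 'BEGIN {
    total    = 512 * 1024 / 128;   # MTD size (512 MiB) / PEB size (128 KiB) = 4096 PEBs
    good     = total - 8;          # 8 bad PEBs reported -> 4088
    reserved = 40 + 2 + 1 + 1;     # bad-PEB pool + volume table + WL + atomic change
    printf "good=%d available=%d\n", good, good - reserved
}'
```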

5. Create UBIFS volumes

root@ochaya1050:~# ubimkvol /dev/ubi0 -s 128MiB -N ubifs0

Volume ID 0, size 1058 LEBs (134340608 bytes, 128.1 MiB), LEB size 126976 bytes (124.0 KiB), dynamic, name "ubifs0", alignment 1

root@ochaya1050:~# ubimkvol /dev/ubi0 -s 360MiB -N ubifs1

Volume ID 1, size 2973 LEBs (377499648 bytes, 360.0 MiB), LEB size 126976 bytes (124.0 KiB), dynamic, name "ubifs1", alignment 1
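The volume sizes reported above follow from rounding the requested size up to whole 126976-byte LEBs. This formula is my inference, but it reproduces both ubimkvol lines exactly:

```shell
# Round a requested volume size up to whole LEBs, as ubimkvol appears to do.
lebs() {
    awk -v req="$1" -v leb=126976 'BEGIN {
        n = int(req / leb); if (n * leb < req) n++;
        printf "%d LEBs (%d bytes)\n", n, n * leb
    }'
}
lebs $((128 * 1024 * 1024))   # the log reports: 1058 LEBs (134340608 bytes)
lebs $((360 * 1024 * 1024))   # the log reports: 2973 LEBs (377499648 bytes)
```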

6. Load the UBIFS module and mount the UBIFS volumes

root@ochaya1050:~# modprobe ubifs

root@ochaya1050:~# mount -t ubifs ubi0:ubifs1 /mnt/nand0/

UBIFS: background thread "ubifs_bgt0_1" started, PID 943

UBIFS: mounted UBI device 0, volume 1, name "ubifs1"

UBIFS: file system size: 375848960 bytes (367040 KiB, 358 MiB, 2960 LEBs)

UBIFS: journal size: 18792448 bytes (18352 KiB, 17 MiB, 148 LEBs)

UBIFS: default compressor: LZO

UBIFS: media format 4, latest format 4

root@ochaya1050:~# mount -t ubifs ubi0:ubifs0 /mnt/cf

UBIFS: default file-system created

UBIFS: background thread "ubifs_bgt0_0" started, PID 945

UBIFS: mounted UBI device 0, volume 0, name "ubifs0"

UBIFS: file system size: 133070848 bytes (129952 KiB, 126 MiB, 1048 LEBs)

UBIFS: journal size: 6602752 bytes (6448 KiB, 6 MiB, 52 LEBs)

UBIFS: default compressor: LZO

UBIFS: media format 4, latest format 4

 

 

Following are test results with UBIFS on NAND:

1. SD_iotest (1000 loops)

Linux 2.6.27 kernel (used with Dnot)

File Size               8KB          16KB         64KB         256KB        1024KB       4096KB       8192KB
Read Time (s)           0.871040     1.230180     3.383700     12.051290    46.811501    184.687607   368.041138
Read Throughput (B/s)   9404850.00   13318376.00  19368148.00  21752360.00  22399966.00  22710262.31  22792582.50
Write Time (s)          1.328790     1.756610     4.098210     13.571870    51.899750    620.820984   2164.252197
Write Throughput (B/s)  6165007.50   9327056.00   15991372.00  19315246.00  20203874.00  6756060.29   3875984.51

 

Linux 2.6.23.9 (using EA's UBIFS port)

File Size               8KB          16KB         64KB         256KB        1024KB       4096KB        8192KB
Read Time (s)           0.873040     1.215520     3.216810     11.303190    43.568371    171.293518    Not finished
Read Throughput (B/s)   9383304.00   13479005.00  20372978.00  23192036.00  24067368.00  24486063.74   N/A
Write Time (s)          1.36798      1.734690     3.898450     12.444410    55.375351    1269.082153   Not finished
Write Throughput (B/s)  5988391.50   9444915.00   16810784.00  21065200.00  18935790.00  3304990.00    N/A
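The throughput rows in both tables appear to be bytes per second over the full run, i.e. file_size × 1000 loops / elapsed_time. That formula is my inference from the numbers, but it reproduces the figures to within rounding:

```shell
# Recompute throughput as file_size_bytes * 1000 loops / elapsed_seconds.
throughput() {
    awk -v kb="$1" -v t="$2" 'BEGIN { printf "%.0f\n", kb * 1024 * 1000 / t }'
}
throughput 8  0.871040    # table lists 9404850.00 for the 8KB read
throughput 16 1.230180    # table lists 13318376.00 for the 16KB read
```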

 

2. Copy/List/Delete

Linux 2.6.27 kernel (used with Dnot)

Copying 114 files totaling 341 MB takes:

real    6m 36.83s

user    0m 1.37s

sys     3m 47.74s

 

Listing 134 files takes:

real    0m 0.30s

user    0m 0.03s

sys     0m 0.12s

 

Deleting 134 files takes:

real    0m 5.44s

user    0m 0.00s

sys     0m 5.41s

 

Linux 2.6.23.9 (using EA's UBIFS port)

Copying 114 files totaling 341 MB takes:

real    6m 10.59s

user    0m 1.26s

sys     3m 32.38s

 

Listing 134 files takes:

real    0m 0.22s

user    0m 0.02s

sys     0m 0.08s

 

Deleting 134 files takes:

real    0m 5.35s

user    0m 0.01s

sys     0m 5.29s

 

Regards,

 


This is a huge step forward, but you know that already.

 

The real adventure is still ahead. We need to see some numbers on the performance of UBIFS on zevio NAND to help understand what to expect on D7.

 

Please keep at it.

 

 


Now Linux 2.6.27 is running on the ZPF board and can successfully bring up the whole ROOT_FS!

 

Tracing shows that there may be a bug in EA's zevio-timer module (or a conflict with the 2.6.27 kernel).

The potential bug/conflict is in mach-zevio/time.c::zevio_timer_interrupt(), where

write_seqlock(&xtime_lock); and write_sequnlock(&xtime_lock);

are commented out.

Now "Kernel hacking -> Kernel debugging -> Detect Soft Lockups" is enabled and the kernel still runs stably.

 

I'll port the zevio NAND driver later, while keeping an eye on this potential timer bug/conflict to make sure it is correctly resolved.

 

 

Please check the following for 2.6.27 porting status:

1. Serial, GPIO/SPI, and Ethernet drivers are working.

2. The kernel is working.

3. init fails to start ($(ROOTFS)/sbin/init); the system hangs there.

It may take more time to bring up ZPF with the 2.6.27 kernel. If possible, I'd like to consult EA about the issues I've hit during porting.

I also tried the UBIFS that EA ported to 2.6.23.9.

Following are the steps to add UBIFS support:

1. In Device Drivers -> Memory Technology Device (MTD) support -> UBI - Unsorted block images, select "Enable UBI" as a module

2. In File systems -> Miscellaneous filesystems, select "UBIFS file system support" as a module

3. make uImage
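Assuming the standard Kconfig symbols behind those two menu entries (worth verifying in your kernel tree), the resulting .config fragment would be:

```
CONFIG_MTD_UBI=m
CONFIG_UBIFS_FS=m
```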

To use UBIFS, please follow these steps:

1. Bring up the ZPF system

2. Check the MTD device info

root@ochaya1050:~# cat /proc/mtd

dev:    size   erasesize  name

mtd0: 00020000 00010000 "u-boot"

mtd1: 00020000 00010000 "params"

mtd2: 00200000 00010000 "kernel"

mtd3: 005c0000 00010000 "root"

mtd4: 20000000 00020000 "test"

mtd5: 00200000 00010000 "m25p80"

3. Erase the NAND MTD partition

root@ochaya1050:~# flash_eraseall /dev/mtd4

Erasing 128 Kibyte @ b00000 --  2 % complete.

Skipping bad block at 0x00b20000

Erasing 128 Kibyte @ 98c0000 -- 29 % complete.

Skipping bad block at 0x098e0000

Erasing 128 Kibyte @ e9c0000 -- 45 % complete.

Skipping bad block at 0x0e9e0000

Erasing 128 Kibyte @ eb80000 -- 45 % complete.

Skipping bad block at 0x0eba0000

Erasing 128 Kibyte @ 151e0000 -- 65 % complete.

Skipping bad block at 0x15200000

Erasing 128 Kibyte @ 16460000 -- 69 % complete.

Skipping bad block at 0x16480000

Erasing 128 Kibyte @ 19300000 -- 78 % complete.

Skipping bad block at 0x19320000

Erasing 128 Kibyte @ 19440000 -- 78 % complete.

Skipping bad block at 0x19460000

Erasing 128 Kibyte @ 1ffe0000 -- 99 % complete.

4. Create UBI nodes

root@ochaya1050:~# modprobe ubi mtd=4

UBI: attaching mtd4 to ubi0

UBI: physical eraseblock size:   131072 bytes (128 KiB)

UBI: logical eraseblock size:    126976 bytes

UBI: smallest flash I/O unit:    2048

UBI: VID header offset:          2048 (aligned 2048)

UBI: data offset:                4096

UBI: empty MTD device detected

UBI: create volume table (copy #1)

UBI: create volume table (copy #2)

UBI: attached mtd4 to ubi0

UBI: MTD device name:            "test"

UBI: MTD device size:            512 MiB

UBI: number of good PEBs:        4088

UBI: number of bad PEBs:         8

UBI: max. allowed volumes:       128

UBI: wear-leveling threshold:    4096

UBI: number of internal volumes: 1

UBI: number of user volumes:     0

UBI: available PEBs:             4044

UBI: total number of reserved PEBs: 44

UBI: number of PEBs reserved for bad PEB handling: 40

UBI: max/mean erase counter: 0/0

UBI: background thread "ubi_bgt0d" started, PID 449

5. Create UBIFS volumes

root@ochaya1050:~# ubimkvol /dev/ubi0 -s 64MiB -N ubiFS0

Volume ID 0, size 529 LEBs (67170304 bytes, 64.1 MiB), LEB size 126976 bytes (124.0 KiB), dynamic, name "ubiFS0", alignment 1

root@ochaya1050:~# ubimkvol /dev/ubi0 -s 420MiB -N ubiFS1

Volume ID 1, size 3469 LEBs (440479744 bytes, 420.1 MiB), LEB size 126976 bytes (124.0 KiB), dynamic, name "ubiFS1", alignment 1

root@ochaya1050:~# modprobe ubifs

6. Mount the UBIFS volumes

root@ochaya1050:~# mount -t ubifs ubi0:ubiFS0 /mnt/nand0

UBIFS: default file-system created

UBIFS: background thread "ubifs_bgt0_0" started, PID 497

UBIFS: mounted UBI device 0, volume 0, name "ubiFS0"

UBIFS: file system size: 66027520 bytes (64480 KiB, 62 MiB, 520 LEBs)

UBIFS: journal size: 3301376 bytes (3224 KiB, 3 MiB, 26 LEBs)

UBIFS: default compressor: LZO

UBIFS: media format 4, latest format 4

root@ochaya1050:~# mount -t ubifs ubi0:ubiFS1 /mnt/cf

UBIFS: background thread "ubifs_bgt0_1" started, PID 507

UBIFS: mounted UBI device 0, volume 1, name "ubiFS1"

UBIFS: file system size: 438702080 bytes (428420 KiB, 418 MiB, 3455 LEBs)

UBIFS: journal size: 21966848 bytes (21452 KiB, 20 MiB, 173 LEBs)

UBIFS: default compressor: LZO

UBIFS: media format 4, latest format 4

Following are simple test results with UBIFS on NAND:

1. SD_iotest:

Write:

Size              8KB         16KB         64KB         256KB        1024KB       4096KB
Time (s)          1.36798     1.734690     3.898450     12.444410    55.375351    1269.082153
Throughput (B/s)  5988391.50  9444915.00   16810784.00  21065200.00  18935790.00  3304990.14

Read:

Size              8KB         16KB         64KB         256KB        1024KB       4096KB
Time (s)          0.873040    1.215520     3.216810     11.303190    43.568371    171.293518
Throughput (B/s)  9383304.00  13479005.00  20372978.00  23192036.00  24067368.00  24486063.74
2. Copy, list, and delete files

Copying 134 files totaling 420 MB takes:

real    7m 42.22s

user    0m 0.98s

sys     4m 24.35s

Listing these files takes:

real    0m 0.11s

user    0m 0.02s

sys     0m 0.06s

Deleting all files takes:

real    0m 6.20s

user    0m 0.01s

sys     0m 6.18s

///////////////////////////////////////////////////////////////////

Today 2.6.27 is running on the ZPF platform. The reason it did not boot earlier is that this option was enabled:

Kernel hacking -> Kernel debugging -> Detect Soft Lockups

Now I've brought up serial, GPIO/SPI, and Ethernet. I expect to finish porting the NAND driver in two days.

After that I can start to test UBIFS on ZPF.

P.S.: I also found that UBIFS is already present in EA's Linux 2.6.23.9-ER3 kernel. Perhaps we can also use this to verify it on ZPF.

/////////////////////////////////////////////////////////

Please check the status of Dnot's 2.6.27 kernel port to ZPF:

1. Still failing to boot ZPF with the new 2.6.27 image. Tracking it down shows that it enters an infinite loop after param_sysfs_init is called in:

kernel_init -> do_basic_setup -> do_initcalls -> do_one_initcall

I still have no idea how this could happen; only the serial driver is used in the current Ochaya1050 code (in mach-zevio).

If you have any clue or suggestion, please let me know. Thanks.

////////////////////////////////////////////////

Please check the status of Dnot's 2.6.27 kernel port to ZPF:

1. Zevio-related code has been added, and the kernel image builds.

2. Failed to boot ZPF. I tracked it down and found that it hangs after the page tables are rebuilt in

   start_kernel() -> setup_arch() -> paging_init() -> devicemaps_init()

It seems that after devicemaps_init is called, wrong device PMDs are created, and any later device access sends the PC to an unknown address. I'm trying to find out how this happens. If you have any clue or suggestion, please let me know. Thanks.

//////////////////////////////////////////////////////////////////////

Thanks... now I remember the path to Ethernet.

Still, I would like us to have one kernel; that is, I do not want to source-control two kernels. Also, I haven't mentioned yet that we may need this path of 2.6.27 on zevio for other things going forward.

The question is: can we port the zevio changes from our 2.6.23 code base up to the Dnot kernel in use today?

////////////////////////////////////////////////////////////

UBIFS was introduced in 2.6.27, while what we have from EA is 2.6.23.9. So we have two possible approaches:

1. Port UBIFS back to 2.6.23.9

2. Port the zevio-related code up to 2.6.27

If we take the latter, the GPIO and SPI drivers are also needed, because the Ethernet driver depends on SPI and SPI depends on GPIO.

Let me know your suggestion. Thanks.

///////////////////////////////////////////////////

 

I was hoping we could avoid starting with a fresh kernel.

Instead, I would like to know if we can start with the kernel we have and build it for ARM. From there you should only need Ethernet and NAND. After this comes the UBIFS hook-up.

Let's discuss a bit more. Also, I'd like to know why you think GPIO and SPI are needed for this task.

 

-P

 //////////////////////////////////////////////////// 

I've downloaded the Linux 2.6.27.8 kernel source code and surveyed EA's Linux 2.6.23.9 kernel. Following is my plan to port 2.6.27.8 to zevio:

 

1. Port zevio architecture code: 2-3 days

2. Port zevio GPIO driver: 1 day

3. Port zevio SPI driver: 2 days

4. Port ZPF Ethernet driver: 2 days

5. Port ZPF NAND driver: 3 days

6. Enable UBI/UBIFS and test: 3 days

 

-J

 

 
