Kernel BUG at fs/xfs/xfs_aops.c:1062 in xfs_vm_writepage (RHEL 7, 3.10.0-514.el7)

Contents

Preface
Problem Analysis
  Creating page buffers
  Losing page buffers
  Write-Protect
  Dirty Page w/o Buffers
The Fix


Preface

This problem occurred on 3.10.0-514.el7, and a matching case with a fix turned up quickly in Red Hat's knowledge base. Understanding how the problem arises, and how the fix works, took considerably more effort.

The RHEL article:

RHEL7: kernel crash in xfs_vm_writepage - kernel BUG at fs/xfs/xfs_aops.c:1062! - Red Hat Customer Portal

The call stack:

[1004630.854317] kernel BUG at fs/xfs/xfs_aops.c:1062!
[1004630.854894] invalid opcode: 0000 [#1] SMP 
[1004630.861333] CPU: 6 PID: 56715 Comm: kworker/u48:4 Tainted: G        W      ------------   3.10.0-514.el7.x86_64 #1
[1004630.862046] Hardware name: HP ProLiant BL460c Gen9, BIOS I36 12/28/2015
[1004630.862703] Workqueue: writeback bdi_writeback_workfn (flush-253:28)
[1004630.863414] task: ffff881f8436de20 ti: ffff881f23a4c000 task.ti: ffff881f23a4c000
[1004630.864117] RIP: 0010:[<ffffffffa083f2fb>]  [<ffffffffa083f2fb>] xfs_vm_writepage+0x58b/0x5d0 [xfs]
[1004630.864860] RSP: 0018:ffff881f23a4f948  EFLAGS: 00010246
[1004630.865749] RAX: 002fffff00040029 RBX: ffff881bedd50308 RCX: 000000000000000c
[1004630.866466] RDX: 0000000000000008 RSI: ffff881f23a4fc40 RDI: ffffea00296b7800
[1004630.867218] RBP: ffff881f23a4f9f0 R08: fffffffffffffffe R09: 000000000001a098
[1004630.867941] R10: ffff88207ffd6000 R11: 0000000000000000 R12: ffff881bedd50308
[1004630.868656] R13: ffff881f23a4fc40 R14: ffff881bedd501b8 R15: ffffea00296b7800
[1004630.869399] FS:  0000000000000000(0000) GS:ffff881fff180000(0000) knlGS:0000000000000000
[1004630.870147] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[1004630.870868] CR2: 0000000000eb3d30 CR3: 0000001ff79dc000 CR4: 00000000001407e0
[1004630.871610] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[1004630.872349] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[1004630.873072] Stack:
[1004630.873749]  0000000000008000 ffff880070b03644 ffff881f23a4fc40 ffff881f23a4fa68
[1004630.874480]  ffff881f23a4fa80 ffffea00296b7800 0000000000001000 000000000000000e
[1004630.875223]  0000000000001000 ffffffff81180981 0000000000000000 ffff881bedd50310
[1004630.875957] Call Trace:
[1004630.876665]  [<ffffffff81180981>] ? find_get_pages_tag+0xe1/0x1a0
[1004630.877417]  [<ffffffff8118b3b3>] __writepage+0x13/0x50
[1004630.878173]  [<ffffffff8118bed1>] write_cache_pages+0x251/0x4d0
[1004630.878915]  [<ffffffffa00c170a>] ? enqueue_cmd_and_start_io+0x3a/0x40 [hpsa]
[1004630.879626]  [<ffffffff8118b3a0>] ? global_dirtyable_memory+0x70/0x70
[1004630.880368]  [<ffffffff8118c19d>] generic_writepages+0x4d/0x80
[1004630.881157]  [<ffffffffa083e063>] xfs_vm_writepages+0x53/0x90 [xfs]
[1004630.881907]  [<ffffffff8118d24e>] do_writepages+0x1e/0x40
[1004630.882643]  [<ffffffff81228730>] __writeback_single_inode+0x40/0x210
[1004630.883403]  [<ffffffff8122941e>] writeback_sb_inodes+0x25e/0x420
[1004630.884141]  [<ffffffff8122967f>] __writeback_inodes_wb+0x9f/0xd0
[1004630.884863]  [<ffffffff81229ec3>] wb_writeback+0x263/0x2f0   
[1004630.885610]  [<ffffffff810ab776>] ? set_worker_desc+0x86/0xb0
[1004630.886378]  [<ffffffff8122bd05>] bdi_writeback_workfn+0x115/0x460
[1004630.887142]  [<ffffffff810c4cf8>] ? try_to_wake_up+0x1c8/0x330
[1004630.887875]  [<ffffffff810a7f3b>] process_one_work+0x17b/0x470
[1004630.888638]  [<ffffffff810a8d76>] worker_thread+0x126/0x410   
[1004630.889389]  [<ffffffff810a8c50>] ? rescuer_thread+0x460/0x460
[1004630.890126]  [<ffffffff810b052f>] kthread+0xcf/0xe0
[1004630.890816]  [<ffffffff810b0460>] ? kthread_create_on_node+0x140/0x140
[1004630.891521]  [<ffffffff81696418>] ret_from_fork+0x58/0x90
[1004630.892229]  [<ffffffff810b0460>] ? kthread_create_on_node+0x140/0x140
[1004630.892877] Code: e0 80 3d 4d b4 06 00 00 0f 85 a4 fe ff ff be d7 03 00 00 48 c7 c7 4a e0 88 a0 e8 61 66 84 e0 c6 05 2f b4 06 00 01 e9 87 fe ff ff <0f> 0b 8b 4d a4 e9 e8 fb ff ff 41 b9 01 00 00 00 e9 69 fd ff ff 
[1004630.894245] RIP  [<ffffffffa083f2fb>] xfs_vm_writepage+0x58b/0x5d0 [xfs]
[1004630.894890]  RSP <ffff881f23a4f94

The BUG triggers here:

xfs_vm_writepage()
---
	...
	bh = head = page_buffers(page);
	...
---

#define page_buffers(page)					\
	({							\
		BUG_ON(!PagePrivate(page));			\
		((struct buffer_head *)page_private(page));	\
	})

Problem Analysis

Creating page buffers

In the Linux kernel, the purpose of the buffer head is to bridge the basic units of the storage and memory subsystems:

  • the sector, the basic unit of storage, 512 bytes
  • the page, the basic unit of memory, 4096 bytes

The file cache, i.e. the page cache, is where the storage and memory subsystems meet, and a buffer_head corresponds to one block-sized slice of a page cache page. When a filesystem is formatted with an fsblock of 512, 1024, or 2048 bytes, or when a raw block device is accessed, each page therefore carries multiple buffer_heads.
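
The attachment is easy to picture in code. Below is a minimal sketch of the canonical pattern (the same loop shape xfs_vm_writepage() and xfs_vm_set_page_dirty() use in this article) for visiting every buffer_head hanging off a page; they form a circular list linked through b_this_page:

struct buffer_head *bh, *head;

/*
 * page_buffers() BUG()s if the page has no buffers attached --
 * the very BUG_ON this crash tripped.
 */
bh = head = page_buffers(page);
do {
	/*
	 * Each bh covers one fsblock-sized slice of the page,
	 * e.g. four buffer_heads for a 4K page with 1K fsblocks.
	 */
	bh = bh->b_this_page;
} while (bh != head);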

In recent years the 512-byte fsblock has been all but abandoned by mainstream filesystems; it is still supported, but 4K is the main optimization target. Upstream XFS has even dropped buffer_head entirely and issues I/O directly against pages.

So, at which points do buffer_heads get created?

3.10.0-514.el7

【A】
generic_perform_write()
-> aops->write_begin()
   xfs_vm_write_begin()
   -> grab_cache_page_write_begin()
   -> __block_write_begin()
      -> create_page_buffers()

【B】
do_shared_fault()
-> __do_fault()
-> vma->vm_ops->page_mkwrite()
   xfs_filemap_page_mkwrite()
   -> __block_page_mkwrite()
      -> __block_write_begin()
         -> create_page_buffers()

do_mpage_readpage()
-> block_read_full_page() // taken when the bh states within a page are
                          // inconsistent, e.g. some mapped and some unmapped;
                          // each bh is then handled individually
   -> create_page_buffers()

Paths 【A】 and 【B】 normally guarantee that the page buffers exist before a page is written; they correspond to writing a file through the write() syscall and through mmap, respectively. (All of these paths converge on create_page_buffers(), sketched below.)
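
The gate inside create_page_buffers() is simple. Paraphrased from the 3.10-era fs/buffer.c: buffers are allocated only once per page, sized by the inode's block size:

static struct buffer_head *
create_page_buffers(struct page *page, struct inode *inode, unsigned int b_state)
{
	BUG_ON(!PageLocked(page));

	/* Allocate (1 << i_blkbits)-sized buffers only if none exist yet. */
	if (!page_has_buffers(page))
		create_empty_buffers(page, 1 << inode->i_blkbits, b_state);
	return page_buffers(page);
}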

Losing page buffers

So in this case, how does a page that carries the dirty flag lose its buffers?

The typical path that releases page buffers:

3.10.0-514.el7

shrink_page_list()
-> try_to_unmap()
-> try_to_release_page()
   -> aops->releasepage()
      xfs_vm_releasepage()
      -> try_to_free_buffers()
-> __remove_mapping()

Before try_to_release_page(), try_to_unmap() runs; it tears down the ptes and checks whether the page needs to be marked dirty:

3.10.0-514.el7

try_to_unmap()
-> try_to_unmap_file()
-> try_to_unmap_one()
   -> set_page_dirty() // pte_dirty()

This guarantees that, before try_to_release_page() runs, the page and its buffers have the dirty flag set:

3.10.0-514.el7

xfs_vm_set_page_dirty()
---
	spin_lock(&mapping->private_lock);
	if (page_has_buffers(page)) {
		struct buffer_head *head = page_buffers(page);
		struct buffer_head *bh = head;

		do {
			if (offset < end_offset)
				set_buffer_dirty(bh);
			bh = bh->b_this_page;
			offset += 1 << inode->i_blkbits;
		} while (bh != head);
	}
---

try_to_free_buffers()
-> drop_buffers()
   -> buffer_busy()
      -> atomic_read(&bh->b_count) |
         (bh->b_state & ((1 << BH_Dirty) | (1 << BH_Lock)))

A buffer with the dirty flag set is never released. The truncate and invalidate paths perform a similar dance.
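
For reference, a condensed sketch of how drop_buffers() (fs/buffer.c, 3.10-era) applies that test; one busy buffer_head keeps every buffer on the page alive:

static int
drop_buffers(struct page *page, struct buffer_head **buffers_to_free)
{
	struct buffer_head *head = page_buffers(page);
	struct buffer_head *bh = head;

	do {
		if (buffer_busy(bh))	/* dirty or locked */
			return 0;	/* veto: keep every buffer on the page */
		bh = bh->b_this_page;
	} while (bh != head);

	/* ...detach all buffers from the page and hand them back for freeing... */
	return 1;
}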

But there is one rather special path:

3.10.0-514.el7

shrink_active_list()
---
	if (unlikely(buffer_heads_over_limit)) {
		if (page_has_private(page) && trylock_page(page)) {
			if (page_has_private(page))
				try_to_release_page(page, 0);
			unlock_page(page);
		}
	}
---

Here the buffers are released directly, with none of the preceding try_to_unmap() dirty-flag transfer.

Write-Protect

A write to an mmap'ed page of a file triggers a page fault and marks the page dirty through the following mechanism:

3.10.0-514.el7

generic_writepages()
-> clear_page_dirty_for_io()
   -> page_mkclean()
      -> page_mkclean_file()
      -> page_mkclean_one()
         -> pte_wrprotect()
         -> pte_mkclean()

handle_pte_fault()
-> do_wp_page() // pte_present() && !pte_write()
   -> wp_page_shared()
      -> do_page_mkwrite()
         -> xfs_filemap_page_mkwrite()
            -> __block_page_mkwrite()
               -> lock_page()
               -> __block_write_begin()
               -> block_commit_write()
                  -> set_page_dirty(page);
               -> wait_for_stable_page(page);
   -> wp_page_reuse()
      -> set_page_dirty()
      -> unlock_page()

Before writepage runs, under the protection of the page lock, clear_page_dirty_for_io() clears the page's dirty flag and strips write permission from the mmap ptes, turning the page write-protected. The next time userspace writes to the page it faults, the kernel marks the page dirty again, and buffers are created for the page along the way. This guarantees that every write through mmap is funneled into the writeback subsystem via the page fault.

When the write-protect page fault fires, the write itself has not happened yet, so the dirty bit is not yet set; once the write does happen, the sequence above has necessarily completed, and so the buffers are necessarily present.
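
A condensed sketch of clear_page_dirty_for_io() (mm/page-writeback.c, 3.10-era, dirty accounting elided), trimmed to the two steps that matter here:

int clear_page_dirty_for_io(struct page *page)
{
	struct address_space *mapping = page_mapping(page);

	BUG_ON(!PageLocked(page));

	if (mapping && mapping_cap_account_dirty(mapping)) {
		/*
		 * Step 1: write-protect every pte mapping this page; if a
		 * pte carried the hardware dirty bit, move that dirtiness
		 * back onto the page before clearing it below.
		 */
		if (page_mkclean(page))
			set_page_dirty(page);
	}

	/*
	 * Step 2: clear the page-level dirty flag. The next mmap write
	 * must fault and go through ->page_mkwrite() again.
	 */
	return TestClearPageDirty(page);
}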

Dirty Page w/o Buffers

Compare the writepages call chains of ext4 and xfs:

ext4_writepages()
-> mpage_prepare_extent_to_map()
   -> pagevec_lookup_tag()
   -> lock_page()
   -> wait_on_page_writeback()
   -> mpage_process_page_bufs()
      -> mpage_submit_page()
         -> clear_page_dirty_for_io(page)
         -> ext4_bio_write_page()
            -> set_page_writeback()
            -> io_submit_add_bh()
               -> clear_buffer_dirty()
   -> unlock_page()

xfs_vm_writepages()
-> generic_writepages()
   -> write_cache_pages()
      -> pagevec_lookup_tag()
      -> lock_page()
      -> xfs_vm_writepage()
         -> lock_buffer()
         -> xfs_add_to_ioend()
         -> xfs_start_page_writeback()
            -> clear_page_dirty_for_io()
            -> set_page_writeback()
            -> unlock_page()  <<----------------------- HERE!!!!
         -> xfs_submit_ioend()
            -> xfs_start_buffer_writeback()
               -> mark_buffer_async_write()
               -> set_buffer_uptodate()
               -> clear_buffer_dirty()

In the ext4 chain, both the page dirty flag and the buffer dirty flags are cleared under the page lock. In xfs, the buffers are cleaned outside the page lock. Bring in the page_mkwrite chain from the page fault path, and the following race emerges:

writeback workqueue             user page fault
---------------------------     ---------------------------
xfs_vm_writepages()             xfs_filemap_page_mkwrite()
lock_page()                     __block_page_mkwrite()
clear_page_dirty_for_io()
unlock_page()
                                lock_page()
                                xfs_vm_set_page_dirty()
                                  set_buffer_dirty()
                                  TestSetPageDirty()
clear_buffer_dirty()
end_page_writeback()

So here we end up with a page that has the dirty flag set while all of its buffers are clean. Transplant the same scenario onto ext4 and the problem cannot occur: thanks to the page lock, the end state is either page and buffers all dirty, or all clean.

At this point, two key facts have been established:

  • page dirty + buffers clean
  • the pte's dirty bit is set: the write-protect page fault has already fired, so the write itself has completed

The page dirty + buffers clean combination is what pushes the problem forward.

Now return to shrink_active_list(), which may call try_to_release_page():

try_to_free_buffers()
-> drop_buffers()
   -> buffer_busy() // dirty or locked
-> cancel_dirty_page()

With page dirty + buffers clean, every buffer passes the buffer_busy() test, so the buffers are freed, and the page's dirty flag is cancelled along with them.
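
The tail of try_to_free_buffers() (fs/buffer.c, 3.10-era), sketched, makes the "dirty flag goes too" part explicit:

spin_lock(&mapping->private_lock);
ret = drop_buffers(page, &buffers_to_free);
if (ret)
	/*
	 * All buffers were clean and have been dropped; cancel the
	 * page's dirty state as well. The page is now clean AND
	 * bufferless -- but any pte dirty bit is untouched.
	 */
	cancel_dirty_page(page, PAGE_CACHE_SIZE);
spin_unlock(&mapping->private_lock);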

But at this moment the dirty bit in the pte is still there, so in the subsequent shrink_page_list():

shrink_page_list()
-> try_to_unmap()
-> try_to_unmap_file()
-> try_to_unmap_one()
   -> set_page_dirty() // pte_dirty()

The page is marked dirty again and its reclaim is aborted, and we are left with a dirty page that has no buffers.
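
Sketched from the 3.10-era try_to_unmap_one() (mm/rmap.c), the re-dirty step moves the hardware dirty bit left in the pte back onto a page whose buffers were already freed above:

/* Flush the pte and capture its final state. */
pteval = ptep_clear_flush(vma, address, pte);

/*
 * Move the dirty bit to the page now that the pte is gone --
 * in our scenario this re-dirties a page with no buffers.
 */
if (pte_dirty(pteval))
	set_page_dirty(page);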

So the crux of the problem: in xfs_vm_writepage(), the clearing of the page dirty flag and the clearing of the buffer dirty flags do not both happen under the protection of the page lock.

The Fix

A search of the upstream code and commit history shows the problem was fixed by the following commit:

commit e10de3723c53378e7cf441529f563c316fdc0dd3
Author: Dave Chinner <dchinner@redhat.com>
Date:   Mon Feb 15 17:23:12 2016 +1100

    xfs: don't chain ioends during writepage submission

@@ -565,6 +539,7 @@ xfs_add_to_ioend(
 	bh->b_private = NULL;
 	wpc->ioend->io_size += bh->b_size;
 	wpc->last_block = bh->b_blocknr;
+	xfs_start_buffer_writeback(bh);
 }

After this change, the buffer-clean step also takes place under the page lock.
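
For reference, xfs_start_buffer_writeback() as it appears in that era's fs/xfs/xfs_aops.c; after the fix it is called from xfs_add_to_ioend(), i.e. while xfs_vm_writepage() still holds the page lock:

STATIC void
xfs_start_buffer_writeback(
	struct buffer_head	*bh)
{
	ASSERT(buffer_mapped(bh));
	ASSERT(buffer_locked(bh));
	ASSERT(!buffer_delay(bh));
	ASSERT(!buffer_unwritten(bh));

	mark_buffer_async_write(bh);
	set_buffer_uptodate(bh);
	clear_buffer_dirty(bh);		/* now under the page lock */
}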
