A Detailed Look at the Linux Kdump Mechanism


1. Introduction

Kdump provides a mechanism to dump all of the system's memory and register state into a file when the kernel crashes; the file can later be analyzed and debugged with tools such as gdb and crash. It is analogous to the coredump mechanism for user-space programs. Its main flow is shown below:


The core idea is to reserve a region of memory and preload a backup kernel into it. When the main kernel crashes, execution jumps to the backup kernel, which dumps the main kernel's memory and the register state at the moment of the fault to a file on disk for later analysis. That file is in the ELF core format.

kdump is mainly used to capture pure software faults. In the embedded world you also need to capture hardware faults; by following the same principles, with some strengthening and adaptation, you can build a coredump mechanism of your own.

The rest of this article analyzes how the kdump mechanism works in detail.

Installing kdump used to mean manually installing kexec-tools, kdump-tools, and crash one by one and hand-editing the grub cmdline parameters. On current Ubuntu releases, installing the single linux-crashdump package takes care of everything:

$ sudo apt-get install linux-crashdump

After installation, the kdump-config command checks whether the system is configured correctly:

$ kdump-config show
DUMP_MODE:        kdump
USE_KDUMP:        1
KDUMP_SYSCTL:     kernel.panic_on_oops=1
KDUMP_COREDIR:    /var/crash        // directory where kdump files are stored
crashkernel addr: 0x
   /var/lib/kdump/vmlinuz: symbolic link to /boot/vmlinuz-5.8.18+
kdump initrd:
   /var/lib/kdump/initrd.img: symbolic link to /var/lib/kdump/initrd.img-5.8.18+
current state:    ready to kdump    // "ready" means the system's kdump mechanism is armed

kexec command:
  /sbin/kexec -p --command-line="BOOT_IMAGE=/boot/vmlinuz-5.8.18+ root=UUID=9ee42fe2-4e73-4703-8b6d-bb238ffdb003 ro find_preseed=/preseed.cfg auto noprompt priority=critical locale=en_US quiet reset_devices systemd.unit=kdump-tools-dump.service nr_cpus=1 irqpoll nousb ata_piix.prefer_ms_hyperv=0" --initrd=/var/lib/kdump/initrd.img /var/lib/kdump/vmlinuz

Under the hood, linux-crashdump is simply a metapackage made up of the individual components:

$ sudo apt-get install linux-crashdump -d
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  crash efibootmgr grub-common grub-efi-arm64 grub-efi-arm64-bin
  grub-efi-arm64-signed grub2-common kdump-tools kexec-tools libfreetype6
  libsnappy1v5 makedumpfile os-prober
Suggested packages:
  multiboot-doc xorriso desktop-base
Recommended packages:
  secureboot-db
The following NEW packages will be installed:
  crash efibootmgr grub-common grub-efi-arm64 grub-efi-arm64-bin
  grub-efi-arm64-signed grub2-common kdump-tools kexec-tools libfreetype6
  libsnappy1v5 linux-crashdump makedumpfile os-prober
0 upgraded, 14 newly installed, 0 to remove and 67 not upgraded.
Need to get 6611 kB of archives.

With kdump ready, we trigger a panic by hand:

$ sudo bash
# echo c > /proc/sysrq-trigger

After the kdump completes and the system reboots, the memory dump generated by kdump can be found under /var/crash:

$ ls -l /var/crash/202107011353/
total 65324
-rw------- 1 root whoopsie   119480 Jul  1 13:53 dmesg.202107011353      // kernel log at crash time
-rw------- 1 root whoopsie 66766582 Jul  1 13:53 dump.202107011353        // memory dump, compressed format
$ sudo file /var/crash/202107011353/dump.202107011353
/var/crash/202107011353/dump.202107011353: Kdump compressed dump v6, system Linux, node ubuntu, release 5.8.18+, version #18 SMP Thu Jul 1 11:24:39 CST 2021, machine x86_64, domain (none)

The dump generated by default is compressed by makedumpfile; alternatively, with a few configuration changes we can produce a raw ELF core file:

$ ls -l /var/crash/202107011132/
total 1785584
-rw------- 1 root whoopsie     117052 Jul  1 11:32 dmesg.202107011132    // kernel log at crash time
-r-----r-- 1 root whoopsie 1979371520 Jul  1 11:32 vmcore.202107011132    // memory dump, raw ELF format
$ file /var/crash/202107011132/vmcore.202107011132
/var/crash/202107011132/vmcore.202107011132: ELF 64-bit LSB core file, x86-64, version 1 (SYSV), SVR4-style
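The `file` output above keys off the e_type field of the ELF header. As a quick illustration, the Python sketch below reads e_type from a 64-bit little-endian ELF header; the header bytes here are synthetic, standing in for the first 64 bytes of a real vmcore, not taken from one:

```python
import struct

# ELF e_type values (from the ELF specification)
ET_REL, ET_EXEC, ET_DYN, ET_CORE = 1, 2, 3, 4

def elf_type(header: bytes) -> int:
    """Return the e_type field of a 64-bit little-endian ELF header."""
    assert header[:4] == b"\x7fELF", "not an ELF file"
    # e_type is a 16-bit field right after the 16-byte e_ident array
    return struct.unpack_from("<H", header, 16)[0]

# Synthetic 64-bit LSB ELF header with e_type = ET_CORE,
# mimicking the start of a vmcore file.
hdr = bytearray(64)
hdr[0:4] = b"\x7fELF"
hdr[4] = 2          # ELFCLASS64
hdr[5] = 1          # ELFDATA2LSB
struct.pack_into("<H", hdr, 16, ET_CORE)

print(elf_type(bytes(hdr)) == ET_CORE)  # → True
```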

The crash utility makes analyzing kdump files convenient: crash wraps gdb and adds many shortcut commands for kernel debugging. gdb and trace32 can be used for the analysis as well.

$ sudo crash /usr/lib/debug/boot/vmlinux-5.8.0-43-generic /var/crash/202106170338/dump.202106170338

Note that debugging requires a vmlinux with debuginfo, which must be installed separately.

1.3.1 Installing a debuginfo vmlinux

Install it following the Ubuntu document How to use linux-crashdump to capture a kernel oops/panic:

// add the ddebs (debuginfo) package repository
$ sudo tee /etc/apt/sources.list.d/ddebs.list << EOF
deb http://ddebs.ubuntu.com/ $(lsb_release -cs)          main restricted universe multiverse
deb http://ddebs.ubuntu.com/ $(lsb_release -cs)-security main restricted universe multiverse
deb http://ddebs.ubuntu.com/ $(lsb_release -cs)-updates  main restricted universe multiverse
deb http://ddebs.ubuntu.com/ $(lsb_release -cs)-proposed main restricted universe multiverse
EOF

$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys ECDCAD72428D7C01
$ sudo apt-get update
$ sudo apt-get install linux-image-$(uname -r)-dbgsym

1.3.2 Building the kernel yourself

If no vmlinux with debuginfo is available, you can also build the kernel yourself for debugging.

  • 1. Uncomment the deb-src lines in /etc/apt/sources.list, then download the source of the running kernel:

    $ sudo apt-get update
    $ sudo apt-get source linux-image-unsigned-$(uname -r)

  • 2. Install the build dependencies, following Ubuntu BuildYourOwnKernel:

    $ sudo apt-get build-dep linux linux-image-$(uname -r)
    $ sudo apt-get install libncurses-dev gawk flex bison openssl libssl-dev dkms libelf-dev libudev-dev libpci-dev libiberty-dev autoconf

  • 3. Build and install the kernel:

You can build and package the kernel with debian/rules as described in Ubuntu BuildYourOwnKernel, or take the simpler manual route below:

// build
$ make menuconfig
$ make bzImage modules
// install
$ make INSTALL_MOD_STRIP=1 modules_install
$ sudo mkinitramfs /lib/modules/4.14.134+ -o /boot/initrd.img-4.14.134-xenomai
$ sudo cp arch/x86/boot/bzImage /boot/vmlinuz-4.14.134-xenomai
$ sudo cp System.map /boot/System.map-4.14.134-xenomai
$ sudo update-grub2

Earlier we said kdump's default compressed format can be switched to the raw ELF core format; this section implements that.

Copying the /proc/vmcore file from memory to disk is done by the kdump-tools-dump.service unit running in the crash kernel. Let's walk through the flow in detail:

  • 1. From the kdump-config output, we can see that after the second (crash) kernel boots, systemd only needs to start a single unit, kdump-tools-dump.service:

    kdump-config show

    DUMP_MODE: kdump
    USE_KDUMP: 1
    KDUMP_SYSCTL: kernel.panic_on_oops=1
    KDUMP_COREDIR: /var/crash
    crashkernel addr: 0x73000000
    /var/lib/kdump/vmlinuz: symbolic link to /boot/vmlinuz-5.8.0-43-generic
    kdump initrd:
    /var/lib/kdump/initrd.img: symbolic link to /var/lib/kdump/initrd.img-5.8.0-43-generic
    current state: ready to kdump

    kexec command:
    /sbin/kexec -p --command-line="BOOT_IMAGE=/boot/vmlinuz-5.8.0-43-generic root=UUID=9ee42fe2-4e73-4703-8b6d-bb238ffdb003 ro find_preseed=/preseed.cfg auto noprompt priority=critical locale=en_US quiet reset_devices systemd.unit=kdump-tools-dump.service nr_cpus=1 irqpoll nousb ata_piix.prefer_ms_hyperv=0" --initrd=/var/lib/kdump/initrd.img /var/lib/kdump/vmlinuz

  • 2. kdump-tools-dump.service essentially runs the kdump-tools start script:

    systemctl cat kdump-tools-dump.service

    /lib/systemd/system/kdump-tools-dump.service

    [Unit]
    Description=Kernel crash dump capture service
    Wants=network-online.target dbus.socket systemd-resolved.service
    After=network-online.target dbus.socket systemd-resolved.service

    [Service]
    Type=oneshot
    StandardOutput=syslog+console
    EnvironmentFile=/etc/default/kdump-tools
    ExecStart=/etc/init.d/kdump-tools start
    ExecStop=/etc/init.d/kdump-tools stop
    RemainAfterExit=yes

  • 3. kdump-tools invokes kdump-config savecore:

    vim /etc/init.d/kdump-tools

    KDUMP_SCRIPT=/usr/sbin/kdump-config

                echo -n "Starting $DESC: "
                $KDUMP_SCRIPT savecore
  • 4. kdump-config invokes makedumpfile -c -d 31 /proc/vmcore dump.xxxxxx:

    MAKEDUMP_ARGS=${MAKEDUMP_ARGS:="-c -d 31"}
    vmcore_file=/proc/vmcore

        makedumpfile $MAKEDUMP_ARGS $vmcore_file $KDUMP_CORETEMP

By default, kdump-tools-dump.service calls makedumpfile to produce a compressed dump file. But what if we want to analyze the raw ELF-format vmcore instead?

  • 4.1. First, modify the MAKEDUMP_ARGS variable in /usr/sbin/kdump-config so that makedumpfile fails:

    MAKEDUMP_ARGS=${MAKEDUMP_ARGS:="-xxxxx -c -d 31"} // -xxxxx is an arbitrary invalid option

  • 4.2. kdump-config then falls back to cp /proc/vmcore vmcore.xxxxxx, producing a raw ELF-format vmcore file:

        log_action_msg "running makedumpfile $MAKEDUMP_ARGS $vmcore_file $KDUMP_CORETEMP"
        makedumpfile $MAKEDUMP_ARGS $vmcore_file $KDUMP_CORETEMP        // first try makedumpfile to produce a compressed dump
        ERROR=$?
        if [ $ERROR -ne 0 ] ; then                                        // if makedumpfile failed
                log_failure_msg "$NAME: makedumpfile failed, falling back to 'cp'"
                logger -t $NAME "makedumpfile failed, falling back to 'cp'"
                KDUMP_CORETEMP="$KDUMP_STAMPDIR/vmcore-incomplete"
                KDUMP_COREFILE="$KDUMP_STAMPDIR/vmcore.$KDUMP_STAMP"
                cp $vmcore_file $KDUMP_CORETEMP                            // fall back to cp'ing the raw ELF vmcore
                ERROR=$?
        fi

2. How It Works

kexec implements loading of the crash kernel. It has two core parts:

  • kexec_file_load()/kexec_load(): loads the backup kernel and initrd into memory ahead of time.
  • __crash_kexec(): jumps to the backup kernel when a fault occurs.

kdump's main job is to copy the vmcore file from memory to disk, slimming it down along the way.

This article does not analyze the kexec kernel-loading and address-translation flow, or kdump's copy-and-trim logic, in detail; we focus on two key files, /proc/kcore and /proc/vmcore. Specifically:

  • /proc/kcore: in the normal kernel, presents the normal kernel's memory as an ELF core file, allowing gdb to debug the running system online; since the system is debugging itself, some limitations apply.
  • /proc/vmcore: in the crash kernel, presents the normal kernel's memory as an ELF core file; since the normal kernel has stopped running by then, debugging is unrestricted. The dump file kdump ultimately produces is just /proc/vmcore copied from memory to disk, possibly with some trimming and compression.

So /proc/kcore and /proc/vmcore are the heart of the whole mechanism, and we focus on how these two are implemented.

As for the ELF file format, we are familiar with three of its types: .o files (ET_REL), executables (ET_EXEC), and .so files (ET_DYN). But its fourth type, the core file (ET_CORE), has always seemed mysterious, and it is striking that gdb can restore the crash scene just by loading one.

Here is the rough layout of an ELF core file:

An ELF core file cares only about runtime state, so it contains only segment information, no section information. It holds two kinds of segments:

  • 1. PT_LOAD: each such segment records one memory region, together with the region's physical address, virtual address, and length.
  • 2. PT_NOTE: a segment specific to ELF core files, recording the key information needed to interpret the memory regions. The PT_NOTE segment is divided into multiple elf_note structures: NT_PRSTATUS notes record the CPU register state at the moment of the crash, NT_TASKSTRUCT records the task_struct of the current process, and, most crucially, a custom note of type 0 named VMCOREINFO records key facts about the kernel.

Most of an ELF core file is PT_LOAD segments recording memory contents, but the key to making sense of that memory is stored in the PT_NOTE segment.
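To make the elf_note layout concrete, here is a small Python sketch that builds and parses PT_NOTE entries the way the kernel's append_elf_note()/final_note() lay them out: a (namesz, descsz, type) header, followed by the name and descriptor each padded to 4 bytes, with an all-zero note terminating the list. The sample note contents are made up for illustration:

```python
import struct

def parse_notes(buf: bytes):
    """Parse a PT_NOTE byte stream into (name, type, descriptor) tuples."""
    off, notes = 0, []
    while off + 12 <= len(buf):
        namesz, descsz, ntype = struct.unpack_from("<III", buf, off)
        if namesz == 0 and descsz == 0 and ntype == 0:
            break  # all-zero note terminates the list (cf. final_note())
        off += 12
        name = buf[off:off + namesz].rstrip(b"\0").decode()
        off += (namesz + 3) & ~3          # name is padded to 4 bytes
        desc = buf[off:off + descsz]
        off += (descsz + 3) & ~3          # descriptor is padded to 4 bytes
        notes.append((name, ntype, desc))
    return notes

def make_note(name: bytes, ntype: int, desc: bytes) -> bytes:
    """Serialize one elf_note; namesz counts the trailing NUL of the name."""
    pad = lambda b: b + b"\0" * (-len(b) % 4)
    hdr = struct.pack("<III", len(name) + 1, len(desc), ntype)
    return hdr + pad(name + b"\0") + pad(desc)

NT_PRSTATUS = 1
# Synthetic notes: a zeroed prstatus and a tiny VMCOREINFO payload.
buf = (make_note(b"CORE", NT_PRSTATUS, b"\0" * 0x150) +
       make_note(b"VMCOREINFO", 0, b"OSRELEASE=5.8.18+\n"))
for name, ntype, desc in parse_notes(buf):
    print(name, hex(ntype), len(desc))
```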

Let's look at a concrete vmcore file:

  • 1. First, examine the ELF header:

    $ sudo readelf -e vmcore.202107011132
    ELF Header:
    Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
    Class: ELF64
    Data: 2's complement, little endian
    Version: 1 (current)
    OS/ABI: UNIX - System V
    ABI Version: 0
    Type: CORE (Core file) // the file type is ET_CORE
    Machine: Advanced Micro Devices X86-64
    Version: 0x1
    Entry point address: 0x0
    Start of program headers: 64 (bytes into file)
    Start of section headers: 0 (bytes into file)
    Flags: 0x0
    Size of this header: 64 (bytes)
    Size of program headers: 56 (bytes)
    Number of program headers: 6
    Size of section headers: 0 (bytes)
    Number of section headers: 0
    Section header string table index: 0

    There are no sections in this file.

    // both PT_NOTE and PT_LOAD segment types are present
    Program Headers:
    Type Offset VirtAddr PhysAddr
    FileSiz MemSiz Flags Align
    NOTE 0x0000000000001000 0x0000000000000000 0x0000000000000000
    0x0000000000001318 0x0000000000001318 0x0
    LOAD 0x0000000000003000 0xffffffffb7200000 0x0000000006c00000
    0x000000000202c000 0x000000000202c000 RWE 0x0
    LOAD 0x000000000202f000 0xffff903a00001000 0x0000000000001000
    0x000000000009d800 0x000000000009d800 RWE 0x0
    LOAD 0x00000000020cd000 0xffff903a00100000 0x0000000000100000
    0x0000000072f00000 0x0000000072f00000 RWE 0x0
    LOAD 0x0000000074fcd000 0xffff903a7f000000 0x000000007f000000
    0x0000000000ee0000 0x0000000000ee0000 RWE 0x0
    LOAD 0x0000000075ead000 0xffff903a7ff00000 0x000000007ff00000
    0x0000000000100000 0x0000000000100000 RWE 0x0

  • 2. We can further inspect what the PT_NOTE segment stores:

    $ sudo readelf -n vmcore.202107011132

    Displaying notes found at file offset 0x00001000 with length 0x00001318:
    Owner Data size Description
    CORE 0x00000150 NT_PRSTATUS (prstatus structure) // the system has 8 CPUs, so 8 prstatus notes are saved
    CORE 0x00000150 NT_PRSTATUS (prstatus structure)
    CORE 0x00000150 NT_PRSTATUS (prstatus structure)
    CORE 0x00000150 NT_PRSTATUS (prstatus structure)
    CORE 0x00000150 NT_PRSTATUS (prstatus structure)
    CORE 0x00000150 NT_PRSTATUS (prstatus structure)
    CORE 0x00000150 NT_PRSTATUS (prstatus structure)
    CORE 0x00000150 NT_PRSTATUS (prstatus structure)
    VMCOREINFO 0x000007dd Unknown note type: (0x00000000) // the custom VMCOREINFO note
    description data: 4f 53 52 45 4c 45 41 53 45 3d 35 2e 38 2e 31 38 2b 0a 50 41 47 45 53 49 5a 45 3d 34 30 39 36 0a 53 59 4d 42 4f 4c 28 69 6e 69 74 5f 75 74 73 5f 6e 73 29 3d 66 66 66 66 66 66 66 66 …

  • 3. We can decode the VMCOREINFO contents further; converting the hex byte stream after "description data" yields:

    OSRELEASE=5.8.0-43-generic
    PAGESIZE=4096
    SYMBOL(init_uts_ns)=ffffffffa5014620
    SYMBOL(node_online_map)=ffffffffa5276720
    SYMBOL(swapper_pg_dir)=ffffffffa500a000
    SYMBOL(_stext)=ffffffffa3a00000
    SYMBOL(vmap_area_list)=ffffffffa50f2560
    SYMBOL(mem_section)=ffff91673ffd2000
    LENGTH(mem_section)=2048
    SIZE(mem_section)=16
    OFFSET(mem_section.section_mem_map)=0
    SIZE(page)=64
    SIZE(pglist_data)=171968
    SIZE(zone)=1472
    SIZE(free_area)=88
    SIZE(list_head)=16
    SIZE(nodemask_t)=128
    OFFSET(page.flags)=0
    OFFSET(page._refcount)=52
    OFFSET(page.mapping)=24
    OFFSET(page.lru)=8
    OFFSET(page._mapcount)=48
    OFFSET(page.private)=40
    OFFSET(page.compound_dtor)=16
    OFFSET(page.compound_order)=17
    OFFSET(page.compound_head)=8
    OFFSET(pglist_data.node_zones)=0
    OFFSET(pglist_data.nr_zones)=171232
    OFFSET(pglist_data.node_start_pfn)=171240
    OFFSET(pglist_data.node_spanned_pages)=171256
    OFFSET(pglist_data.node_id)=171264
    OFFSET(zone.free_area)=192
    OFFSET(zone.vm_stat)=1280
    OFFSET(zone.spanned_pages)=120
    OFFSET(free_area.free_list)=0
    OFFSET(list_head.next)=0
    OFFSET(list_head.prev)=8
    OFFSET(vmap_area.va_start)=0
    OFFSET(vmap_area.list)=40
    LENGTH(zone.free_area)=11
    SYMBOL(log_buf)=ffffffffa506a6e0
    SYMBOL(log_buf_len)=ffffffffa506a6dc
    SYMBOL(log_first_idx)=ffffffffa55f55d8
    SYMBOL(clear_idx)=ffffffffa55f55a4
    SYMBOL(log_next_idx)=ffffffffa55f55c8
    SIZE(printk_log)=16
    OFFSET(printk_log.ts_nsec)=0
    OFFSET(printk_log.len)=8
    OFFSET(printk_log.text_len)=10
    OFFSET(printk_log.dict_len)=12
    LENGTH(free_area.free_list)=5
    NUMBER(NR_FREE_PAGES)=0
    NUMBER(PG_lru)=4
    NUMBER(PG_private)=13
    NUMBER(PG_swapcache)=10
    NUMBER(PG_swapbacked)=19
    NUMBER(PG_slab)=9
    NUMBER(PG_hwpoison)=23
    NUMBER(PG_head_mask)=65536
    NUMBER(PAGE_BUDDY_MAPCOUNT_VALUE)=-129
    NUMBER(HUGETLB_PAGE_DTOR)=2
    NUMBER(PAGE_OFFLINE_MAPCOUNT_VALUE)=-257
    NUMBER(phys_base)=1073741824
    SYMBOL(init_top_pgt)=ffffffffa500a000
    NUMBER(pgtable_l5_enabled)=0
    SYMBOL(node_data)=ffffffffa5271da0
    LENGTH(node_data)=1024
    KERNELOFFSET=22a00000
    NUMBER(KERNEL_IMAGE_SIZE)=1073741824
    NUMBER(sme_mask)=0
    CRASHTIME=1623937823
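Tools such as makedumpfile and crash consume this text by splitting it into key=value pairs. A minimal parsing sketch, using a few sample lines taken from the listing above:

```python
def parse_vmcoreinfo(text: str) -> dict:
    """Split VMCOREINFO key=value lines into a dict; the SYMBOL()/OFFSET()/
    NUMBER() wrappers stay part of the key, values stay as strings."""
    info = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, val = line.partition("=")
            info[key] = val
    return info

sample = """OSRELEASE=5.8.0-43-generic
PAGESIZE=4096
SYMBOL(init_uts_ns)=ffffffffa5014620
NUMBER(phys_base)=1073741824
KERNELOFFSET=22a00000"""

info = parse_vmcoreinfo(sample)
print(info["PAGESIZE"])                      # → 4096
print(int(info["SYMBOL(init_uts_ns)"], 16))  # symbol address as an integer
```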


3. /proc/kcore

When cleaning up disk space, people often run into the /proc/kcore file, because the size it reports is enormous, sometimes as much as 128 TB. In reality it occupies no disk space at all: it is a file in an in-memory filesystem. Nor does it occupy much memory; apart from a small amount for control headers, the bulk of the space is simulated, read from the corresponding memory only when a user actually reads the file.

The previous section introduced /proc/kcore as the current system's memory presented as an ELF core file, which gdb can debug online. This section looks at how that simulation is done.

Initialization is the process of building the kclist_head list; each list entry corresponds to one PT_LOAD segment. At read time, these entries are presented as ELF PT_LOAD segments.

static int __init proc_kcore_init(void)
{
    /* (1) Create the /proc/kcore file */
    proc_root_kcore = proc_create("kcore", S_IRUSR, NULL, &kcore_proc_ops);
    if (!proc_root_kcore) {
        pr_err("couldn't create /proc/kcore\n");
        return 0; /* Always returns 0. */
    }
    /* Store text area if it's special */
    /* (2) Add the kernel text section _text to the kclist_head list; each entry in kclist_head corresponds to one PT_LOAD segment */
    proc_kcore_text_init();
    /* Store vmalloc area */
    /* (3) Add the VMALLOC region to the kclist_head list */
    kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
        VMALLOC_END - VMALLOC_START, KCORE_VMALLOC);
    /* (4) Add the MODULES_VADDR module area to the kclist_head list */
    add_modules_range();
    /* Store direct-map area from physical memory map */
    /* (5) Walk the system memory map and add valid RAM to the kclist_head list */
    kcore_update_ram();
    register_hotmemory_notifier(&kcore_callback_nb);

    return 0;
}

↓

static int kcore_update_ram(void)
{
    LIST_HEAD(list);
    LIST_HEAD(garbage);
    int nphdr;
    size_t phdrs_len, notes_len, data_offset;
    struct kcore_list *tmp, *pos;
    int ret = 0;

    down_write(&kclist_lock);
    if (!xchg(&kcore_need_update, 0))
        goto out;

    /* (5.1) Walk the system memory map and add regions matching `IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY` to the local list */
    ret = kcore_ram_list(&list);
    if (ret) {
        /* Couldn't get the RAM list, try again next time. */
        WRITE_ONCE(kcore_need_update, 1);
        list_splice_tail(&list, &garbage);
        goto out;
    }

    /* (5.2) Drop the existing KCORE_RAM/KCORE_VMEMMAP entries from kclist_head, since the new list supersedes them */
    list_for_each_entry_safe(pos, tmp, &kclist_head, list) {
        if (pos->type == KCORE_RAM || pos->type == KCORE_VMEMMAP)
            list_move(&pos->list, &garbage);
    }
    /* (5.3) Splice the new list onto the tail of kclist_head */
    list_splice_tail(&list, &kclist_head);

    /* (5.4) Update the number of entries in kclist_head; each entry represents one PT_LOAD segment.
            Compute the length of the PT_NOTE segment.
            Compute the length of the `/proc/kcore` file; this size is virtual, bounded by the maximum span of the virtual address space.
     */
    proc_root_kcore->size = get_kcore_size(&nphdr, &phdrs_len, &notes_len,
                           &data_offset);

out:
    up_write(&kclist_lock);
    /* (5.5) Free the memory of the list entries removed above */
    list_for_each_entry_safe(pos, tmp, &garbage, list) {
        list_del(&pos->list);
        kfree(pos);
    }
    return ret;
}

A key step is walking the system memory map; the relevant code:

kcore_ram_list() → walk_system_ram_range():

int walk_system_ram_range(unsigned long start_pfn, unsigned long nr_pages,
              void *arg, int (*func)(unsigned long, unsigned long, void *))
{
    resource_size_t start, end;
    unsigned long flags;
    struct resource res;
    unsigned long pfn, end_pfn;
    int ret = -EINVAL;

    start = (u64) start_pfn << PAGE_SHIFT;
    end = ((u64)(start_pfn + nr_pages) << PAGE_SHIFT) - 1;
    /* (5.1.1) Search the iomem_resource tree for resource ranges matching IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY */
    flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
    while (start < end &&
           !find_next_iomem_res(start, end, flags, IORES_DESC_NONE,
                    false, &res)) {
        pfn = PFN_UP(res.start);
        end_pfn = PFN_DOWN(res.end + 1);
        if (end_pfn > pfn)
            ret = (*func)(pfn, end_pfn - pfn, arg);
        if (ret)
            break;
        start = res.end + 1;
    }
    return ret;
}

This is essentially equivalent to running:

$ sudo cat /proc/iomem | grep "System RAM"
00001000-0009e7ff : System RAM
00100000-7fedffff : System RAM
7ff00000-7fffffff : System RAM
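The same "System RAM" ranges can be pulled out of /proc/iomem text with a few lines of Python. A sketch, using sample lines modeled on the output above (the PCI line is invented to show non-RAM entries being skipped):

```python
def system_ram_ranges(iomem_text: str):
    """Extract (start, end) physical ranges labeled 'System RAM',
    mirroring what walk_system_ram_range() iterates over."""
    ranges = []
    for line in iomem_text.splitlines():
        res, _, name = line.partition(" : ")
        if name.strip() == "System RAM":
            start, _, end = res.strip().partition("-")
            ranges.append((int(start, 16), int(end, 16)))
    return ranges

sample = """00001000-0009e7ff : System RAM
000a0000-000bffff : PCI Bus 0000:00
00100000-7fedffff : System RAM
7ff00000-7fffffff : System RAM"""

for start, end in system_ram_ranges(sample):
    print(f"{start:#x}-{end:#x} ({(end - start + 1) >> 20} MiB)")
```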

With the data prepared, the file is rendered in ELF core format when /proc/kcore is read.

static const struct proc_ops kcore_proc_ops = {
    .proc_read  = read_kcore,
    .proc_open  = open_kcore,
    .proc_release   = release_kcore,
    .proc_lseek = default_llseek,
};

↓

static ssize_t
read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
{
    char *buf = file->private_data;
    size_t phdrs_offset, notes_offset, data_offset;
    size_t phdrs_len, notes_len;
    struct kcore_list *m;
    size_t tsz;
    int nphdr;
    unsigned long start;
    size_t orig_buflen = buflen;
    int ret = 0;

    down_read(&kclist_lock);

    /* (1) Get the number of PT_LOAD segments, the PT_NOTE segment length, etc., and start constructing the ELF core file on the fly */
    get_kcore_size(&nphdr, &phdrs_len, &notes_len, &data_offset);
    phdrs_offset = sizeof(struct elfhdr);
    notes_offset = phdrs_offset + phdrs_len;

    /* ELF file header. */
    /* (2) Build the ELF file header and copy it to the user's read buffer */
    if (buflen && *fpos < sizeof(struct elfhdr)) {
        struct elfhdr ehdr = {
            .e_ident = {
                [EI_MAG0] = ELFMAG0,
                [EI_MAG1] = ELFMAG1,
                [EI_MAG2] = ELFMAG2,
                [EI_MAG3] = ELFMAG3,
                [EI_CLASS] = ELF_CLASS,
                [EI_DATA] = ELF_DATA,
                [EI_VERSION] = EV_CURRENT,
                [EI_OSABI] = ELF_OSABI,
            },
            .e_type = ET_CORE,
            .e_machine = ELF_ARCH,
            .e_version = EV_CURRENT,
            .e_phoff = sizeof(struct elfhdr),
            .e_flags = ELF_CORE_EFLAGS,
            .e_ehsize = sizeof(struct elfhdr),
            .e_phentsize = sizeof(struct elf_phdr),
            .e_phnum = nphdr,
        };

        tsz = min_t(size_t, buflen, sizeof(struct elfhdr) - *fpos);
        if (copy_to_user(buffer, (char *)&ehdr + *fpos, tsz)) {
            ret = -EFAULT;
            goto out;
        }

        buffer += tsz;
        buflen -= tsz;
        *fpos += tsz;
    }

    /* ELF program headers. */
    /* (3) Build the ELF program headers and copy them to the user's read buffer */
    if (buflen && *fpos < phdrs_offset + phdrs_len) {
        struct elf_phdr *phdrs, *phdr;

        phdrs = kzalloc(phdrs_len, GFP_KERNEL);
        if (!phdrs) {
            ret = -ENOMEM;
            goto out;
        }

        /* (3.1) The PT_NOTE segment needs no physical or virtual address */
        phdrs[0].p_type = PT_NOTE;
        phdrs[0].p_offset = notes_offset;
        phdrs[0].p_filesz = notes_len;

        phdr = &phdrs[1];
        /* (3.2) Fill in the physical address, virtual address, and length of each PT_LOAD segment */
        list_for_each_entry(m, &kclist_head, list) {
            phdr->p_type = PT_LOAD;
            phdr->p_flags = PF_R | PF_W | PF_X;
            phdr->p_offset = kc_vaddr_to_offset(m->addr) + data_offset;
            if (m->type == KCORE_REMAP)
                phdr->p_vaddr = (size_t)m->vaddr;
            else
                phdr->p_vaddr = (size_t)m->addr;
            if (m->type == KCORE_RAM || m->type == KCORE_REMAP)
                phdr->p_paddr = __pa(m->addr);
            else if (m->type == KCORE_TEXT)
                phdr->p_paddr = __pa_symbol(m->addr);
            else
                phdr->p_paddr = (elf_addr_t)-1;
            phdr->p_filesz = phdr->p_memsz = m->size;
            phdr->p_align = PAGE_SIZE;
            phdr++;
        }

        tsz = min_t(size_t, buflen, phdrs_offset + phdrs_len - *fpos);
        if (copy_to_user(buffer, (char *)phdrs + *fpos - phdrs_offset,
                 tsz)) {
            kfree(phdrs);
            ret = -EFAULT;
            goto out;
        }
        kfree(phdrs);

        buffer += tsz;
        buflen -= tsz;
        *fpos += tsz;
    }

    /* ELF note segment. */
    /* (4) Build the PT_NOTE segment and copy it to the user's read buffer */
    if (buflen && *fpos < notes_offset + notes_len) {
        struct elf_prstatus prstatus = {};
        struct elf_prpsinfo prpsinfo = {
            .pr_sname = 'R',
            .pr_fname = "vmlinux",
        };
        char *notes;
        size_t i = 0;

        strlcpy(prpsinfo.pr_psargs, saved_command_line,
            sizeof(prpsinfo.pr_psargs));

        notes = kzalloc(notes_len, GFP_KERNEL);
        if (!notes) {
            ret = -ENOMEM;
            goto out;
        }

        /* (4.1) Append NT_PRSTATUS */
        append_kcore_note(notes, &i, CORE_STR, NT_PRSTATUS, &prstatus,
                  sizeof(prstatus));
        /* (4.2) Append NT_PRPSINFO */
        append_kcore_note(notes, &i, CORE_STR, NT_PRPSINFO, &prpsinfo,
                  sizeof(prpsinfo));
        /* (4.3) Append NT_TASKSTRUCT */
        append_kcore_note(notes, &i, CORE_STR, NT_TASKSTRUCT, current,
                  arch_task_struct_size);
        /*
         * vmcoreinfo_size is mostly constant after init time, but it
         * can be changed by crash_save_vmcoreinfo(). Racing here with a
         * panic on another CPU before the machine goes down is insanely
         * unlikely, but it's better to not leave potential buffer
         * overflows lying around, regardless.
         */
        /* (4.4) Append VMCOREINFO */
        append_kcore_note(notes, &i, VMCOREINFO_NOTE_NAME, 0,
                  vmcoreinfo_data,
                  min(vmcoreinfo_size, notes_len - i));

        tsz = min_t(size_t, buflen, notes_offset + notes_len - *fpos);
        if (copy_to_user(buffer, notes + *fpos - notes_offset, tsz)) {
            kfree(notes);
            ret = -EFAULT;
            goto out;
        }
        kfree(notes);

        buffer += tsz;
        buflen -= tsz;
        *fpos += tsz;
    }

    /*
     * Check to see if our file offset matches with any of
     * the addresses in the elf_phdr on our list.
     */
    start = kc_offset_to_vaddr(*fpos - data_offset);
    if ((tsz = (PAGE_SIZE - (start & ~PAGE_MASK))) > buflen)
        tsz = buflen;

    m = NULL;
    /* (5) Build the PT_LOAD segment data and copy it to the user's read buffer */
    while (buflen) {
        /*
         * If this is the first iteration or the address is not within
         * the previous entry, search for a matching entry.
         */
        if (!m || start < m->addr || start >= m->addr + m->size) {
            list_for_each_entry(m, &kclist_head, list) {
                if (start >= m->addr &&
                    start < m->addr + m->size)
                    break;
            }
        }

        if (&m->list == &kclist_head) {
            if (clear_user(buffer, tsz)) {
                ret = -EFAULT;
                goto out;
            }
            m = NULL;   /* skip the list anchor */
        } else if (!pfn_is_ram(__pa(start) >> PAGE_SHIFT)) {
            if (clear_user(buffer, tsz)) {
                ret = -EFAULT;
                goto out;
            }
        } else if (m->type == KCORE_VMALLOC) {
            vread(buf, (char *)start, tsz);
            /* we have to zero-fill user buffer even if no read */
            if (copy_to_user(buffer, buf, tsz)) {
                ret = -EFAULT;
                goto out;
            }
        } else if (m->type == KCORE_USER) {
            /* User page is handled prior to normal kernel page: */
            if (copy_to_user(buffer, (char *)start, tsz)) {
                ret = -EFAULT;
                goto out;
            }
        } else {
            if (kern_addr_valid(start)) {
                /*
                 * Using bounce buffer to bypass the
                 * hardened user copy kernel text checks.
                 */
                if (copy_from_kernel_nofault(buf, (void *)start,
                        tsz)) {
                    if (clear_user(buffer, tsz)) {
                        ret = -EFAULT;
                        goto out;
                    }
                } else {
                    if (copy_to_user(buffer, buf, tsz)) {
                        ret = -EFAULT;
                        goto out;
                    }
                }
            } else {
                if (clear_user(buffer, tsz)) {
                    ret = -EFAULT;
                    goto out;
                }
            }
        }
        buflen -= tsz;
        *fpos += tsz;
        buffer += tsz;
        start += tsz;
        tsz = (buflen > PAGE_SIZE ? PAGE_SIZE : buflen);
    }

out:
    up_read(&kclist_lock);
    if (ret)
        return ret;
    return orig_buflen - buflen;
}

4. /proc/vmcore

/proc/vmcore is the normal kernel's memory presented as an ELF core file from within the crash kernel.

Its file-format construction is similar to that of /proc/kcore in the previous section; the difference is that its data preparation is split into two stages:

  • The normal kernel prepares the elf header in advance.
  • The crash kernel wraps the handed-over elf header into the /proc/vmcore file and saves it to disk.

Let's analyze the process in detail.

When a fault occurs the system state is very unstable and time is short, so the normal kernel prepares the /proc/vmcore elf header data as early as possible, even though the normal kernel itself never exposes /proc/vmcore; only the crash kernel does.

When kexec_tools loads the crash kernel via the kexec_file_load() syscall, most of the data needed for /proc/vmcore's elf header is prepared along the way:

kexec_file_load() → kimage_file_alloc_init() → kimage_file_prepare_segments() → arch_kexec_kernel_image_load() → image->fops->load() → kexec_bzImage64_ops.load() → bzImage64_load() → crash_load_segments() → prepare_elf_headers() → crash_prepare_elf64_headers():

static int prepare_elf_headers(struct kimage *image, void **addr,
                    unsigned long *sz)
{
    struct crash_mem *cmem;
    int ret;

    /* (1) Walk the system memory map to count valid memory regions, and allocate the cmem array accordingly */
    cmem = fill_up_crash_elf_data();
    if (!cmem)
        return -ENOMEM;

    /* (2) Walk the system memory map again, recording the valid regions into cmem */
    ret = walk_system_ram_res(0, -1, cmem, prepare_elf64_ram_headers_callback);
    if (ret)
        goto out;

    /* Exclude unwanted mem ranges */
    /* (3) Exclude memory ranges that will not be used */
    ret = elf_header_exclude_ranges(cmem);
    if (ret)
        goto out;

    /* By default prepare 64bit headers */
    /* (4) Start building the elf header */
    ret =  crash_prepare_elf64_headers(cmem, IS_ENABLED(CONFIG_X86_64), addr, sz);

out:
    vfree(cmem);
    return ret;
}

↓

int crash_prepare_elf64_headers(struct crash_mem *mem, int kernel_map,
              void **addr, unsigned long *sz)
{
    Elf64_Ehdr *ehdr;
    Elf64_Phdr *phdr;
    unsigned long nr_cpus = num_possible_cpus(), nr_phdr, elf_sz;
    unsigned char *buf;
    unsigned int cpu, i;
    unsigned long long notes_addr;
    unsigned long mstart, mend;

    /* extra phdr for vmcoreinfo elf note */
    nr_phdr = nr_cpus + 1;
    nr_phdr += mem->nr_ranges;

    /*
     * kexec-tools creates an extra PT_LOAD phdr for kernel text mapping
     * area (for example, ffffffff80000000 - ffffffffa0000000 on x86_64).
     * I think this is required by tools like gdb. So same physical
     * memory will be mapped in two elf headers. One will contain kernel
     * text virtual addresses and other will have __va(physical) addresses.
     */

    nr_phdr++;
    elf_sz = sizeof(Elf64_Ehdr) + nr_phdr * sizeof(Elf64_Phdr);
    elf_sz = ALIGN(elf_sz, ELF_CORE_HEADER_ALIGN);

    buf = vzalloc(elf_sz);
    if (!buf)
        return -ENOMEM;

    /* (4.1) Build the ELF file header */
    ehdr = (Elf64_Ehdr *)buf;
    phdr = (Elf64_Phdr *)(ehdr + 1);
    memcpy(ehdr->e_ident, ELFMAG, SELFMAG);
    ehdr->e_ident[EI_CLASS] = ELFCLASS64;
    ehdr->e_ident[EI_DATA] = ELFDATA2LSB;
    ehdr->e_ident[EI_VERSION] = EV_CURRENT;
    ehdr->e_ident[EI_OSABI] = ELF_OSABI;
    memset(ehdr->e_ident + EI_PAD, 0, EI_NIDENT - EI_PAD);
    ehdr->e_type = ET_CORE;
    ehdr->e_machine = ELF_ARCH;
    ehdr->e_version = EV_CURRENT;
    ehdr->e_phoff = sizeof(Elf64_Ehdr);
    ehdr->e_ehsize = sizeof(Elf64_Ehdr);
    ehdr->e_phentsize = sizeof(Elf64_Phdr);

    /* Prepare one phdr of type PT_NOTE for each present cpu */
    /* (4.2) Build the ELF program headers.
            Each present CPU gets its own PT_NOTE segment,
            whose data lives in the per_cpu_ptr(crash_notes, cpu) variable.
            Note that crash_notes holds no data yet; only its physical address is recorded here. The actual data is stored only after a crash occurs.
     */
    for_each_present_cpu(cpu) {
        phdr->p_type = PT_NOTE;
        notes_addr = per_cpu_ptr_to_phys(per_cpu_ptr(crash_notes, cpu));
        phdr->p_offset = phdr->p_paddr = notes_addr;
        phdr->p_filesz = phdr->p_memsz = sizeof(note_buf_t);
        (ehdr->e_phnum)++;
        phdr++;
    }

    /* Prepare one PT_NOTE header for vmcoreinfo */
    /* (4.3) Build the ELF program header: VMCOREINFO gets its own PT_NOTE segment.
            Again, only the physical address of vmcoreinfo_note is recorded here; the actual data is filled in later, in several stages.
     */
    phdr->p_type = PT_NOTE;
    phdr->p_offset = phdr->p_paddr = paddr_vmcoreinfo_note();
    phdr->p_filesz = phdr->p_memsz = VMCOREINFO_NOTE_SIZE;
    (ehdr->e_phnum)++;
    phdr++;

    /* Prepare PT_LOAD type program header for kernel text region */
    /* (4.4) Build the ELF program header: the PT_LOAD segment covering the kernel text region */
    if (kernel_map) {
        phdr->p_type = PT_LOAD;
        phdr->p_flags = PF_R|PF_W|PF_X;
        phdr->p_vaddr = (unsigned long) _text;
        phdr->p_filesz = phdr->p_memsz = _end - _text;
        phdr->p_offset = phdr->p_paddr = __pa_symbol(_text);
        ehdr->e_phnum++;
        phdr++;
    }

    /* Go through all the ranges in mem->ranges[] and prepare phdr */
    /* (4.5) Walk cmem and create a PT_LOAD segment for each valid memory range in the system */
    for (i = 0; i < mem->nr_ranges; i++) {
        mstart = mem->ranges[i].start;
        mend = mem->ranges[i].end;

        phdr->p_type = PT_LOAD;
        phdr->p_flags = PF_R|PF_W|PF_X;
        phdr->p_offset  = mstart;

        phdr->p_paddr = mstart;
        phdr->p_vaddr = (unsigned long) __va(mstart);
        phdr->p_filesz = phdr->p_memsz = mend - mstart + 1;
        phdr->p_align = 0;
        ehdr->e_phnum++;
        phdr++;
        pr_debug("Crash PT_LOAD elf header. phdr=%p vaddr=0x%llx, paddr=0x%llx, sz=0x%llx e_phnum=%d p_offset=0x%llx\n",
            phdr, phdr->p_vaddr, phdr->p_paddr, phdr->p_filesz,
            ehdr->e_phnum, phdr->p_offset);
    }

    *addr = buf;
    *sz = elf_sz;
    return 0;
}
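The overall header layout this function emits can be reproduced in a small userspace sketch. This is a minimal illustration, not the kernel code: `build_core_headers` is a hypothetical helper, and the addresses fed to it below are invented.

```c
#include <assert.h>
#include <elf.h>
#include <string.h>

/* A userspace sketch of the layout crash_prepare_elf64_headers() emits:
 * one PT_NOTE per present CPU, one PT_NOTE for vmcoreinfo, and one
 * PT_LOAD per memory range. Returns the total size of the headers. */
static size_t build_core_headers(unsigned char *buf, int nr_cpus,
                                 const Elf64_Addr *starts,
                                 const Elf64_Xword *sizes, int nr_ranges)
{
    Elf64_Ehdr *ehdr = (Elf64_Ehdr *)buf;
    Elf64_Phdr *phdr = (Elf64_Phdr *)(ehdr + 1);
    size_t hdrs_sz = sizeof(*ehdr) +
                     (nr_cpus + 1 + nr_ranges) * sizeof(*phdr);
    int i;

    memset(buf, 0, hdrs_sz);
    memcpy(ehdr->e_ident, ELFMAG, SELFMAG);
    ehdr->e_ident[EI_CLASS] = ELFCLASS64;
    ehdr->e_ident[EI_DATA] = ELFDATA2LSB;
    ehdr->e_ident[EI_VERSION] = EV_CURRENT;
    ehdr->e_type = ET_CORE;                 /* vmcore is an ELF core file */
    ehdr->e_version = EV_CURRENT;
    ehdr->e_phoff = sizeof(*ehdr);
    ehdr->e_ehsize = sizeof(*ehdr);
    ehdr->e_phentsize = sizeof(*phdr);

    for (i = 0; i < nr_cpus; i++) {         /* (4.2) one PT_NOTE per CPU */
        phdr->p_type = PT_NOTE;
        ehdr->e_phnum++;
        phdr++;
    }
    phdr->p_type = PT_NOTE;                 /* (4.3) PT_NOTE for vmcoreinfo */
    ehdr->e_phnum++;
    phdr++;
    for (i = 0; i < nr_ranges; i++) {       /* (4.5) PT_LOAD per memory range */
        phdr->p_type = PT_LOAD;
        phdr->p_flags = PF_R | PF_W | PF_X;
        phdr->p_offset = phdr->p_paddr = starts[i];
        phdr->p_filesz = phdr->p_memsz = sizes[i];
        ehdr->e_phnum++;
        phdr++;
    }
    return hdrs_sz;
}
```

With two CPUs and two memory ranges this yields five program headers, matching the e_phnum bookkeeping in the kernel function above.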

4.1.1 Updating the crash_notes data

The actual CPU register data is saved into crash_notes only after a panic has occurred. The update paths are:

__crash_kexec() → machine_crash_shutdown() → crash_save_cpu():
ipi_cpu_crash_stop() → crash_save_cpu():

void crash_save_cpu(struct pt_regs *regs, int cpu)
{
    struct elf_prstatus prstatus;
    u32 *buf;

    if ((cpu < 0) || (cpu >= nr_cpu_ids))
        return;

    /* Using ELF notes here is opportunistic.
     * I need a well defined structure format
     * for the data I pass, and I need tags
     * on the data to indicate what information I have
     * squirrelled away.  ELF notes happen to provide
     * all of that, so there is no need to invent something new.
     */
    buf = (u32 *)per_cpu_ptr(crash_notes, cpu);
    if (!buf)
        return;
    /* (1) Zero the structure */
    memset(&prstatus, 0, sizeof(prstatus));
    /* (2) Save the pid */
    prstatus.pr_pid = current->pid;
    /* (3) Save the registers */
    elf_core_copy_kernel_regs(&prstatus.pr_reg, regs);
    /* (4) Store it into crash_notes in elf_note format */
    buf = append_elf_note(buf, KEXEC_CORE_NOTE_NAME, NT_PRSTATUS,
                  &prstatus, sizeof(prstatus));
    /* (5) Append an all-zero elf_note as the terminator */
    final_note(buf);
}
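The note format that append_elf_note() and final_note() produce can be sketched in userspace. The packing below mirrors the ELF note layout (an Elf64_Nhdr, the owner name, then the payload, each padded to a 4-byte boundary); the function names here are hypothetical stand-ins, not the kernel helpers.

```c
#include <assert.h>
#include <elf.h>
#include <string.h>

/* Sketch of append_elf_note(): pack one note and return the position
 * just past it, so notes can be chained. */
static Elf64_Word *pack_note(Elf64_Word *buf, const char *name,
                             Elf64_Word type, const void *data, size_t len)
{
    Elf64_Nhdr *note = (Elf64_Nhdr *)buf;

    note->n_namesz = strlen(name) + 1;
    note->n_descsz = len;
    note->n_type = type;
    buf += sizeof(*note) / sizeof(Elf64_Word);
    memcpy(buf, name, note->n_namesz);
    buf += (note->n_namesz + 3) / 4;   /* pad the name to 4 bytes */
    memcpy(buf, data, len);
    buf += (len + 3) / 4;              /* pad the payload to 4 bytes */
    return buf;
}

/* Sketch of final_note(): the list is terminated by an all-zero note header */
static void pack_final_note(Elf64_Word *buf)
{
    memset(buf, 0, sizeof(Elf64_Nhdr));
}
```

A CORE/NT_PRSTATUS note packed this way is exactly what readelf -n later reports for each CPU.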

4.1.2 Updating the vmcoreinfo_note data

vmcoreinfo_note is updated in two stages:

  • 1. The first stage prepares most of the data during system initialization:

    static int __init crash_save_vmcoreinfo_init(void)
    {
    /* (1.1) Allocate the vmcoreinfo_data buffer */
    vmcoreinfo_data = (unsigned char *)get_zeroed_page(GFP_KERNEL);
    if (!vmcoreinfo_data) {
        pr_warn("Memory allocation for vmcoreinfo_data failed\n");
        return -ENOMEM;
    }

    /* (1.2) Allocate the vmcoreinfo_note buffer */
    vmcoreinfo_note = alloc_pages_exact(VMCOREINFO_NOTE_SIZE,
                        GFP_KERNEL | __GFP_ZERO);
    if (!vmcoreinfo_note) {
        free_page((unsigned long)vmcoreinfo_data);
        vmcoreinfo_data = NULL;
        pr_warn("Memory allocation for vmcoreinfo_note failed\n");
        return -ENOMEM;
    }
    
    /* (2.1) Save the system's key information into vmcoreinfo_data as strings, using the VMCOREINFO_xxx family of macros */
    VMCOREINFO_OSRELEASE(init_uts_ns.name.release);
    VMCOREINFO_PAGESIZE(PAGE_SIZE);
    
    VMCOREINFO_SYMBOL(init_uts_ns);
    VMCOREINFO_SYMBOL(node_online_map);

    #ifdef CONFIG_MMU
    VMCOREINFO_SYMBOL_ARRAY(swapper_pg_dir);
    #endif
    VMCOREINFO_SYMBOL(_stext);
    VMCOREINFO_SYMBOL(vmap_area_list);

    #ifndef CONFIG_NEED_MULTIPLE_NODES
    VMCOREINFO_SYMBOL(mem_map);
    VMCOREINFO_SYMBOL(contig_page_data);
    #endif
    #ifdef CONFIG_SPARSEMEM
    VMCOREINFO_SYMBOL_ARRAY(mem_section);
    VMCOREINFO_LENGTH(mem_section, NR_SECTION_ROOTS);
    VMCOREINFO_STRUCT_SIZE(mem_section);
    VMCOREINFO_OFFSET(mem_section, section_mem_map);
    #endif
    VMCOREINFO_STRUCT_SIZE(page);
    VMCOREINFO_STRUCT_SIZE(pglist_data);
    VMCOREINFO_STRUCT_SIZE(zone);
    VMCOREINFO_STRUCT_SIZE(free_area);
    VMCOREINFO_STRUCT_SIZE(list_head);
    VMCOREINFO_SIZE(nodemask_t);
    VMCOREINFO_OFFSET(page, flags);
    VMCOREINFO_OFFSET(page, _refcount);
    VMCOREINFO_OFFSET(page, mapping);
    VMCOREINFO_OFFSET(page, lru);
    VMCOREINFO_OFFSET(page, _mapcount);
    VMCOREINFO_OFFSET(page, private);
    VMCOREINFO_OFFSET(page, compound_dtor);
    VMCOREINFO_OFFSET(page, compound_order);
    VMCOREINFO_OFFSET(page, compound_head);
    VMCOREINFO_OFFSET(pglist_data, node_zones);
    VMCOREINFO_OFFSET(pglist_data, nr_zones);
    #ifdef CONFIG_FLAT_NODE_MEM_MAP
    VMCOREINFO_OFFSET(pglist_data, node_mem_map);
    #endif
    VMCOREINFO_OFFSET(pglist_data, node_start_pfn);
    VMCOREINFO_OFFSET(pglist_data, node_spanned_pages);
    VMCOREINFO_OFFSET(pglist_data, node_id);
    VMCOREINFO_OFFSET(zone, free_area);
    VMCOREINFO_OFFSET(zone, vm_stat);
    VMCOREINFO_OFFSET(zone, spanned_pages);
    VMCOREINFO_OFFSET(free_area, free_list);
    VMCOREINFO_OFFSET(list_head, next);
    VMCOREINFO_OFFSET(list_head, prev);
    VMCOREINFO_OFFSET(vmap_area, va_start);
    VMCOREINFO_OFFSET(vmap_area, list);
    VMCOREINFO_LENGTH(zone.free_area, MAX_ORDER);
    log_buf_vmcoreinfo_setup();
    VMCOREINFO_LENGTH(free_area.free_list, MIGRATE_TYPES);
    VMCOREINFO_NUMBER(NR_FREE_PAGES);
    VMCOREINFO_NUMBER(PG_lru);
    VMCOREINFO_NUMBER(PG_private);
    VMCOREINFO_NUMBER(PG_swapcache);
    VMCOREINFO_NUMBER(PG_swapbacked);
    VMCOREINFO_NUMBER(PG_slab);
    #ifdef CONFIG_MEMORY_FAILURE
    VMCOREINFO_NUMBER(PG_hwpoison);
    #endif
    VMCOREINFO_NUMBER(PG_head_mask);
    #define PAGE_BUDDY_MAPCOUNT_VALUE (~PG_buddy)
    VMCOREINFO_NUMBER(PAGE_BUDDY_MAPCOUNT_VALUE);
    #ifdef CONFIG_HUGETLB_PAGE
    VMCOREINFO_NUMBER(HUGETLB_PAGE_DTOR);
    #define PAGE_OFFLINE_MAPCOUNT_VALUE (~PG_offline)
    VMCOREINFO_NUMBER(PAGE_OFFLINE_MAPCOUNT_VALUE);
    #endif

    /* (2.2) Add some architecture-specific vmcoreinfo */
    arch_crash_save_vmcoreinfo();
    
    /* (3) Store the data held in vmcoreinfo_data into vmcoreinfo_note in elf_note format */
    update_vmcoreinfo_note();
    
    return 0;
    }

  • 2. The second stage appends data after a panic has occurred:

    __crash_kexec() → crash_save_vmcoreinfo():

    void crash_save_vmcoreinfo(void)
    {
    if (!vmcoreinfo_note)
        return;

    /* Use the safe copy to generate vmcoreinfo note if have */
    if (vmcoreinfo_data_safecopy)
        vmcoreinfo_data = vmcoreinfo_data_safecopy;
    
    /* (1) Append the "CRASHTIME=xxx" entry */
    vmcoreinfo_append_str("CRASHTIME=%lld\n", ktime_get_real_seconds());
    update_vmcoreinfo_note();
    }

vmcoreinfo corresponds to the data read out with readelf -n:

$ readelf -n vmcore.202106170650 

Displaying notes found at file offset 0x00001000 with length 0x00000ac8:
  Owner                Data size     Description
  CORE                 0x00000150    NT_PRSTATUS (prstatus structure)
  CORE                 0x00000150    NT_PRSTATUS (prstatus structure)
  VMCOREINFO           0x000007e6    Unknown note type: (0x00000000)
   description data: 4f 53 52 45 4c 45 41 53 45 3d 35 2e 38 2e 30

// The description data decodes to the following ASCII:
OSRELEASE=5.8.0-43-generic
PAGESIZE=4096
SYMBOL(init_uts_ns)=ffffffffa5014620
SYMBOL(node_online_map)=ffffffffa5276720
SYMBOL(swapper_pg_dir)=ffffffffa500a000
SYMBOL(_stext)=ffffffffa3a00000
SYMBOL(vmap_area_list)=ffffffffa50f2560
SYMBOL(mem_section)=ffff91673ffd2000
LENGTH(mem_section)=2048
SIZE(mem_section)=16
OFFSET(mem_section.section_mem_map)=0
SIZE(page)=64
SIZE(pglist_data)=171968
SIZE(zone)=1472
SIZE(free_area)=88
...
CRASHTIME=1623937823

How does the prepared elf header get passed to the crash kernel? It is passed via the cmdline:

kexec_file_load() → kimage_file_alloc_init() → kimage_file_prepare_segments() → arch_kexec_kernel_image_load() → image->fops->load() → kexec_bzImage64_ops.load() → bzImage64_load() → setup_cmdline():

static int setup_cmdline(struct kimage *image, struct boot_params *params,
             unsigned long bootparams_load_addr,
             unsigned long cmdline_offset, char *cmdline,
             unsigned long cmdline_len)
{
    char *cmdline_ptr = ((char *)params) + cmdline_offset;
    unsigned long cmdline_ptr_phys, len = 0;
    uint32_t cmdline_low_32, cmdline_ext_32;

    /* (1) Put the "elfcorehdr=0x%lx " parameter at the front of the crash kernel's cmdline */
    if (image->type == KEXEC_TYPE_CRASH) {
        len = sprintf(cmdline_ptr,
            "elfcorehdr=0x%lx ", image->arch.elf_load_addr);
    }
    memcpy(cmdline_ptr + len, cmdline, cmdline_len);
    cmdline_len += len;

    cmdline_ptr[cmdline_len - 1] = '\0';

    pr_debug("Final command line is: %s\n", cmdline_ptr);
    cmdline_ptr_phys = bootparams_load_addr + cmdline_offset;
    cmdline_low_32 = cmdline_ptr_phys & 0xffffffffUL;
    cmdline_ext_32 = cmdline_ptr_phys >> 32;

    params->hdr.cmd_line_ptr = cmdline_low_32;
    if (cmdline_ext_32)
        params->ext_cmd_line_ptr = cmdline_ext_32;

    return 0;
}
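The net effect of setup_cmdline() can be sketched in userspace: the crash kernel's cmdline is simply the original cmdline with "elfcorehdr=<addr> " placed first. The helper name and the address below are invented for illustration.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch of what setup_cmdline() produces for a KEXEC_TYPE_CRASH image:
 * prepend "elfcorehdr=<addr> " to the original cmdline. Returns -1 if the
 * result would not fit in the destination buffer. */
static int build_crash_cmdline(char *dst, size_t cap,
                               unsigned long elf_load_addr,
                               const char *cmdline)
{
    int len = snprintf(dst, cap, "elfcorehdr=0x%lx ", elf_load_addr);

    if (len < 0 || (size_t)len + strlen(cmdline) + 1 > cap)
        return -1;
    strcpy(dst + len, cmdline);
    return 0;
}
```

This matches the "elfcorehdr=0x..." parameter visible in the kexec command printed by kdump-config show.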

After the normal kernel panics, it jumps to the crash kernel:

die() → crash_kexec() → __crash_kexec() → machine_kexec()

In the crash kernel, the first step is to pick up the elf header information for the vmcore file that the normal kernel passed over in the cmdline:

static int __init setup_elfcorehdr(char *arg)
{
    char *end;
    if (!arg)
        return -EINVAL;
    elfcorehdr_addr = memparse(arg, &end);
    if (*end == '@') {
        elfcorehdr_size = elfcorehdr_addr;
        elfcorehdr_addr = memparse(end + 1, &end);
    }
    return end > arg ? 0 : -EINVAL;
}
early_param("elfcorehdr", setup_elfcorehdr);
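setup_elfcorehdr() accepts either a bare address or "size@addr", relying on memparse(), which also understands K/M/G suffixes. The behavior can be sketched in userspace; memparse_lite and parse_elfcorehdr below are hypothetical stand-ins, not the kernel functions.

```c
#include <assert.h>
#include <stdlib.h>

/* A simplified memparse(): a number with an optional K/M/G suffix */
static unsigned long long memparse_lite(const char *s, char **end)
{
    unsigned long long v = strtoull(s, end, 0);

    switch (**end) {
    case 'G': case 'g': v <<= 30; (*end)++; break;
    case 'M': case 'm': v <<= 20; (*end)++; break;
    case 'K': case 'k': v <<= 10; (*end)++; break;
    }
    return v;
}

/* Mirror setup_elfcorehdr(): accept "addr" or "size@addr" */
static int parse_elfcorehdr(const char *arg, unsigned long long *addr,
                            unsigned long long *size)
{
    char *end;

    *size = 0;
    *addr = memparse_lite(arg, &end);
    if (*end == '@') {
        *size = *addr;               /* the first number was the size */
        *addr = memparse_lite(end + 1, &end);
    }
    return end > (char *)arg ? 0 : -1;
}
```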

The crash kernel then reads the vmcore file's elf header, parses it, and reorganizes the data:

static int __init vmcore_init(void)
{
    int rc = 0;

    /* Allow architectures to allocate ELF header in 2nd kernel */
    rc = elfcorehdr_alloc(&elfcorehdr_addr, &elfcorehdr_size);
    if (rc)
        return rc;
    /*
     * If elfcorehdr= has been passed in cmdline or created in 2nd kernel,
     * then capture the dump.
     */
    if (!(is_vmcore_usable()))
        return rc;
    /* (1) Parse the elf header information passed over from the normal kernel */
    rc = parse_crash_elf_headers();
    if (rc) {
        pr_warn("Kdump: vmcore not initialized\n");
        return rc;
    }
    elfcorehdr_free(elfcorehdr_addr);
    elfcorehdr_addr = ELFCORE_ADDR_ERR;

    /* (2) Create the /proc/vmcore file interface */
    proc_vmcore = proc_create("vmcore", S_IRUSR, NULL, &vmcore_proc_ops);
    if (proc_vmcore)
        proc_vmcore->size = vmcore_size;
    return 0;
}
fs_initcall(vmcore_init);

↓
parse_crash_elf_headers()
↓

static int __init parse_crash_elf64_headers(void)
{
    int rc=0;
    Elf64_Ehdr ehdr;
    u64 addr;

    addr = elfcorehdr_addr;

    /* Read Elf header */
    /* (1.1) Read the elf header information that was passed over.
            Note: this involves reading another system's memory, so we must first map
            the physical address with ioremap_cache() before we can read it.
            Many of the later reads are done the same way.
     */
    rc = elfcorehdr_read((char *)&ehdr, sizeof(Elf64_Ehdr), &addr);
    if (rc < 0)
        return rc;

    /* Do some basic Verification. */
    /* (1.2) Run some sanity checks on the elf header just read, in case it was corrupted */
    if (memcmp(ehdr.e_ident, ELFMAG, SELFMAG) != 0 ||
        (ehdr.e_type != ET_CORE) ||
        !vmcore_elf64_check_arch(&ehdr) ||
        ehdr.e_ident[EI_CLASS] != ELFCLASS64 ||
        ehdr.e_ident[EI_VERSION] != EV_CURRENT ||
        ehdr.e_version != EV_CURRENT ||
        ehdr.e_ehsize != sizeof(Elf64_Ehdr) ||
        ehdr.e_phentsize != sizeof(Elf64_Phdr) ||
        ehdr.e_phnum == 0) {
        pr_warn("Warning: Core image elf header is not sane\n");
        return -EINVAL;
    }

    /* Read in all elf headers. */
    /* (1.3) Allocate two buffers on the crash kernel and copy the data locally:
            elfcorebuf stores the elf header + elf program headers
            elfnotes_buf stores the PT_NOTE segment
     */
    elfcorebuf_sz_orig = sizeof(Elf64_Ehdr) +
                ehdr.e_phnum * sizeof(Elf64_Phdr);
    elfcorebuf_sz = elfcorebuf_sz_orig;
    elfcorebuf = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
                          get_order(elfcorebuf_sz_orig));
    if (!elfcorebuf)
        return -ENOMEM;
    addr = elfcorehdr_addr;
    /* (1.4) Read the whole elf header + elf program headers into elfcorebuf */
    rc = elfcorehdr_read(elfcorebuf, elfcorebuf_sz_orig, &addr);
    if (rc < 0)
        goto fail;

    /* Merge all PT_NOTE headers into one. */
    /* (1.5) Merge the multiple PT_NOTE segments into one, and copy the PT_NOTE data into elfnotes_buf */
    rc = merge_note_headers_elf64(elfcorebuf, &elfcorebuf_sz,
                      &elfnotes_buf, &elfnotes_sz);
    if (rc)
        goto fail;
    /* (1.6) Adjust each PT_LOAD program header so that every segment is page aligned */
    rc = process_ptload_program_headers_elf64(elfcorebuf, elfcorebuf_sz,
                          elfnotes_sz, &vmcore_list);
    if (rc)
        goto fail;

    /* (1.7) To match the page-alignment adjustment above, compute the offsets in the vmcore_list */
    set_vmcore_list_offsets(elfcorebuf_sz, elfnotes_sz, &vmcore_list);
    return 0;
fail:
    free_elfcorebuf();
    return rc;
}
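The sanity checks in step (1.2) stand alone and can be exercised in userspace. This is a hedged sketch: the architecture check (vmcore_elf64_check_arch) is omitted, and the function name is hypothetical.

```c
#include <assert.h>
#include <elf.h>
#include <string.h>

/* The sanity checks parse_crash_elf64_headers() applies before trusting the
 * header passed over from the crashed kernel (architecture check omitted). */
static int core_ehdr_is_sane(const Elf64_Ehdr *ehdr)
{
    return memcmp(ehdr->e_ident, ELFMAG, SELFMAG) == 0 &&
           ehdr->e_type == ET_CORE &&
           ehdr->e_ident[EI_CLASS] == ELFCLASS64 &&
           ehdr->e_ident[EI_VERSION] == EV_CURRENT &&
           ehdr->e_version == EV_CURRENT &&
           ehdr->e_ehsize == sizeof(Elf64_Ehdr) &&
           ehdr->e_phentsize == sizeof(Elf64_Phdr) &&
           ehdr->e_phnum != 0;
}
```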

↓

static int __init merge_note_headers_elf64(char *elfptr, size_t *elfsz,
                       char **notes_buf, size_t *notes_sz)
{
    int i, nr_ptnote=0, rc=0;
    char *tmp;
    Elf64_Ehdr *ehdr_ptr;
    Elf64_Phdr phdr;
    u64 phdr_sz = 0, note_off;

    ehdr_ptr = (Elf64_Ehdr *)elfptr;

    /* (1.5.1) Update the size of each individual PT_NOTE, dropping the trailing all-zero elf_note */
    rc = update_note_header_size_elf64(ehdr_ptr);
    if (rc < 0)
        return rc;

    /* (1.5.2) Compute the total size of all the PT_NOTE data combined */
    rc = get_note_number_and_size_elf64(ehdr_ptr, &nr_ptnote, &phdr_sz);
    if (rc < 0)
        return rc;

    *notes_sz = roundup(phdr_sz, PAGE_SIZE);
    *notes_buf = vmcore_alloc_buf(*notes_sz);
    if (!*notes_buf)
        return -ENOMEM;

    /* (1.5.3) Copy all the PT_NOTE data together into notes_buf */
    rc = copy_notes_elf64(ehdr_ptr, *notes_buf);
    if (rc < 0)
        return rc;

    /* Prepare merged PT_NOTE program header. */
    /* (1.5.4) Create a new PT_NOTE program header that addresses notes_buf */
    phdr.p_type    = PT_NOTE;
    phdr.p_flags   = 0;
    note_off = sizeof(Elf64_Ehdr) +
            (ehdr_ptr->e_phnum - nr_ptnote +1) * sizeof(Elf64_Phdr);
    phdr.p_offset  = roundup(note_off, PAGE_SIZE);
    phdr.p_vaddr   = phdr.p_paddr = 0;
    phdr.p_filesz  = phdr.p_memsz = phdr_sz;
    phdr.p_align   = 0;

    /* Add merged PT_NOTE program header*/
    /* (1.5.5) Copy in the new PT_NOTE program header */
    tmp = elfptr + sizeof(Elf64_Ehdr);
    memcpy(tmp, &phdr, sizeof(phdr));
    tmp += sizeof(phdr);

    /* Remove unwanted PT_NOTE program headers. */
    /* (1.5.6) Remove the PT_NOTE program headers that are no longer needed */
    i = (nr_ptnote - 1) * sizeof(Elf64_Phdr);
    *elfsz = *elfsz - i;
    memmove(tmp, tmp+i, ((*elfsz)-sizeof(Elf64_Ehdr)-sizeof(Elf64_Phdr)));
    memset(elfptr + *elfsz, 0, i);
    *elfsz = roundup(*elfsz, PAGE_SIZE);

    /* Modify e_phnum to reflect merged headers. */
    ehdr_ptr->e_phnum = ehdr_ptr->e_phnum - nr_ptnote + 1;

    /* Store the size of all notes.  We need this to update the note
     * header when the device dumps will be added.
     */
    elfnotes_orig_sz = phdr.p_memsz;

    return 0;
}
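The bookkeeping in steps (1.5.4) and (1.5.6) boils down to simple arithmetic: nr_ptnote PT_NOTE headers collapse into one, and the merged note data lands at the first page boundary past the program header table. A small sketch (the helper name is hypothetical):

```c
#include <assert.h>
#include <elf.h>

/* Mirror the note_off / e_phnum computation in merge_note_headers_elf64() */
static void merged_note_layout(int e_phnum, int nr_ptnote, unsigned long pagesz,
                               int *new_phnum, unsigned long *note_off)
{
    *new_phnum = e_phnum - nr_ptnote + 1;   /* nr_ptnote headers become one */
    *note_off = sizeof(Elf64_Ehdr) + *new_phnum * sizeof(Elf64_Phdr);
    *note_off = (*note_off + pagesz - 1) & ~(pagesz - 1); /* roundup to page */
}
```

For example, with e_phnum = 5 (two CPU notes, one vmcoreinfo note, two PT_LOADs) and nr_ptnote = 3, the merged table has 3 headers and the note data starts at the first page boundary.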

After the parsing above, the elf header data is essentially ready: elfcorebuf stores the elf header + elf program headers, and elfnotes_buf stores the merged PT_NOTE segment.

The elf core data can now be read out through read operations on the /proc/vmcore file:

static const struct proc_ops vmcore_proc_ops = {
    .proc_read  = read_vmcore,
    .proc_lseek = default_llseek,
    .proc_mmap  = mmap_vmcore,
};

↓
read_vmcore()
↓

static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
                 int userbuf)
{
    ssize_t acc = 0, tmp;
    size_t tsz;
    u64 start;
    struct vmcore *m = NULL;

    if (buflen == 0 || *fpos >= vmcore_size)
        return 0;

    /* trim buflen to not go beyond EOF */
    if (buflen > vmcore_size - *fpos)
        buflen = vmcore_size - *fpos;

    /* Read ELF core header */
    /* (1) Read the elf header + elf program headers from elfcorebuf and copy them to the user-space read buffer */
    if (*fpos < elfcorebuf_sz) {
        tsz = min(elfcorebuf_sz - (size_t)*fpos, buflen);
        if (copy_to(buffer, elfcorebuf + *fpos, tsz, userbuf))
            return -EFAULT;
        buflen -= tsz;
        *fpos += tsz;
        buffer += tsz;
        acc += tsz;

        /* leave now if filled buffer already */
        if (buflen == 0)
            return acc;
    }

    /* Read Elf note segment */
    /* (2) Read the PT_NOTE segment from elfnotes_buf and copy it to the user-space read buffer */
    if (*fpos < elfcorebuf_sz + elfnotes_sz) {
        void *kaddr;

        /* We add device dumps before other elf notes because the
         * other elf notes may not fill the elf notes buffer
         * completely and we will end up with zero-filled data
         * between the elf notes and the device dumps. Tools will
         * then try to decode this zero-filled data as valid notes
         * and we don't want that. Hence, adding device dumps before
         * the other elf notes ensure that zero-filled data can be
         * avoided.
         */
#ifdef CONFIG_PROC_VMCORE_DEVICE_DUMP
        /* Read device dumps */
        if (*fpos < elfcorebuf_sz + vmcoredd_orig_sz) {
            tsz = min(elfcorebuf_sz + vmcoredd_orig_sz -
                  (size_t)*fpos, buflen);
            start = *fpos - elfcorebuf_sz;
            if (vmcoredd_copy_dumps(buffer, start, tsz, userbuf))
                return -EFAULT;

            buflen -= tsz;
            *fpos += tsz;
            buffer += tsz;
            acc += tsz;

            /* leave now if filled buffer already */
            if (!buflen)
                return acc;
        }
#endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */

        /* Read remaining elf notes */
        tsz = min(elfcorebuf_sz + elfnotes_sz - (size_t)*fpos, buflen);
        kaddr = elfnotes_buf + *fpos - elfcorebuf_sz - vmcoredd_orig_sz;
        if (copy_to(buffer, kaddr, tsz, userbuf))
            return -EFAULT;

        buflen -= tsz;
        *fpos += tsz;
        buffer += tsz;
        acc += tsz;

        /* leave now if filled buffer already */
        if (buflen == 0)
            return acc;
    }

    /* (3) Read the PT_LOAD segments from the vmcore_list and copy them to the
            user-space read buffer. Each physical address must be mapped with
            ioremap_cache() before it can be read.
    */
    list_for_each_entry(m, &vmcore_list, list) {
        if (*fpos < m->offset + m->size) {
            tsz = (size_t)min_t(unsigned long long,
                        m->offset + m->size - *fpos,
                        buflen);
            start = m->paddr + *fpos - m->offset;
            tmp = read_from_oldmem(buffer, tsz, &start,
                           userbuf, mem_encrypt_active());
            if (tmp < 0)
                return tmp;
            buflen -= tsz;
            *fpos += tsz;
            buffer += tsz;
            acc += tsz;

            /* leave now if filled buffer already */
            if (buflen == 0)
                return acc;
        }
    }

    return acc;
}
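With /proc/vmcore exposed, any ordinary userspace reader works; crash and makedumpfile start by reading and validating the ELF header exactly as a plain read would. A minimal hedged sketch (the path is a parameter here because /proc/vmcore only exists inside the crash kernel; the function name is hypothetical):

```c
#include <assert.h>
#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Read the ELF header of a vmcore-style file and return its program header
 * count, or -1 on any error. In the crash kernel the path would normally be
 * "/proc/vmcore". */
static int vmcore_phnum(const char *path)
{
    Elf64_Ehdr ehdr;
    int fd = open(path, O_RDONLY);

    if (fd < 0)
        return -1;
    if (read(fd, &ehdr, sizeof(ehdr)) != (ssize_t)sizeof(ehdr) ||
        memcmp(ehdr.e_ident, ELFMAG, SELFMAG) != 0 ||
        ehdr.e_type != ET_CORE) {
        close(fd);
        return -1;
    }
    close(fd);
    return ehdr.e_phnum;
}
```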

