{ "schema_version": "1.4.0", "id": "GHSA-67wm-vqh6-323h", "modified": "2025-09-15T15:31:28Z", "published": "2025-09-15T15:31:28Z", "aliases": [ "CVE-2022-50338" ], "details": "In the Linux kernel, the following vulnerability has been resolved:\n\nbinder: fix UAF of alloc->vma in race with munmap()\n\nIn commit 720c24192404 (\"ANDROID: binder: change down_write to\ndown_read\") binder assumed the mmap read lock is sufficient to protect\nalloc->vma inside binder_update_page_range(). This used to be accurate\nuntil commit dd2283f2605e (\"mm: mmap: zap pages with read mmap_sem in\nmunmap\"), which now downgrades the mmap_lock after detaching the vma\nfrom the rbtree in munmap(). It then proceeds to tear down and free the\nvma with only the read lock held.\n\nThis means that accesses to alloc->vma in binder_update_page_range()\nwill now race with vm_area_free() in munmap() and can cause a UAF as\nshown in the following KASAN trace:\n\n ==================================================================\n BUG: KASAN: use-after-free in vm_insert_page+0x7c/0x1f0\n Read of size 8 at addr ffff16204ad00600 by task server/558\n\n CPU: 3 PID: 558 Comm: server Not tainted 5.10.150-00001-gdc8dcf942daa #1\n Hardware name: linux,dummy-virt (DT)\n Call trace:\n dump_backtrace+0x0/0x2a0\n show_stack+0x18/0x2c\n dump_stack+0xf8/0x164\n print_address_description.constprop.0+0x9c/0x538\n kasan_report+0x120/0x200\n __asan_load8+0xa0/0xc4\n vm_insert_page+0x7c/0x1f0\n binder_update_page_range+0x278/0x50c\n binder_alloc_new_buf+0x3f0/0xba0\n binder_transaction+0x64c/0x3040\n binder_thread_write+0x924/0x2020\n binder_ioctl+0x1610/0x2e5c\n __arm64_sys_ioctl+0xd4/0x120\n el0_svc_common.constprop.0+0xac/0x270\n do_el0_svc+0x38/0xa0\n el0_svc+0x1c/0x2c\n el0_sync_handler+0xe8/0x114\n el0_sync+0x180/0x1c0\n\n Allocated by task 559:\n kasan_save_stack+0x38/0x6c\n __kasan_kmalloc.constprop.0+0xe4/0xf0\n kasan_slab_alloc+0x18/0x2c\n kmem_cache_alloc+0x1b0/0x2d0\n vm_area_alloc+0x28/0x94\n 
mmap_region+0x378/0x920\n do_mmap+0x3f0/0x600\n vm_mmap_pgoff+0x150/0x17c\n ksys_mmap_pgoff+0x284/0x2dc\n __arm64_sys_mmap+0x84/0xa4\n el0_svc_common.constprop.0+0xac/0x270\n do_el0_svc+0x38/0xa0\n el0_svc+0x1c/0x2c\n el0_sync_handler+0xe8/0x114\n el0_sync+0x180/0x1c0\n\n Freed by task 560:\n kasan_save_stack+0x38/0x6c\n kasan_set_track+0x28/0x40\n kasan_set_free_info+0x24/0x4c\n __kasan_slab_free+0x100/0x164\n kasan_slab_free+0x14/0x20\n kmem_cache_free+0xc4/0x34c\n vm_area_free+0x1c/0x2c\n remove_vma+0x7c/0x94\n __do_munmap+0x358/0x710\n __vm_munmap+0xbc/0x130\n __arm64_sys_munmap+0x4c/0x64\n el0_svc_common.constprop.0+0xac/0x270\n do_el0_svc+0x38/0xa0\n el0_svc+0x1c/0x2c\n el0_sync_handler+0xe8/0x114\n el0_sync+0x180/0x1c0\n\n [...]\n ==================================================================\n\nTo prevent the race above, revert to taking the mmap write lock inside\nbinder_update_page_range(). One might expect an increase in mmap lock\ncontention. However, binder already serializes these calls via the\ntop-level alloc->mutex, and no performance impact was shown when\nrunning the binder benchmark tests.\n\nNote this patch is specific to stable branches 5.4 and 5.10, since in\nnewer kernel releases binder no longer caches a pointer to the vma;\ninstead, it has been refactored to use vma_lookup(), which avoids the\nissue described here. This switch was introduced in commit a43cfc87caaf\n(\"android: binder: stop saving a pointer to the VMA\").", "severity": [], "affected": [], "references": [ { "type": "ADVISORY", "url": "https://nvd.nist.gov/vuln/detail/CVE-2022-50338" }, { "type": "WEB", "url": "https://git.kernel.org/stable/c/27a594bc7a7c8238d239e3cdbcf2edfa3bbe9a1b" } ], "database_specific": { "cwe_ids": [], "severity": null, "github_reviewed": false, "github_reviewed_at": null, "nvd_published_at": "2025-09-15T15:15:46Z" } }