* [PATCH v2 0/6] ARM: Map sections RO/XN
From: Sascha Hauer @ 2025-06-17 14:28 UTC (permalink / raw)
To: Ahmad Fatoum, BAREBOX
So far we have mapped all RAM as read/write with execute permission. This
series hardens that a bit: the barebox text segment is now mapped
read-only with execute permission, the read-only data section read-only
without execute permission, and the remaining RAM loses its execute
permission.
I tested this series on ARMv5, ARMv7 and ARMv8. I am not confident
though that there are no regressions, so the new behaviour is behind a
Kconfig option. It is default-y, but can be disabled for debugging
purposes. Once the new behaviour has proven stable, the option can be removed.
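For illustration, the permission split in barebox proper boils down to
something like the sketch below. The map types are the ones introduced in
this series; the placeholder names (ram_start, text_start, ...) are not
actual variables, the real call sites are in __mmu_init() in patches 4 and 6:

	/* sketch only, placeholder names */
	remap_range(ram_start, ram_size, MAP_CACHED);               /* RAM:     RW, execute-never */
	remap_range(text_start, text_size, MAP_CODE);               /* .text:   RO, executable    */
	remap_range(rodata_start, rodata_size, ARCH_MAP_CACHED_RO); /* .rodata: RO, execute-never */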
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
Changes in v2:
- Tested and fixed for ARMv5
- merge create_pages() and create_sections() into one function (Ahmad)
- introduce function to create mapping flags based on CONFIG_ARM_MMU_PERMISSIONS
- Link to v1: https://lore.barebox.org/20250606-mmu-xn-ro-v1-0-7ee6ddd134d4@pengutronix.de
---
Sascha Hauer (6):
ARM: pass barebox base to mmu_early_enable()
ARM: mmu: move ARCH_MAP_WRITECOMBINE to header
ARM: MMU: map memory for barebox proper pagewise
ARM: MMU: map text segment ro and data segments execute never
ARM: MMU64: map memory for barebox proper pagewise
ARM: MMU64: map text segment ro and data segments execute never
arch/arm/Kconfig | 12 ++++++
arch/arm/cpu/lowlevel_32.S | 1 +
arch/arm/cpu/mmu-common.h | 20 +++++++++
arch/arm/cpu/mmu_32.c | 89 ++++++++++++++++++++++++++++++++--------
arch/arm/cpu/mmu_64.c | 82 +++++++++++++++++++++++++++---------
arch/arm/cpu/uncompress.c | 9 ++--
arch/arm/include/asm/mmu.h | 2 +-
arch/arm/include/asm/pgtable64.h | 1 +
arch/arm/lib32/barebox.lds.S | 3 +-
arch/arm/lib64/barebox.lds.S | 5 ++-
common/memory.c | 7 +++-
include/mmu.h | 1 +
12 files changed, 186 insertions(+), 46 deletions(-)
---
base-commit: 2ae2aa65792b5634e1e9b682fa7ef649569800d4
change-id: 20250606-mmu-xn-ro-e2a4c4b080a4
Best regards,
--
Sascha Hauer <s.hauer@pengutronix.de>
* [PATCH v2 1/6] ARM: pass barebox base to mmu_early_enable()
From: Sascha Hauer @ 2025-06-17 14:28 UTC (permalink / raw)
To: Ahmad Fatoum, BAREBOX
We'll need the barebox base in the next patches to map the barebox
area differently.
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_32.c | 2 +-
arch/arm/cpu/mmu_64.c | 2 +-
arch/arm/cpu/uncompress.c | 9 ++++-----
arch/arm/include/asm/mmu.h | 2 +-
4 files changed, 7 insertions(+), 8 deletions(-)
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 40462f6fa5cf5735c628a069a660fcce018648fe..2c2144327380c7cbb6c7426212d94a4a9ba5f70f 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -614,7 +614,7 @@ void *dma_alloc_writecombine(struct device *dev, size_t size, dma_addr_t *dma_ha
return dma_alloc_map(dev, size, dma_handle, ARCH_MAP_WRITECOMBINE);
}
-void mmu_early_enable(unsigned long membase, unsigned long memsize)
+void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned long barebox_start)
{
uint32_t *ttb = (uint32_t *)arm_mem_ttb(membase + memsize);
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index d72ff020f48546bff37235b9a71624e8667c7af9..cb0803400cfd6a55f0b34dea823576d6593e96f1 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -435,7 +435,7 @@ static void init_range(size_t total_level0_tables)
}
}
-void mmu_early_enable(unsigned long membase, unsigned long memsize)
+void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned long barebox_start)
{
int el;
u64 optee_membase;
diff --git a/arch/arm/cpu/uncompress.c b/arch/arm/cpu/uncompress.c
index 4657a4828e67e1b0acfa9dec3aef33bc4c525468..e24e754e0b16bda4974df0dd754936e87fe59d2d 100644
--- a/arch/arm/cpu/uncompress.c
+++ b/arch/arm/cpu/uncompress.c
@@ -63,11 +63,6 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
pr_debug("memory at 0x%08lx, size 0x%08lx\n", membase, memsize);
- if (IS_ENABLED(CONFIG_MMU))
- mmu_early_enable(membase, memsize);
- else if (IS_ENABLED(CONFIG_ARMV7R_MPU))
- set_cr(get_cr() | CR_C);
-
/* Add handoff data now, so arm_mem_barebox_image takes it into account */
if (boarddata)
handoff_data_add_dt(boarddata);
@@ -83,6 +78,10 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
#ifdef DEBUG
print_pbl_mem_layout(membase, endmem, barebox_base);
#endif
+ if (IS_ENABLED(CONFIG_MMU))
+ mmu_early_enable(membase, memsize, barebox_base);
+ else if (IS_ENABLED(CONFIG_ARMV7R_MPU))
+ set_cr(get_cr() | CR_C);
pr_debug("uncompressing barebox binary at 0x%p (size 0x%08x) to 0x%08lx (uncompressed size: 0x%08x)\n",
pg_start, pg_len, barebox_base, uncompressed_len);
diff --git a/arch/arm/include/asm/mmu.h b/arch/arm/include/asm/mmu.h
index ebf1e096c68295858bc8f3e6ab0e6f2dd6512717..5538cd3558e8e6c069923f7f6ccfc38e7df87c9f 100644
--- a/arch/arm/include/asm/mmu.h
+++ b/arch/arm/include/asm/mmu.h
@@ -64,7 +64,7 @@ void __dma_clean_range(unsigned long, unsigned long);
void __dma_flush_range(unsigned long, unsigned long);
void __dma_inv_range(unsigned long, unsigned long);
-void mmu_early_enable(unsigned long membase, unsigned long memsize);
+void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned long barebox_base);
void mmu_early_disable(void);
#endif /* __ASM_MMU_H */
--
2.39.5
* [PATCH v2 2/6] ARM: mmu: move ARCH_MAP_WRITECOMBINE to header
From: Sascha Hauer @ 2025-06-17 14:28 UTC (permalink / raw)
To: Ahmad Fatoum, BAREBOX
ARCH_MAP_WRITECOMBINE is defined identically for mmu_32 and mmu_64, and
we'll add more mapping types later, so move it to a header file shared
by both MMU implementations.
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu-common.h | 2 ++
arch/arm/cpu/mmu_32.c | 1 -
arch/arm/cpu/mmu_64.c | 2 --
3 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/arm/cpu/mmu-common.h b/arch/arm/cpu/mmu-common.h
index d0b50662570a48fafd47edac9fbc12fe2a62458a..0f11a4b73d1199ec2400f64a2f057cf940d4ff2d 100644
--- a/arch/arm/cpu/mmu-common.h
+++ b/arch/arm/cpu/mmu-common.h
@@ -9,6 +9,8 @@
#include <linux/kernel.h>
#include <linux/sizes.h>
+#define ARCH_MAP_WRITECOMBINE ((unsigned)-1)
+
struct device;
void dma_inv_range(void *ptr, size_t size);
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 2c2144327380c7cbb6c7426212d94a4a9ba5f70f..104780ff6b9827982e16c1e2b8ac8aae6e4b5c6a 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -23,7 +23,6 @@
#include "mmu_32.h"
#define PTRS_PER_PTE (PGDIR_SIZE / PAGE_SIZE)
-#define ARCH_MAP_WRITECOMBINE ((unsigned)-1)
static inline uint32_t *get_ttb(void)
{
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index cb0803400cfd6a55f0b34dea823576d6593e96f1..121dd136af33967e516b66091a6c671d45e6e119 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -24,8 +24,6 @@
#include "mmu_64.h"
-#define ARCH_MAP_WRITECOMBINE ((unsigned)-1)
-
static uint64_t *get_ttb(void)
{
return (uint64_t *)get_ttbr(current_el());
--
2.39.5
* [PATCH v2 3/6] ARM: MMU: map memory for barebox proper pagewise
From: Sascha Hauer @ 2025-06-17 14:28 UTC (permalink / raw)
To: Ahmad Fatoum, BAREBOX
Map the remainder of the memory explicitly with two-level page tables. This is
the region barebox proper ends up in. In barebox proper we'll remap the code
segment read-only/executable and the read-only data segment read-only/execute-never.
For this the memory needs to be mapped pagewise. We can't split the section-wise
mapping up into a pagewise mapping later, because that would require a
break-before-make sequence, which we can't do while barebox proper is running
at the location being remapped.
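To see why, converting a live 1 MiB section into a second-level table would
require roughly the following break-before-make sequence (illustrative
pseudo-code, names approximate, not part of this series):

	*pgd = 0;                           /* 1. remove the old section entry       */
	tlb_invalidate();                   /* 2. drop any stale TLB entry for it    */
	*pgd = table_addr | PMD_TYPE_TABLE; /* 3. install the new second-level table */

Between steps 1 and 3 every access to the affected 1 MiB region faults,
including instruction fetches, so this cannot be done for the region barebox
proper itself executes from.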
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_32.c | 37 +++++++++++++++++++++++++++++--------
1 file changed, 29 insertions(+), 8 deletions(-)
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 104780ff6b9827982e16c1e2b8ac8aae6e4b5c6a..b21fc75f0ceb0c50f5190662a6cd674b1bd38ced 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -247,7 +247,8 @@ static uint32_t get_pmd_flags(int map_type)
return pte_flags_to_pmd(get_pte_flags(map_type));
}
-static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t size, unsigned map_type)
+static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t size,
+ unsigned map_type, bool force_pages)
{
u32 virt_addr = (u32)_virt_addr;
u32 pte_flags, pmd_flags;
@@ -268,7 +269,7 @@ static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t s
if (size >= PGDIR_SIZE && pgdir_size_aligned &&
IS_ALIGNED(phys_addr, PGDIR_SIZE) &&
- !pgd_type_table(*pgd)) {
+ !pgd_type_table(*pgd) && !force_pages) {
u32 val;
/*
* TODO: Add code to discard a page table and
@@ -339,14 +340,15 @@ static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t s
tlb_invalidate();
}
-static void early_remap_range(u32 addr, size_t size, unsigned map_type)
+
+static void early_remap_range(u32 addr, size_t size, unsigned map_type, bool force_pages)
{
- __arch_remap_range((void *)addr, addr, size, map_type);
+ __arch_remap_range((void *)addr, addr, size, map_type, force_pages);
}
int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size, unsigned map_type)
{
- __arch_remap_range(virt_addr, phys_addr, size, map_type);
+ __arch_remap_range(virt_addr, phys_addr, size, map_type, false);
if (map_type == MAP_UNCACHED)
dma_inv_range(virt_addr, size);
@@ -616,6 +618,7 @@ void *dma_alloc_writecombine(struct device *dev, size_t size, dma_addr_t *dma_ha
void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned long barebox_start)
{
uint32_t *ttb = (uint32_t *)arm_mem_ttb(membase + memsize);
+ unsigned long barebox_size, optee_start;
pr_debug("enabling MMU, ttb @ 0x%p\n", ttb);
@@ -637,9 +640,27 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
create_flat_mapping();
/* maps main memory as cachable */
- early_remap_range(membase, memsize - OPTEE_SIZE, MAP_CACHED);
- early_remap_range(membase + memsize - OPTEE_SIZE, OPTEE_SIZE, MAP_UNCACHED);
- early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext), MAP_CACHED);
+ optee_start = membase + memsize - OPTEE_SIZE;
+ barebox_size = optee_start - barebox_start;
+
+ /*
+ * map the bulk of the memory as sections to avoid allocating too many page tables
+ * at this early stage
+ */
+ early_remap_range(membase, barebox_start - membase, MAP_CACHED, false);
+ /*
+ * Map the remainder of the memory explicitly with two level page tables. This is
+ * the place where barebox proper ends at. In barebox proper we'll remap the code
+ * segments readonly/executable and the ro segments readonly/execute never. For this
+ * we need the memory being mapped pagewise. We can't do the split up from section
+ * wise mapping to pagewise mapping later because that would require us to do
+ * a break-before-make sequence which we can't do when barebox proper is running
+ * at the location being remapped.
+ */
+ early_remap_range(barebox_start, barebox_size, MAP_CACHED, true);
+ early_remap_range(optee_start, OPTEE_SIZE, MAP_UNCACHED, false);
+ early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext),
+ MAP_CACHED, false);
__mmu_cache_on();
}
--
2.39.5
* [PATCH v2 4/6] ARM: MMU: map text segment ro and data segments execute never
From: Sascha Hauer @ 2025-06-17 14:28 UTC (permalink / raw)
To: Ahmad Fatoum, BAREBOX
With this, all segments in the DRAM except the text segment are mapped
execute-never, so only the barebox code can actually be executed.
The read-only data segment is also mapped read-only so that it can't be
modified.
The new mapping is only implemented in barebox proper. The PBL still maps
the whole DRAM RWX.
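Should the stricter mapping cause problems on a particular board, it can be
switched off without reverting the series; arm_mmu_maybe_skip_permissions()
then degrades the new map types back to a plain cached RWX mapping. For
debugging, that means:

	# in .config, for debugging only:
	# CONFIG_ARM_MMU_PERMISSIONS is not set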
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/Kconfig | 12 ++++++++++
arch/arm/cpu/lowlevel_32.S | 1 +
arch/arm/cpu/mmu-common.h | 18 +++++++++++++++
arch/arm/cpu/mmu_32.c | 55 ++++++++++++++++++++++++++++++++++++--------
arch/arm/lib32/barebox.lds.S | 3 ++-
common/memory.c | 7 +++++-
include/mmu.h | 1 +
7 files changed, 85 insertions(+), 12 deletions(-)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index d283ef7793a1f13e6428d3b1037460cb806b566f..a67afed02c45f6736628e33ba8ddc69d3f854ba4 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -387,6 +387,18 @@ config ARM_UNWIND
the performance is not affected. Currently, this feature
only works with EABI compilers. If unsure say Y.
+config ARM_MMU_PERMISSIONS
+ bool "Map with extended RO/X permissions"
+ default y
+ help
+ Enable this option to map readonly sections as readonly, executable
+ sections as readonly/executable and the remainder of the SDRAM as
+ read/write/non-executable.
+ Traditionally barebox maps the whole SDRAM as read/write/execute.
+ You get this behaviour by disabling this option which is meant as
+ a debugging facility. It can go away once the extended permission
settings have proven to work reliably.
+
config ARM_SEMIHOSTING
bool "enable ARM semihosting support"
select SEMIHOSTING
diff --git a/arch/arm/cpu/lowlevel_32.S b/arch/arm/cpu/lowlevel_32.S
index 960a92b78c0adaf815948517ba917ae85ae65e27..5d524faf9cff9a8b545044169b8255279dd8ab0b 100644
--- a/arch/arm/cpu/lowlevel_32.S
+++ b/arch/arm/cpu/lowlevel_32.S
@@ -70,6 +70,7 @@ THUMB( orr r12, r12, #PSR_T_BIT )
orr r12, r12, #CR_U
bic r12, r12, #CR_A
#else
+ orr r12, r12, #CR_S
orr r12, r12, #CR_A
#endif
diff --git a/arch/arm/cpu/mmu-common.h b/arch/arm/cpu/mmu-common.h
index 0f11a4b73d1199ec2400f64a2f057cf940d4ff2d..ac11a87be4165079457bd73682bfbd909c3d1c31 100644
--- a/arch/arm/cpu/mmu-common.h
+++ b/arch/arm/cpu/mmu-common.h
@@ -3,6 +3,7 @@
#ifndef __ARM_MMU_COMMON_H
#define __ARM_MMU_COMMON_H
+#include <mmu.h>
#include <printf.h>
#include <linux/types.h>
#include <linux/ioport.h>
@@ -10,6 +11,8 @@
#include <linux/sizes.h>
#define ARCH_MAP_WRITECOMBINE ((unsigned)-1)
+#define ARCH_MAP_CACHED_RWX ((unsigned)-2)
+#define ARCH_MAP_CACHED_RO ((unsigned)-3)
struct device;
@@ -18,6 +21,21 @@ void dma_flush_range(void *ptr, size_t size);
void *dma_alloc_map(struct device *dev, size_t size, dma_addr_t *dma_handle, unsigned flags);
void __mmu_init(bool mmu_on);
+static inline unsigned arm_mmu_maybe_skip_permissions(unsigned map_type)
+{
+ if (IS_ENABLED(CONFIG_ARM_MMU_PERMISSIONS))
+ return map_type;
+
+ switch (map_type) {
+ case MAP_CODE:
+ case MAP_CACHED:
+ case ARCH_MAP_CACHED_RO:
+ return ARCH_MAP_CACHED_RWX;
+ default:
+ return map_type;
+ }
+}
+
static inline void arm_mmu_not_initialized_error(void)
{
/*
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index b21fc75f0ceb0c50f5190662a6cd674b1bd38ced..67f1fe59886a176dddd385d20205f0bdf53d7244 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -19,6 +19,7 @@
#include <asm/system_info.h>
#include <asm/sections.h>
#include <linux/pagemap.h>
+#include <range.h>
#include "mmu_32.h"
@@ -47,11 +48,18 @@ static inline void tlb_invalidate(void)
);
}
+#define PTE_FLAGS_CACHED_V7_RWX (PTE_EXT_TEX(1) | PTE_BUFFERABLE | PTE_CACHEABLE | \
+ PTE_EXT_AP_URW_SRW)
#define PTE_FLAGS_CACHED_V7 (PTE_EXT_TEX(1) | PTE_BUFFERABLE | PTE_CACHEABLE | \
- PTE_EXT_AP_URW_SRW)
+ PTE_EXT_AP_URW_SRW | PTE_EXT_XN)
+#define PTE_FLAGS_CACHED_RO_V7 (PTE_EXT_TEX(1) | PTE_BUFFERABLE | PTE_CACHEABLE | \
+ PTE_EXT_APX | PTE_EXT_AP0 | PTE_EXT_AP1 | PTE_EXT_XN)
+#define PTE_FLAGS_CODE_V7 (PTE_EXT_TEX(1) | PTE_BUFFERABLE | PTE_CACHEABLE | \
+ PTE_EXT_APX | PTE_EXT_AP0 | PTE_EXT_AP1)
#define PTE_FLAGS_WC_V7 (PTE_EXT_TEX(1) | PTE_EXT_AP_URW_SRW | PTE_EXT_XN)
#define PTE_FLAGS_UNCACHED_V7 (PTE_EXT_AP_URW_SRW | PTE_EXT_XN)
#define PTE_FLAGS_CACHED_V4 (PTE_SMALL_AP_UNO_SRW | PTE_BUFFERABLE | PTE_CACHEABLE)
+#define PTE_FLAGS_CACHED_RO_V4 (PTE_SMALL_AP_UNO_SRO | PTE_CACHEABLE)
#define PTE_FLAGS_UNCACHED_V4 PTE_SMALL_AP_UNO_SRW
#define PGD_FLAGS_WC_V7 (PMD_SECT_TEX(1) | PMD_SECT_DEF_UNCACHED | \
PMD_SECT_BUFFERABLE | PMD_SECT_XN)
@@ -208,7 +216,9 @@ static u32 pte_flags_to_pmd(u32 pte)
/* AP[2] */
pmd |= ((pte >> 9) & 0x1) << 15;
} else {
- pmd |= PMD_SECT_AP_WRITE | PMD_SECT_AP_READ;
+ pmd |= PMD_SECT_AP_READ;
+ if (pte & PTE_SMALL_AP_MASK)
+ pmd |= PMD_SECT_AP_WRITE;
}
return pmd;
@@ -218,10 +228,16 @@ static uint32_t get_pte_flags(int map_type)
{
if (cpu_architecture() >= CPU_ARCH_ARMv7) {
switch (map_type) {
+ case ARCH_MAP_CACHED_RWX:
+ return PTE_FLAGS_CACHED_V7_RWX;
+ case ARCH_MAP_CACHED_RO:
+ return PTE_FLAGS_CACHED_RO_V7;
case MAP_CACHED:
return PTE_FLAGS_CACHED_V7;
case MAP_UNCACHED:
return PTE_FLAGS_UNCACHED_V7;
+ case MAP_CODE:
+ return PTE_FLAGS_CODE_V7;
case ARCH_MAP_WRITECOMBINE:
return PTE_FLAGS_WC_V7;
case MAP_FAULT:
@@ -230,6 +246,10 @@ static uint32_t get_pte_flags(int map_type)
}
} else {
switch (map_type) {
+ case ARCH_MAP_CACHED_RO:
+ case MAP_CODE:
+ return PTE_FLAGS_CACHED_RO_V4;
+ case ARCH_MAP_CACHED_RWX:
case MAP_CACHED:
return PTE_FLAGS_CACHED_V4;
case MAP_UNCACHED:
@@ -260,6 +280,8 @@ static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t s
pte_flags = get_pte_flags(map_type);
pmd_flags = pte_flags_to_pmd(pte_flags);
+ pr_debug("%s: 0x%08x 0x%08x type %d\n", __func__, virt_addr, size, map_type);
+
size = PAGE_ALIGN(size);
while (size) {
@@ -348,6 +370,8 @@ static void early_remap_range(u32 addr, size_t size, unsigned map_type, bool for
int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size, unsigned map_type)
{
+ map_type = arm_mmu_maybe_skip_permissions(map_type);
+
__arch_remap_range(virt_addr, phys_addr, size, map_type, false);
if (map_type == MAP_UNCACHED)
@@ -555,6 +579,12 @@ void __mmu_init(bool mmu_on)
{
struct memory_bank *bank;
uint32_t *ttb = get_ttb();
+ unsigned long text_start = (unsigned long)&_stext;
+ unsigned long code_start = text_start;
+ unsigned long code_size = (unsigned long)&__start_rodata - (unsigned long)&_stext;
+ unsigned long text_size = (unsigned long)&_etext - text_start;
+ unsigned long rodata_start = (unsigned long)&__start_rodata;
+ unsigned long rodata_size = (unsigned long)&__end_rodata - rodata_start;
// TODO: remap writable only while remapping?
// TODO: What memtype for ttb when barebox is EFI loader?
@@ -591,10 +621,19 @@ void __mmu_init(bool mmu_on)
pos = rsv->end + 1;
}
+ if (region_overlap_size(pos, bank->start + bank->size - pos, text_start, text_size)) {
+ remap_range((void *)pos, code_start - pos, MAP_CACHED);
+ /* skip barebox segments here, will be mapped below */
+ pos = text_start + text_size;
+ }
+
remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
}
vectors_init();
+
+ remap_range((void *)code_start, code_size, MAP_CODE);
+ remap_range((void *)rodata_start, rodata_size, ARCH_MAP_CACHED_RO);
}
/*
@@ -627,11 +666,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
set_ttbr(ttb);
- /* For the XN bit to take effect, we can't be using DOMAIN_MANAGER. */
- if (cpu_architecture() >= CPU_ARCH_ARMv7)
- set_domain(DOMAIN_CLIENT);
- else
- set_domain(DOMAIN_MANAGER);
+ set_domain(DOMAIN_CLIENT);
/*
* This marks the whole address space as uncachable as well as
@@ -647,7 +682,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
* map the bulk of the memory as sections to avoid allocating too many page tables
* at this early stage
*/
- early_remap_range(membase, barebox_start - membase, MAP_CACHED, false);
+ early_remap_range(membase, barebox_start - membase, ARCH_MAP_CACHED_RWX, false);
/*
* Map the remainder of the memory explicitly with two level page tables. This is
* the place where barebox proper ends at. In barebox proper we'll remap the code
@@ -657,10 +692,10 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
* a break-before-make sequence which we can't do when barebox proper is running
* at the location being remapped.
*/
- early_remap_range(barebox_start, barebox_size, MAP_CACHED, true);
+ early_remap_range(barebox_start, barebox_size, ARCH_MAP_CACHED_RWX, true);
early_remap_range(optee_start, OPTEE_SIZE, MAP_UNCACHED, false);
early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext),
- MAP_CACHED, false);
+ ARCH_MAP_CACHED_RWX, false);
__mmu_cache_on();
}
diff --git a/arch/arm/lib32/barebox.lds.S b/arch/arm/lib32/barebox.lds.S
index a52556a35696aea6f15cad5fd3f0275e8e6349b1..dbfdd2e9c110133f7fb45e06911bfc9ea9e8299c 100644
--- a/arch/arm/lib32/barebox.lds.S
+++ b/arch/arm/lib32/barebox.lds.S
@@ -30,7 +30,7 @@ SECTIONS
}
BAREBOX_BARE_INIT_SIZE
- . = ALIGN(4);
+ . = ALIGN(4096);
__start_rodata = .;
.rodata : {
*(.rodata*)
@@ -53,6 +53,7 @@ SECTIONS
__stop_unwind_tab = .;
}
#endif
+ . = ALIGN(4096);
__end_rodata = .;
_etext = .;
_sdata = .;
diff --git a/common/memory.c b/common/memory.c
index 57f58026df8e0ab52f4c27815b251e75cf30c7dc..bee55bd647e115c5c9f67bfd263e1887b58a78af 100644
--- a/common/memory.c
+++ b/common/memory.c
@@ -125,9 +125,14 @@ static int mem_malloc_resource(void)
MEMTYPE_BOOT_SERVICES_DATA, MEMATTRS_RW);
request_barebox_region("barebox code",
(unsigned long)&_stext,
- (unsigned long)&_etext -
+ (unsigned long)&__start_rodata -
(unsigned long)&_stext,
MEMATTRS_RX);
+ request_barebox_region("barebox RO data",
+ (unsigned long)&__start_rodata,
+ (unsigned long)&__end_rodata -
+ (unsigned long)&__start_rodata,
+ MEMATTRS_RO);
request_barebox_region("barebox data",
(unsigned long)&_sdata,
(unsigned long)&_edata -
diff --git a/include/mmu.h b/include/mmu.h
index 84ec6c5efb3eb8020fdc98e76a3614c137a0f8e9..20855e89eda301527b8cd69d868d58fc79637f5e 100644
--- a/include/mmu.h
+++ b/include/mmu.h
@@ -8,6 +8,7 @@
#define MAP_UNCACHED 0
#define MAP_CACHED 1
#define MAP_FAULT 2
+#define MAP_CODE 3
/*
* Depending on the architecture the default mapping can be
--
2.39.5
* [PATCH v2 5/6] ARM: MMU64: map memory for barebox proper pagewise
From: Sascha Hauer @ 2025-06-17 14:28 UTC (permalink / raw)
To: Ahmad Fatoum, BAREBOX
Map the remainder of the memory explicitly with two-level page tables. This is
the region barebox proper ends up in. In barebox proper we'll remap the code
segment read-only/executable and the read-only data segment read-only/execute-never.
For this the memory needs to be mapped pagewise. We can't split the section-wise
mapping up into a pagewise mapping later, because that would require a
break-before-make sequence, which we can't do while barebox proper is running
at the location being remapped.
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_64.c | 48 ++++++++++++++++++++++++++++++++++--------------
1 file changed, 34 insertions(+), 14 deletions(-)
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index 121dd136af33967e516b66091a6c671d45e6e119..7e46201bbaae06dd4f2f3bd194db93d83401bfc9 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -10,6 +10,7 @@
#include <init.h>
#include <mmu.h>
#include <errno.h>
+#include <range.h>
#include <zero_page.h>
#include <linux/sizes.h>
#include <asm/memory.h>
@@ -128,7 +129,7 @@ static void split_block(uint64_t *pte, int level)
}
static void create_sections(uint64_t virt, uint64_t phys, uint64_t size,
- uint64_t attr)
+ uint64_t attr, bool force_pages)
{
uint64_t *ttb = get_ttb();
uint64_t block_size;
@@ -149,19 +150,25 @@ static void create_sections(uint64_t virt, uint64_t phys, uint64_t size,
while (size) {
table = ttb;
for (level = 0; level < 4; level++) {
+ bool finish = false;
block_shift = level2shift(level);
idx = (addr & level2mask(level)) >> block_shift;
block_size = (1ULL << block_shift);
pte = table + idx;
- if (size >= block_size && IS_ALIGNED(addr, block_size) &&
- IS_ALIGNED(phys, block_size)) {
+ if (force_pages) {
+ if (level == 3)
+ finish = true;
+ } else if (size >= block_size && IS_ALIGNED(addr, block_size) &&
+ IS_ALIGNED(phys, block_size)) {
+ finish = true;
+ }
+
+ if (finish) {
type = (level == 3) ?
PTE_TYPE_PAGE : PTE_TYPE_BLOCK;
-
- /* TODO: break-before-make missing */
- set_pte(pte, phys | attr | type);
+ *pte = phys | attr | type;
addr += block_size;
phys += block_size;
size -= block_size;
@@ -297,14 +304,14 @@ static unsigned long get_pte_attrs(unsigned flags)
}
}
-static void early_remap_range(uint64_t addr, size_t size, unsigned flags)
+static void early_remap_range(uint64_t addr, size_t size, unsigned flags, bool force_pages)
{
unsigned long attrs = get_pte_attrs(flags);
if (WARN_ON(attrs == ~0UL))
return;
- create_sections(addr, addr, size, attrs);
+ create_sections(addr, addr, size, attrs, force_pages);
}
int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size, unsigned flags)
@@ -317,7 +324,7 @@ int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size, unsign
if (flags != MAP_CACHED)
flush_cacheable_pages(virt_addr, size);
- create_sections((uint64_t)virt_addr, phys_addr, (uint64_t)size, attrs);
+ create_sections((uint64_t)virt_addr, phys_addr, (uint64_t)size, attrs, false);
return 0;
}
@@ -426,7 +433,7 @@ static void init_range(size_t total_level0_tables)
uint64_t addr = 0;
while (total_level0_tables--) {
- early_remap_range(addr, L0_XLAT_SIZE, MAP_UNCACHED);
+ early_remap_range(addr, L0_XLAT_SIZE, MAP_UNCACHED, false);
split_block(ttb, 0);
addr += L0_XLAT_SIZE;
ttb++;
@@ -437,6 +444,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
{
int el;
u64 optee_membase;
+ unsigned long barebox_size;
unsigned long ttb = arm_mem_ttb(membase + memsize);
if (get_cr() & CR_M)
@@ -457,14 +465,26 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
*/
init_range(2);
- early_remap_range(membase, memsize, MAP_CACHED);
+ early_remap_range(membase, memsize, MAP_CACHED, false);
- if (optee_get_membase(&optee_membase))
+ if (optee_get_membase(&optee_membase)) {
optee_membase = membase + memsize - OPTEE_SIZE;
- early_remap_range(optee_membase, OPTEE_SIZE, MAP_FAULT);
+ barebox_size = optee_membase - barebox_start;
+
+ early_remap_range(optee_membase - barebox_size, barebox_size,
+ get_pte_attrs(ARCH_MAP_CACHED_RWX), true);
+ } else {
+ barebox_size = membase + memsize - barebox_start;
+
+ early_remap_range(membase + memsize - barebox_size, barebox_size,
+ get_pte_attrs(ARCH_MAP_CACHED_RWX), true);
+ }
+
+ early_remap_range(optee_membase, OPTEE_SIZE, MAP_FAULT, false);
- early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext), MAP_CACHED);
+ early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext),
+ MAP_CACHED, false);
mmu_enable();
}
--
2.39.5
* [PATCH v2 6/6] ARM: MMU64: map text segment ro and data segments execute never
From: Sascha Hauer @ 2025-06-17 14:28 UTC (permalink / raw)
To: Ahmad Fatoum, BAREBOX
With this, all segments in the DRAM except the text segment are mapped
execute-never, so only the barebox code can actually be executed.
The read-only data segment is also mapped read-only so that it can't be
modified.
The new mapping is only implemented in barebox proper. The PBL still maps
the whole DRAM RWX.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_64.c | 34 ++++++++++++++++++++++++++++++----
arch/arm/include/asm/pgtable64.h | 1 +
arch/arm/lib64/barebox.lds.S | 5 +++--
3 files changed, 34 insertions(+), 6 deletions(-)
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index 7e46201bbaae06dd4f2f3bd194db93d83401bfc9..4c23cb3056d24799e2c0aec9ee567165fca06ce0 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -292,13 +292,19 @@ static unsigned long get_pte_attrs(unsigned flags)
{
switch (flags) {
case MAP_CACHED:
- return CACHED_MEM;
+ return attrs_xn() | CACHED_MEM;
case MAP_UNCACHED:
return attrs_xn() | UNCACHED_MEM;
case MAP_FAULT:
return 0x0;
case ARCH_MAP_WRITECOMBINE:
return attrs_xn() | MEM_ALLOC_WRITECOMBINE;
+ case MAP_CODE:
+ return CACHED_MEM | PTE_BLOCK_RO;
+ case ARCH_MAP_CACHED_RO:
+ return attrs_xn() | CACHED_MEM | PTE_BLOCK_RO;
+ case ARCH_MAP_CACHED_RWX:
+ return CACHED_MEM;
default:
return ~0UL;
}
@@ -316,7 +322,11 @@ static void early_remap_range(uint64_t addr, size_t size, unsigned flags, bool f
int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size, unsigned flags)
{
- unsigned long attrs = get_pte_attrs(flags);
+ unsigned long attrs;
+
+ flags = arm_mmu_maybe_skip_permissions(flags);
+
+ attrs = get_pte_attrs(flags);
if (attrs == ~0UL)
return -EINVAL;
@@ -357,6 +367,12 @@ void __mmu_init(bool mmu_on)
{
uint64_t *ttb = get_ttb();
struct memory_bank *bank;
+ unsigned long text_start = (unsigned long)&_stext;
+ unsigned long code_start = text_start;
+ unsigned long code_size = (unsigned long)&__start_rodata - (unsigned long)&_stext;
+ unsigned long text_size = (unsigned long)&_etext - text_start;
+ unsigned long rodata_start = (unsigned long)&__start_rodata;
+ unsigned long rodata_size = (unsigned long)&__end_rodata - rodata_start;
// TODO: remap writable only while remapping?
// TODO: What memtype for ttb when barebox is EFI loader?
@@ -383,9 +399,19 @@ void __mmu_init(bool mmu_on)
pos = rsv->end + 1;
}
+ if (region_overlap_size(pos, bank->start + bank->size - pos,
+ text_start, text_size)) {
+ remap_range((void *)pos, text_start - pos, MAP_CACHED);
+ /* skip barebox segments here, will be mapped below */
+ pos = text_start + text_size;
+ }
+
remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
}
+ remap_range((void *)code_start, code_size, MAP_CODE);
+ remap_range((void *)rodata_start, rodata_size, ARCH_MAP_CACHED_RO);
+
/* Make zero page faulting to catch NULL pointer derefs */
zero_page_faulting();
create_guard_page();
@@ -465,7 +491,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
*/
init_range(2);
- early_remap_range(membase, memsize, MAP_CACHED, false);
+ early_remap_range(membase, memsize, ARCH_MAP_CACHED_RWX, false);
if (optee_get_membase(&optee_membase)) {
optee_membase = membase + memsize - OPTEE_SIZE;
@@ -484,7 +510,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
early_remap_range(optee_membase, OPTEE_SIZE, MAP_FAULT, false);
early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext),
- MAP_CACHED, false);
+ ARCH_MAP_CACHED_RWX, false);
mmu_enable();
}
diff --git a/arch/arm/include/asm/pgtable64.h b/arch/arm/include/asm/pgtable64.h
index b88ffe6be5254e1b9d3968573d5e9b7a37828a55..6f6ef22717b76baaf7857b12d38c6074871ce143 100644
--- a/arch/arm/include/asm/pgtable64.h
+++ b/arch/arm/include/asm/pgtable64.h
@@ -59,6 +59,7 @@
#define PTE_BLOCK_NG (1 << 11)
#define PTE_BLOCK_PXN (UL(1) << 53)
#define PTE_BLOCK_UXN (UL(1) << 54)
+#define PTE_BLOCK_RO (UL(1) << 7)
/*
* AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
diff --git a/arch/arm/lib64/barebox.lds.S b/arch/arm/lib64/barebox.lds.S
index 50e4b6f42cb8d4de92b7450e5b864b9056b61916..caddbedd610f68658b7ecf7616947ce02a84e5e8 100644
--- a/arch/arm/lib64/barebox.lds.S
+++ b/arch/arm/lib64/barebox.lds.S
@@ -28,18 +28,19 @@ SECTIONS
}
BAREBOX_BARE_INIT_SIZE
- . = ALIGN(4);
+ . = ALIGN(4096);
__start_rodata = .;
.rodata : {
*(.rodata*)
RO_DATA_SECTION
}
+ . = ALIGN(4096);
+
__end_rodata = .;
_etext = .;
_sdata = .;
- . = ALIGN(4);
.data : { *(.data*) }
.barebox_imd : { BAREBOX_IMD }
--
2.39.5