* [PATCH 1/7] memory: request RO data section as separate region
2025-06-13 7:58 [PATCH 0/7] ARM: Map sections RO/XN Sascha Hauer
@ 2025-06-13 7:58 ` Sascha Hauer
2025-06-13 9:15 ` Ahmad Fatoum
2025-06-13 7:58 ` [PATCH 2/7] ARM: pass barebox base to mmu_early_enable() Sascha Hauer
` (6 subsequent siblings)
7 siblings, 1 reply; 17+ messages in thread
From: Sascha Hauer @ 2025-06-13 7:58 UTC (permalink / raw)
To: BAREBOX
Map the RO data section as a separate region so that it becomes visible in
the iomem output.
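For illustration, the iomem output then shows the barebox regions split
up like this (addresses and sizes hypothetical, not taken from a real
board):

  0x6fe00000 - 0x6fe4ffff (size 0x00050000) barebox code
  0x6fe50000 - 0x6fe5ffff (size 0x00010000) barebox RO data
  0x6fe60000 - 0x6fe9ffff (size 0x00040000) barebox data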
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
common/memory.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/common/memory.c b/common/memory.c
index a5b081be2c709f04667a67fbd2110520ab4973e3..708bfd26a0fc3d295d09632fb92e89f474bb7422 100644
--- a/common/memory.c
+++ b/common/memory.c
@@ -103,8 +103,12 @@ static int mem_malloc_resource(void)
malloc_end - malloc_start + 1);
request_barebox_region("barebox code",
(unsigned long)&_stext,
- (unsigned long)&_etext -
+ (unsigned long)&__start_rodata -
(unsigned long)&_stext);
+ request_barebox_region("barebox RO data",
+ (unsigned long)&__start_rodata,
+ (unsigned long)&__end_rodata -
+ (unsigned long)&__start_rodata);
request_barebox_region("barebox data",
(unsigned long)&_sdata,
(unsigned long)&_edata -
--
2.39.5
* Re: [PATCH 1/7] memory: request RO data section as separate region
2025-06-13 7:58 ` [PATCH 1/7] memory: request RO data section as separate region Sascha Hauer
@ 2025-06-13 9:15 ` Ahmad Fatoum
0 siblings, 0 replies; 17+ messages in thread
From: Ahmad Fatoum @ 2025-06-13 9:15 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX
On 6/13/25 09:58, Sascha Hauer wrote:
> Map the RO data section as separate region so that it becomes visible in
> the iomem output.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
> ---
> common/memory.c | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/common/memory.c b/common/memory.c
> index a5b081be2c709f04667a67fbd2110520ab4973e3..708bfd26a0fc3d295d09632fb92e89f474bb7422 100644
> --- a/common/memory.c
> +++ b/common/memory.c
> @@ -103,8 +103,12 @@ static int mem_malloc_resource(void)
> malloc_end - malloc_start + 1);
> request_barebox_region("barebox code",
> (unsigned long)&_stext,
> - (unsigned long)&_etext -
> + (unsigned long)&__start_rodata -
> (unsigned long)&_stext);
> + request_barebox_region("barebox RO data",
> + (unsigned long)&__start_rodata,
> + (unsigned long)&__end_rodata -
> + (unsigned long)&__start_rodata);
> request_barebox_region("barebox data",
> (unsigned long)&_sdata,
> (unsigned long)&_edata -
>
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
* [PATCH 2/7] ARM: pass barebox base to mmu_early_enable()
2025-06-13 7:58 [PATCH 0/7] ARM: Map sections RO/XN Sascha Hauer
2025-06-13 7:58 ` [PATCH 1/7] memory: request RO data section as separate region Sascha Hauer
@ 2025-06-13 7:58 ` Sascha Hauer
2025-06-13 9:17 ` Ahmad Fatoum
2025-06-13 7:58 ` [PATCH 3/7] ARM: mmu: move ARCH_MAP_WRITECOMBINE to header Sascha Hauer
` (5 subsequent siblings)
7 siblings, 1 reply; 17+ messages in thread
From: Sascha Hauer @ 2025-06-13 7:58 UTC (permalink / raw)
To: BAREBOX
We'll need the barebox base in the next patches to map the barebox
area differently.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_32.c | 2 +-
arch/arm/cpu/mmu_64.c | 2 +-
arch/arm/cpu/uncompress.c | 9 ++++-----
arch/arm/include/asm/mmu.h | 2 +-
4 files changed, 7 insertions(+), 8 deletions(-)
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index ec6bd27da4e161a96fd42497e1a54dafc9779937..2754bea88a514c2886e43ffc4dbf310d75055fca 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -591,7 +591,7 @@ void *dma_alloc_writecombine(struct device *dev, size_t size, dma_addr_t *dma_ha
return dma_alloc_map(dev, size, dma_handle, ARCH_MAP_WRITECOMBINE);
}
-void mmu_early_enable(unsigned long membase, unsigned long memsize)
+void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned long barebox_start)
{
uint32_t *ttb = (uint32_t *)arm_mem_ttb(membase + memsize);
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index bc1a44d0a7b85af769ffa9c9bfbf70f274fc1aa5..65c6f1bb9f8ac2f4b55baf46c2af3d3714060088 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -408,7 +408,7 @@ static void init_range(size_t total_level0_tables)
}
}
-void mmu_early_enable(unsigned long membase, unsigned long memsize)
+void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned long barebox_start)
{
int el;
u64 optee_membase;
diff --git a/arch/arm/cpu/uncompress.c b/arch/arm/cpu/uncompress.c
index 4657a4828e67e1b0acfa9dec3aef33bc4c525468..e24e754e0b16bda4974df0dd754936e87fe59d2d 100644
--- a/arch/arm/cpu/uncompress.c
+++ b/arch/arm/cpu/uncompress.c
@@ -63,11 +63,6 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
pr_debug("memory at 0x%08lx, size 0x%08lx\n", membase, memsize);
- if (IS_ENABLED(CONFIG_MMU))
- mmu_early_enable(membase, memsize);
- else if (IS_ENABLED(CONFIG_ARMV7R_MPU))
- set_cr(get_cr() | CR_C);
-
/* Add handoff data now, so arm_mem_barebox_image takes it into account */
if (boarddata)
handoff_data_add_dt(boarddata);
@@ -83,6 +78,10 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
#ifdef DEBUG
print_pbl_mem_layout(membase, endmem, barebox_base);
#endif
+ if (IS_ENABLED(CONFIG_MMU))
+ mmu_early_enable(membase, memsize, barebox_base);
+ else if (IS_ENABLED(CONFIG_ARMV7R_MPU))
+ set_cr(get_cr() | CR_C);
pr_debug("uncompressing barebox binary at 0x%p (size 0x%08x) to 0x%08lx (uncompressed size: 0x%08x)\n",
pg_start, pg_len, barebox_base, uncompressed_len);
diff --git a/arch/arm/include/asm/mmu.h b/arch/arm/include/asm/mmu.h
index ebf1e096c68295858bc8f3e6ab0e6f2dd6512717..5538cd3558e8e6c069923f7f6ccfc38e7df87c9f 100644
--- a/arch/arm/include/asm/mmu.h
+++ b/arch/arm/include/asm/mmu.h
@@ -64,7 +64,7 @@ void __dma_clean_range(unsigned long, unsigned long);
void __dma_flush_range(unsigned long, unsigned long);
void __dma_inv_range(unsigned long, unsigned long);
-void mmu_early_enable(unsigned long membase, unsigned long memsize);
+void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned long barebox_base);
void mmu_early_disable(void);
#endif /* __ASM_MMU_H */
--
2.39.5
* Re: [PATCH 2/7] ARM: pass barebox base to mmu_early_enable()
2025-06-13 7:58 ` [PATCH 2/7] ARM: pass barebox base to mmu_early_enable() Sascha Hauer
@ 2025-06-13 9:17 ` Ahmad Fatoum
0 siblings, 0 replies; 17+ messages in thread
From: Ahmad Fatoum @ 2025-06-13 9:17 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX
On 6/13/25 09:58, Sascha Hauer wrote:
> We'll need the barebox base in the next patches to map the barebox
> area differently.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
> ---
> arch/arm/cpu/mmu_32.c | 2 +-
> arch/arm/cpu/mmu_64.c | 2 +-
> arch/arm/cpu/uncompress.c | 9 ++++-----
> arch/arm/include/asm/mmu.h | 2 +-
> 4 files changed, 7 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
> index ec6bd27da4e161a96fd42497e1a54dafc9779937..2754bea88a514c2886e43ffc4dbf310d75055fca 100644
> --- a/arch/arm/cpu/mmu_32.c
> +++ b/arch/arm/cpu/mmu_32.c
> @@ -591,7 +591,7 @@ void *dma_alloc_writecombine(struct device *dev, size_t size, dma_addr_t *dma_ha
> return dma_alloc_map(dev, size, dma_handle, ARCH_MAP_WRITECOMBINE);
> }
>
> -void mmu_early_enable(unsigned long membase, unsigned long memsize)
> +void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned long barebox_start)
> {
> uint32_t *ttb = (uint32_t *)arm_mem_ttb(membase + memsize);
>
> diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
> index bc1a44d0a7b85af769ffa9c9bfbf70f274fc1aa5..65c6f1bb9f8ac2f4b55baf46c2af3d3714060088 100644
> --- a/arch/arm/cpu/mmu_64.c
> +++ b/arch/arm/cpu/mmu_64.c
> @@ -408,7 +408,7 @@ static void init_range(size_t total_level0_tables)
> }
> }
>
> -void mmu_early_enable(unsigned long membase, unsigned long memsize)
> +void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned long barebox_start)
> {
> int el;
> u64 optee_membase;
> diff --git a/arch/arm/cpu/uncompress.c b/arch/arm/cpu/uncompress.c
> index 4657a4828e67e1b0acfa9dec3aef33bc4c525468..e24e754e0b16bda4974df0dd754936e87fe59d2d 100644
> --- a/arch/arm/cpu/uncompress.c
> +++ b/arch/arm/cpu/uncompress.c
> @@ -63,11 +63,6 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
>
> pr_debug("memory at 0x%08lx, size 0x%08lx\n", membase, memsize);
>
> - if (IS_ENABLED(CONFIG_MMU))
> - mmu_early_enable(membase, memsize);
> - else if (IS_ENABLED(CONFIG_ARMV7R_MPU))
> - set_cr(get_cr() | CR_C);
> -
> /* Add handoff data now, so arm_mem_barebox_image takes it into account */
> if (boarddata)
> handoff_data_add_dt(boarddata);
> @@ -83,6 +78,10 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
> #ifdef DEBUG
> print_pbl_mem_layout(membase, endmem, barebox_base);
> #endif
> + if (IS_ENABLED(CONFIG_MMU))
> + mmu_early_enable(membase, memsize, barebox_base);
> + else if (IS_ENABLED(CONFIG_ARMV7R_MPU))
> + set_cr(get_cr() | CR_C);
>
> pr_debug("uncompressing barebox binary at 0x%p (size 0x%08x) to 0x%08lx (uncompressed size: 0x%08x)\n",
> pg_start, pg_len, barebox_base, uncompressed_len);
> diff --git a/arch/arm/include/asm/mmu.h b/arch/arm/include/asm/mmu.h
> index ebf1e096c68295858bc8f3e6ab0e6f2dd6512717..5538cd3558e8e6c069923f7f6ccfc38e7df87c9f 100644
> --- a/arch/arm/include/asm/mmu.h
> +++ b/arch/arm/include/asm/mmu.h
> @@ -64,7 +64,7 @@ void __dma_clean_range(unsigned long, unsigned long);
> void __dma_flush_range(unsigned long, unsigned long);
> void __dma_inv_range(unsigned long, unsigned long);
>
> -void mmu_early_enable(unsigned long membase, unsigned long memsize);
> +void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned long barebox_base);
> void mmu_early_disable(void);
>
> #endif /* __ASM_MMU_H */
>
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
* [PATCH 3/7] ARM: mmu: move ARCH_MAP_WRITECOMBINE to header
2025-06-13 7:58 [PATCH 0/7] ARM: Map sections RO/XN Sascha Hauer
2025-06-13 7:58 ` [PATCH 1/7] memory: request RO data section as separate region Sascha Hauer
2025-06-13 7:58 ` [PATCH 2/7] ARM: pass barebox base to mmu_early_enable() Sascha Hauer
@ 2025-06-13 7:58 ` Sascha Hauer
2025-06-13 9:18 ` Ahmad Fatoum
2025-06-13 7:58 ` [PATCH 4/7] ARM: MMU: map memory for barebox proper pagewise Sascha Hauer
` (4 subsequent siblings)
7 siblings, 1 reply; 17+ messages in thread
From: Sascha Hauer @ 2025-06-13 7:58 UTC (permalink / raw)
To: BAREBOX
ARCH_MAP_WRITECOMBINE is defined identically for both mmu_32 and mmu_64,
and we'll add more mapping types later, so move it to a header file to be
shared by both MMU implementations.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu-common.h | 2 ++
arch/arm/cpu/mmu_32.c | 1 -
arch/arm/cpu/mmu_64.c | 2 --
3 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/arm/cpu/mmu-common.h b/arch/arm/cpu/mmu-common.h
index d0b50662570a48fafd47edac9fbc12fe2a62458a..0f11a4b73d1199ec2400f64a2f057cf940d4ff2d 100644
--- a/arch/arm/cpu/mmu-common.h
+++ b/arch/arm/cpu/mmu-common.h
@@ -9,6 +9,8 @@
#include <linux/kernel.h>
#include <linux/sizes.h>
+#define ARCH_MAP_WRITECOMBINE ((unsigned)-1)
+
struct device;
void dma_inv_range(void *ptr, size_t size);
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 2754bea88a514c2886e43ffc4dbf310d75055fca..3e0c761cd43486d3193b80b5f79097bdd5cd333c 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -23,7 +23,6 @@
#include "mmu_32.h"
#define PTRS_PER_PTE (PGDIR_SIZE / PAGE_SIZE)
-#define ARCH_MAP_WRITECOMBINE ((unsigned)-1)
static inline uint32_t *get_ttb(void)
{
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index 65c6f1bb9f8ac2f4b55baf46c2af3d3714060088..440258fa767735a4537abd71030a5540813fc443 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -24,8 +24,6 @@
#include "mmu_64.h"
-#define ARCH_MAP_WRITECOMBINE ((unsigned)-1)
-
static uint64_t *get_ttb(void)
{
return (uint64_t *)get_ttbr(current_el());
--
2.39.5
* Re: [PATCH 3/7] ARM: mmu: move ARCH_MAP_WRITECOMBINE to header
2025-06-13 7:58 ` [PATCH 3/7] ARM: mmu: move ARCH_MAP_WRITECOMBINE to header Sascha Hauer
@ 2025-06-13 9:18 ` Ahmad Fatoum
0 siblings, 0 replies; 17+ messages in thread
From: Ahmad Fatoum @ 2025-06-13 9:18 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX
On 6/13/25 09:58, Sascha Hauer wrote:
> ARCH_MAP_WRITECOMBINE is defined equally for both mmu_32 and mmu_64 and
> we'll add more mapping types later, so move it to a header file to be
> shared by both mmu implementations.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
> ---
> arch/arm/cpu/mmu-common.h | 2 ++
> arch/arm/cpu/mmu_32.c | 1 -
> arch/arm/cpu/mmu_64.c | 2 --
> 3 files changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm/cpu/mmu-common.h b/arch/arm/cpu/mmu-common.h
> index d0b50662570a48fafd47edac9fbc12fe2a62458a..0f11a4b73d1199ec2400f64a2f057cf940d4ff2d 100644
> --- a/arch/arm/cpu/mmu-common.h
> +++ b/arch/arm/cpu/mmu-common.h
> @@ -9,6 +9,8 @@
> #include <linux/kernel.h>
> #include <linux/sizes.h>
>
> +#define ARCH_MAP_WRITECOMBINE ((unsigned)-1)
> +
> struct device;
>
> void dma_inv_range(void *ptr, size_t size);
> diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
> index 2754bea88a514c2886e43ffc4dbf310d75055fca..3e0c761cd43486d3193b80b5f79097bdd5cd333c 100644
> --- a/arch/arm/cpu/mmu_32.c
> +++ b/arch/arm/cpu/mmu_32.c
> @@ -23,7 +23,6 @@
> #include "mmu_32.h"
>
> #define PTRS_PER_PTE (PGDIR_SIZE / PAGE_SIZE)
> -#define ARCH_MAP_WRITECOMBINE ((unsigned)-1)
>
> static inline uint32_t *get_ttb(void)
> {
> diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
> index 65c6f1bb9f8ac2f4b55baf46c2af3d3714060088..440258fa767735a4537abd71030a5540813fc443 100644
> --- a/arch/arm/cpu/mmu_64.c
> +++ b/arch/arm/cpu/mmu_64.c
> @@ -24,8 +24,6 @@
>
> #include "mmu_64.h"
>
> -#define ARCH_MAP_WRITECOMBINE ((unsigned)-1)
> -
> static uint64_t *get_ttb(void)
> {
> return (uint64_t *)get_ttbr(current_el());
>
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
* [PATCH 4/7] ARM: MMU: map memory for barebox proper pagewise
2025-06-13 7:58 [PATCH 0/7] ARM: Map sections RO/XN Sascha Hauer
` (2 preceding siblings ...)
2025-06-13 7:58 ` [PATCH 3/7] ARM: mmu: move ARCH_MAP_WRITECOMBINE to header Sascha Hauer
@ 2025-06-13 7:58 ` Sascha Hauer
2025-06-13 9:23 ` Ahmad Fatoum
2025-06-13 7:58 ` [PATCH 5/7] ARM: MMU: map text segment ro and data segments execute never Sascha Hauer
` (3 subsequent siblings)
7 siblings, 1 reply; 17+ messages in thread
From: Sascha Hauer @ 2025-06-13 7:58 UTC (permalink / raw)
To: BAREBOX
Map the remainder of the memory explicitly with two-level page tables. This is
the region where barebox proper will end up. In barebox proper we'll remap the
code segments readonly/executable and the RO data segments readonly/execute-never.
For this we need the memory to be mapped pagewise. We can't split the
section-wise mapping up into a pagewise mapping later, because that would
require a break-before-make sequence, which we can't do while barebox proper is
running at the location being remapped.
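To illustrate the resulting early layout, here is a sketch with made-up
numbers (1 GiB of DRAM at 0x40000000, a 32 MiB OP-TEE carveout and 8 MiB
for barebox proper; none of these values are from the patch):

	membase       = 0x40000000;
	optee_start   = membase + memsize - OPTEE_SIZE;  /* 0x7e000000 */
	barebox_start = 0x7d800000;
	barebox_size  = optee_start - barebox_start;     /* SZ_8M */

	/* bulk of DRAM: 1 MiB sections, cached */
	early_remap_range(membase, barebox_start - membase, MAP_CACHED, false);
	/* barebox proper: 4 KiB pages, so it can be remapped RO/XN later */
	early_remap_range(barebox_start, barebox_size, MAP_CACHED, true);
	/* OP-TEE carveout: uncached */
	early_remap_range(optee_start, OPTEE_SIZE, MAP_UNCACHED, false);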
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_32.c | 37 +++++++++++++++++++++++++++++--------
1 file changed, 29 insertions(+), 8 deletions(-)
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 3e0c761cd43486d3193b80b5f79097bdd5cd333c..7ad66070260eae337773e3acd1bcbea4fcab12f3 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -241,7 +241,8 @@ static uint32_t get_pmd_flags(int map_type)
return pte_flags_to_pmd(get_pte_flags(map_type));
}
-static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t size, unsigned map_type)
+static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t size,
+ unsigned map_type, bool force_pages)
{
u32 virt_addr = (u32)_virt_addr;
u32 pte_flags, pmd_flags;
@@ -262,7 +263,7 @@ static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t s
if (size >= PGDIR_SIZE && pgdir_size_aligned &&
IS_ALIGNED(phys_addr, PGDIR_SIZE) &&
- !pgd_type_table(*pgd)) {
+ !pgd_type_table(*pgd) && !force_pages) {
/*
* TODO: Add code to discard a page table and
* replace it with a section
@@ -325,14 +326,15 @@ static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t s
tlb_invalidate();
}
-static void early_remap_range(u32 addr, size_t size, unsigned map_type)
+
+static void early_remap_range(u32 addr, size_t size, unsigned map_type, bool force_pages)
{
- __arch_remap_range((void *)addr, addr, size, map_type);
+ __arch_remap_range((void *)addr, addr, size, map_type, force_pages);
}
int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size, unsigned map_type)
{
- __arch_remap_range(virt_addr, phys_addr, size, map_type);
+ __arch_remap_range(virt_addr, phys_addr, size, map_type, false);
if (map_type == MAP_UNCACHED)
dma_inv_range(virt_addr, size);
@@ -593,6 +595,7 @@ void *dma_alloc_writecombine(struct device *dev, size_t size, dma_addr_t *dma_ha
void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned long barebox_start)
{
uint32_t *ttb = (uint32_t *)arm_mem_ttb(membase + memsize);
+ unsigned long barebox_size = SZ_8M, optee_start;
pr_debug("enabling MMU, ttb @ 0x%p\n", ttb);
@@ -614,9 +617,27 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
create_flat_mapping();
/* maps main memory as cachable */
- early_remap_range(membase, memsize - OPTEE_SIZE, MAP_CACHED);
- early_remap_range(membase + memsize - OPTEE_SIZE, OPTEE_SIZE, MAP_UNCACHED);
- early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext), MAP_CACHED);
+ optee_start = membase + memsize - OPTEE_SIZE;
+ barebox_size = optee_start - barebox_start;
+
+ /*
+ * map the bulk of the memory as sections to avoid allocating too many page tables
+ * at this early stage
+ */
+ early_remap_range(membase, barebox_start - membase, MAP_CACHED, false);
+ /*
+ * Map the remainder of the memory explicitly with two level page tables. This is
+ * the place where barebox proper ends at. In barebox proper we'll remap the code
+ * segments readonly/executable and the ro segments readonly/execute never. For this
+ * we need the memory being mapped pagewise. We can't do the split up from section
+ * wise mapping to pagewise mapping later because that would require us to do
+ * a break-before-make sequence which we can't do when barebox proper is running
+ * at the location being remapped.
+ */
+ early_remap_range(barebox_start, barebox_size, MAP_CACHED, true);
+ early_remap_range(optee_start, OPTEE_SIZE, MAP_UNCACHED, false);
+ early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext),
+ MAP_CACHED, false);
__mmu_cache_on();
}
--
2.39.5
* Re: [PATCH 4/7] ARM: MMU: map memory for barebox proper pagewise
2025-06-13 7:58 ` [PATCH 4/7] ARM: MMU: map memory for barebox proper pagewise Sascha Hauer
@ 2025-06-13 9:23 ` Ahmad Fatoum
0 siblings, 0 replies; 17+ messages in thread
From: Ahmad Fatoum @ 2025-06-13 9:23 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX
On 6/13/25 09:58, Sascha Hauer wrote:
> Map the remainder of the memory explicitly with two level page tables. This is
> the place where barebox proper ends at. In barebox proper we'll remap the code
> segments readonly/executable and the ro segments readonly/execute never. For this
> we need the memory being mapped pagewise. We can't do the split up from section
> wise mapping to pagewise mapping later because that would require us to do
> a break-before-make sequence which we can't do when barebox proper is running
> at the location being remapped.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Just a small thing to change below:
> uint32_t *ttb = (uint32_t *)arm_mem_ttb(membase + memsize);
> + unsigned long barebox_size = SZ_8M, optee_start;
SZ_8M is overwritten later, so should be dropped here.
Cheers,
Ahmad
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
* [PATCH 5/7] ARM: MMU: map text segment ro and data segments execute never
2025-06-13 7:58 [PATCH 0/7] ARM: Map sections RO/XN Sascha Hauer
` (3 preceding siblings ...)
2025-06-13 7:58 ` [PATCH 4/7] ARM: MMU: map memory for barebox proper pagewise Sascha Hauer
@ 2025-06-13 7:58 ` Sascha Hauer
2025-06-13 10:12 ` Ahmad Fatoum
2025-06-13 10:36 ` Ahmad Fatoum
2025-06-13 7:58 ` [PATCH 6/7] ARM: MMU64: map memory for barebox proper pagewise Sascha Hauer
` (2 subsequent siblings)
7 siblings, 2 replies; 17+ messages in thread
From: Sascha Hauer @ 2025-06-13 7:58 UTC (permalink / raw)
To: BAREBOX
With this, all segments in DRAM except the text segment are mapped
execute-never, so that only the barebox code can actually be executed.
Also map the readonly data segment readonly so that it can't be
modified.
The mapping is only implemented in barebox proper. The PBL still maps
the whole DRAM RWX.
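One way to see the effect is to attempt a store into the RO data
segment; a minimal sketch, assuming ARM_MMU_PERMISSIONS is enabled (the
function and variable names are made up for illustration):

	static const unsigned int ro_marker = 0xdeadbeef;

	static void rodata_write_test(void)
	{
		/* The readonly data segment is now mapped RO, so this
		 * store raises a permission fault instead of silently
		 * modifying ro_marker. */
		*(volatile unsigned int *)&ro_marker = 0;
	}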
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/Kconfig | 12 ++++++++++
arch/arm/cpu/mmu-common.h | 2 ++
arch/arm/cpu/mmu_32.c | 52 ++++++++++++++++++++++++++++++++++++++------
arch/arm/lib32/barebox.lds.S | 3 ++-
include/mmu.h | 1 +
5 files changed, 62 insertions(+), 8 deletions(-)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 0800b15d784ca0ab975cf7ceb2f7b47ed10643b1..4c5f58461b82394f3f5a62c2e68cdb36b38bee85 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -389,6 +389,18 @@ config ARM_UNWIND
the performance is not affected. Currently, this feature
only works with EABI compilers. If unsure say Y.
+config ARM_MMU_PERMISSIONS
+ bool "Map with extended RO/X permissions"
+ default y
+ help
+ Enable this option to map readonly sections as readonly, executable
+ sections as readonly/executable and the remainder of the SDRAM as
+ read/write/non-executable.
+ Traditionally barebox maps the whole SDRAM as read/write/execute.
+ You get this behaviour by disabling this option, which is meant as
+ a debugging facility. It can go away once the extended permission
+ settings have proven to work reliably.
+
config ARM_SEMIHOSTING
bool "enable ARM semihosting support"
select SEMIHOSTING
diff --git a/arch/arm/cpu/mmu-common.h b/arch/arm/cpu/mmu-common.h
index 0f11a4b73d1199ec2400f64a2f057cf940d4ff2d..99770f943099fc64ddc15ad8f3ec4fb3c31e8449 100644
--- a/arch/arm/cpu/mmu-common.h
+++ b/arch/arm/cpu/mmu-common.h
@@ -10,6 +10,8 @@
#include <linux/sizes.h>
#define ARCH_MAP_WRITECOMBINE ((unsigned)-1)
+#define ARCH_MAP_CACHED_RWX ((unsigned)-2)
+#define ARCH_MAP_CACHED_RO ((unsigned)-3)
struct device;
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 7ad66070260eae337773e3acd1bcbea4fcab12f3..532a77c271b6046e769f1a6a9a954f5b93bd5e7f 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -19,6 +19,7 @@
#include <asm/system_info.h>
#include <asm/sections.h>
#include <linux/pagemap.h>
+#include <range.h>
#include "mmu_32.h"
@@ -47,11 +48,18 @@ static inline void tlb_invalidate(void)
);
}
+#define PTE_FLAGS_CACHED_V7_RWX (PTE_EXT_TEX(1) | PTE_BUFFERABLE | PTE_CACHEABLE | \
+ PTE_EXT_AP_URW_SRW)
#define PTE_FLAGS_CACHED_V7 (PTE_EXT_TEX(1) | PTE_BUFFERABLE | PTE_CACHEABLE | \
- PTE_EXT_AP_URW_SRW)
+ PTE_EXT_AP_URW_SRW | PTE_EXT_XN)
+#define PTE_FLAGS_CACHED_RO_V7 (PTE_EXT_TEX(1) | PTE_BUFFERABLE | PTE_CACHEABLE | \
+ PTE_EXT_APX | PTE_EXT_AP0 | PTE_EXT_AP1 | PTE_EXT_XN)
+#define PTE_FLAGS_CODE_V7 (PTE_EXT_TEX(1) | PTE_BUFFERABLE | PTE_CACHEABLE | \
+ PTE_EXT_APX | PTE_EXT_AP0 | PTE_EXT_AP1)
#define PTE_FLAGS_WC_V7 (PTE_EXT_TEX(1) | PTE_EXT_AP_URW_SRW | PTE_EXT_XN)
#define PTE_FLAGS_UNCACHED_V7 (PTE_EXT_AP_URW_SRW | PTE_EXT_XN)
#define PTE_FLAGS_CACHED_V4 (PTE_SMALL_AP_UNO_SRW | PTE_BUFFERABLE | PTE_CACHEABLE)
+#define PTE_FLAGS_CACHED_RO_V4 (PTE_SMALL_AP_UNO_SRO | PTE_BUFFERABLE | PTE_CACHEABLE)
#define PTE_FLAGS_UNCACHED_V4 PTE_SMALL_AP_UNO_SRW
#define PGD_FLAGS_WC_V7 (PMD_SECT_TEX(1) | PMD_SECT_DEF_UNCACHED | \
PMD_SECT_BUFFERABLE | PMD_SECT_XN)
@@ -212,10 +220,16 @@ static uint32_t get_pte_flags(int map_type)
{
if (cpu_architecture() >= CPU_ARCH_ARMv7) {
switch (map_type) {
+ case ARCH_MAP_CACHED_RWX:
+ return PTE_FLAGS_CACHED_V7_RWX;
+ case ARCH_MAP_CACHED_RO:
+ return PTE_FLAGS_CACHED_RO_V7;
case MAP_CACHED:
return PTE_FLAGS_CACHED_V7;
case MAP_UNCACHED:
return PTE_FLAGS_UNCACHED_V7;
+ case MAP_CODE:
+ return PTE_FLAGS_CODE_V7;
case ARCH_MAP_WRITECOMBINE:
return PTE_FLAGS_WC_V7;
case MAP_FAULT:
@@ -224,6 +238,10 @@ static uint32_t get_pte_flags(int map_type)
}
} else {
switch (map_type) {
+ case ARCH_MAP_CACHED_RO:
+ case MAP_CODE:
+ return PTE_FLAGS_CACHED_RO_V4;
+ case ARCH_MAP_CACHED_RWX:
case MAP_CACHED:
return PTE_FLAGS_CACHED_V4;
case MAP_UNCACHED:
@@ -254,6 +272,8 @@ static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t s
pte_flags = get_pte_flags(map_type);
pmd_flags = pte_flags_to_pmd(pte_flags);
+ pr_debug("%s: 0x%08x 0x%08x type %d\n", __func__, virt_addr, size, map_type);
+
size = PAGE_ALIGN(size);
while (size) {
@@ -535,6 +555,10 @@ void __mmu_init(bool mmu_on)
{
struct memory_bank *bank;
uint32_t *ttb = get_ttb();
+ unsigned long text_start = (unsigned long)&_stext;
+ unsigned long text_size = (unsigned long)&__start_rodata - (unsigned long)&_stext;
+ unsigned long rodata_start = (unsigned long)&__start_rodata;
+ unsigned long rodata_size = (unsigned long)&__end_rodata - rodata_start;
if (!request_barebox_region("ttb", (unsigned long)ttb,
ARM_EARLY_PAGETABLE_SIZE))
@@ -550,6 +574,8 @@ void __mmu_init(bool mmu_on)
pr_debug("ttb: 0x%p\n", ttb);
+ vectors_init();
+
/*
* Early mmu init will have mapped everything but the initial memory area
* (excluding final OPTEE_SIZE bytes) uncached. We have now discovered
@@ -568,10 +594,22 @@ void __mmu_init(bool mmu_on)
pos = rsv->end + 1;
}
- remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
+ if (IS_ENABLED(CONFIG_ARM_MMU_PERMISSIONS)) {
+ if (region_overlap_size(pos, bank->start + bank->size - pos,
+ text_start, text_size)) {
+ remap_range((void *)pos, text_start - pos, MAP_CACHED);
+ remap_range((void *)text_start, text_size, MAP_CODE);
+ remap_range((void *)rodata_start, rodata_size, ARCH_MAP_CACHED_RO);
+ remap_range((void *)(rodata_start + rodata_size),
+ bank->start + bank->size - (rodata_start + rodata_size),
+ MAP_CACHED);
+ } else {
+ remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
+ }
+ } else {
+ remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
+ }
}
-
- vectors_init();
}
/*
@@ -624,7 +662,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
* map the bulk of the memory as sections to avoid allocating too many page tables
* at this early stage
*/
- early_remap_range(membase, barebox_start - membase, MAP_CACHED, false);
+ early_remap_range(membase, barebox_start - membase, ARCH_MAP_CACHED_RWX, false);
/*
* Map the remainder of the memory explicitly with two level page tables. This is
* the place where barebox proper ends at. In barebox proper we'll remap the code
@@ -634,10 +672,10 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
* a break-before-make sequence which we can't do when barebox proper is running
* at the location being remapped.
*/
- early_remap_range(barebox_start, barebox_size, MAP_CACHED, true);
+ early_remap_range(barebox_start, barebox_size, ARCH_MAP_CACHED_RWX, true);
early_remap_range(optee_start, OPTEE_SIZE, MAP_UNCACHED, false);
early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext),
- MAP_CACHED, false);
+ ARCH_MAP_CACHED_RWX, false);
__mmu_cache_on();
}
diff --git a/arch/arm/lib32/barebox.lds.S b/arch/arm/lib32/barebox.lds.S
index a52556a35696aea6f15cad5fd3f0275e8e6349b1..dbfdd2e9c110133f7fb45e06911bfc9ea9e8299c 100644
--- a/arch/arm/lib32/barebox.lds.S
+++ b/arch/arm/lib32/barebox.lds.S
@@ -30,7 +30,7 @@ SECTIONS
}
BAREBOX_BARE_INIT_SIZE
- . = ALIGN(4);
+ . = ALIGN(4096);
__start_rodata = .;
.rodata : {
*(.rodata*)
@@ -53,6 +53,7 @@ SECTIONS
__stop_unwind_tab = .;
}
#endif
+ . = ALIGN(4096);
__end_rodata = .;
_etext = .;
_sdata = .;
diff --git a/include/mmu.h b/include/mmu.h
index 84ec6c5efb3eb8020fdc98e76a3614c137a0f8e9..20855e89eda301527b8cd69d868d58fc79637f5e 100644
--- a/include/mmu.h
+++ b/include/mmu.h
@@ -8,6 +8,7 @@
#define MAP_UNCACHED 0
#define MAP_CACHED 1
#define MAP_FAULT 2
+#define MAP_CODE 3
/*
* Depending on the architecture the default mapping can be
--
2.39.5
* Re: [PATCH 5/7] ARM: MMU: map text segment ro and data segments execute never
2025-06-13 7:58 ` [PATCH 5/7] ARM: MMU: map text segment ro and data segments execute never Sascha Hauer
@ 2025-06-13 10:12 ` Ahmad Fatoum
2025-06-13 10:36 ` Ahmad Fatoum
1 sibling, 0 replies; 17+ messages in thread
From: Ahmad Fatoum @ 2025-06-13 10:12 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX
Hi,
On 6/13/25 09:58, Sascha Hauer wrote:
> + pr_debug("%s: 0x%08x 0x%08x type %d\n", __func__, virt_addr, size, map_type);
I can add a follow-up patch turning the type into a string.
> + unsigned long text_start = (unsigned long)&_stext;
> + unsigned long text_size = (unsigned long)&__start_rodata - (unsigned long)&_stext;
text_size is an unfortunate name as text_start + text_size != _etext -
1, which is surprising.
I would prefer:
text_start = code_start = &_stext;
text_size = &_etext - text_start;
code_size = &__start_rodata - code_start;
> + unsigned long rodata_start = (unsigned long)&__start_rodata;
> + unsigned long rodata_size = (unsigned long)&__end_rodata - rodata_start;
>
> if (!request_barebox_region("ttb", (unsigned long)ttb,
> ARM_EARLY_PAGETABLE_SIZE))
> @@ -550,6 +574,8 @@ void __mmu_init(bool mmu_on)
>
> pr_debug("ttb: 0x%p\n", ttb);
>
> + vectors_init();
Any particular reason to move this around?
> +
> /*
> * Early mmu init will have mapped everything but the initial memory area
> * (excluding final OPTEE_SIZE bytes) uncached. We have now discovered
> @@ -568,10 +594,22 @@ void __mmu_init(bool mmu_on)
> pos = rsv->end + 1;
> }
>
> - remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
> + if (IS_ENABLED(CONFIG_ARM_MMU_PERMISSIONS)) {
> + if (region_overlap_size(pos, bank->start + bank->size - pos,
> + text_start, text_size)) {
Wouldn't matter, but we should check overlap against the full range we
are going to remap specially. With my suggested text_size/code_size
changes above that would be correct(er).
> + remap_range((void *)pos, text_start - pos, MAP_CACHED);
This is ok.
> + remap_range((void *)text_start, text_size, MAP_CODE);
> + remap_range((void *)rodata_start, rodata_size, ARCH_MAP_CACHED_RO);
These I would move out of the loop after the iteration.
> + remap_range((void *)(rodata_start + rodata_size),
> + bank->start + bank->size - (rodata_start + rodata_size),
> + MAP_CACHED);
> + } else {
> + remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
> + }
> + } else {
> + remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
> + }
If we combine the two if conditionals into a single &&ed one, we can
replace the three remap_range calls with a single one:

pos = text_start + text_size;
if (pos >= bank->start + bank->size)
continue;
/* We carved out a gap for the barebox parts, so fall
* through to remapping the rest
*/
}
remap_range((void *)pos, bank->start + bank->size - pos,
MAP_CACHED);
}
Then we can do the remapping of the sections here. I think that would
aid readability.
Thanks,
Ahmad
> }
> -
> - vectors_init();
> }
>
> /*
> @@ -624,7 +662,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
> * map the bulk of the memory as sections to avoid allocating too many page tables
> * at this early stage
> */
> - early_remap_range(membase, barebox_start - membase, MAP_CACHED, false);
> + early_remap_range(membase, barebox_start - membase, ARCH_MAP_CACHED_RWX, false);
> /*
> * Map the remainder of the memory explicitly with two level page tables. This is
> * the place where barebox proper ends at. In barebox proper we'll remap the code
> @@ -634,10 +672,10 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
> * a break-before-make sequence which we can't do when barebox proper is running
> * at the location being remapped.
> */
> - early_remap_range(barebox_start, barebox_size, MAP_CACHED, true);
> + early_remap_range(barebox_start, barebox_size, ARCH_MAP_CACHED_RWX, true);
> early_remap_range(optee_start, OPTEE_SIZE, MAP_UNCACHED, false);
> early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext),
> - MAP_CACHED, false);
> + ARCH_MAP_CACHED_RWX, false);
>
> __mmu_cache_on();
> }
> diff --git a/arch/arm/lib32/barebox.lds.S b/arch/arm/lib32/barebox.lds.S
> index a52556a35696aea6f15cad5fd3f0275e8e6349b1..dbfdd2e9c110133f7fb45e06911bfc9ea9e8299c 100644
> --- a/arch/arm/lib32/barebox.lds.S
> +++ b/arch/arm/lib32/barebox.lds.S
> @@ -30,7 +30,7 @@ SECTIONS
> }
> BAREBOX_BARE_INIT_SIZE
>
> - . = ALIGN(4);
> + . = ALIGN(4096);
> __start_rodata = .;
> .rodata : {
> *(.rodata*)
> @@ -53,6 +53,7 @@ SECTIONS
> __stop_unwind_tab = .;
> }
> #endif
> + . = ALIGN(4096);
> __end_rodata = .;
> _etext = .;
> _sdata = .;
> diff --git a/include/mmu.h b/include/mmu.h
> index 84ec6c5efb3eb8020fdc98e76a3614c137a0f8e9..20855e89eda301527b8cd69d868d58fc79637f5e 100644
> --- a/include/mmu.h
> +++ b/include/mmu.h
> @@ -8,6 +8,7 @@
> #define MAP_UNCACHED 0
> #define MAP_CACHED 1
> #define MAP_FAULT 2
> +#define MAP_CODE 3
>
> /*
> * Depending on the architecture the default mapping can be
>
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
* Re: [PATCH 5/7] ARM: MMU: map text segment ro and data segments execute never
2025-06-13 7:58 ` [PATCH 5/7] ARM: MMU: map text segment ro and data segments execute never Sascha Hauer
2025-06-13 10:12 ` Ahmad Fatoum
@ 2025-06-13 10:36 ` Ahmad Fatoum
1 sibling, 0 replies; 17+ messages in thread
From: Ahmad Fatoum @ 2025-06-13 10:36 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX
On 6/13/25 09:58, Sascha Hauer wrote:
> With this all segments in the DRAM except the text segment are mapped
> execute-never so that only the barebox code can actually be executed.
> Also map the readonly data segment readonly so that it can't be
> modified.
>
> The mapping is only implemented in barebox proper. The PBL still maps
> the whole DRAM rwx.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
> ---
> arch/arm/Kconfig | 12 ++++++++++
> arch/arm/cpu/mmu-common.h | 2 ++
> arch/arm/cpu/mmu_32.c | 52 ++++++++++++++++++++++++++++++++++++++------
> arch/arm/lib32/barebox.lds.S | 3 ++-
> include/mmu.h | 1 +
> 5 files changed, 62 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index 0800b15d784ca0ab975cf7ceb2f7b47ed10643b1..4c5f58461b82394f3f5a62c2e68cdb36b38bee85 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -389,6 +389,18 @@ config ARM_UNWIND
> the performance is not affected. Currently, this feature
> only works with EABI compilers. If unsure say Y.
>
> +config ARM_MMU_PERMISSIONS
> + bool "Map with extended RO/X permissions"
> + default y
> + help
> + Enable this option to map readonly sections as readonly, executable
> + sections as readonly/executable and the remainder of the SDRAM as
> + read/write/non-executable.
> + Traditionally barebox maps the whole SDRAM as read/write/execute.
> + You get this behaviour by disabling this option, which is meant as
> + a debugging facility. It can go away once the extended permission
> + settings have proven to work reliably.
> +
> config ARM_SEMIHOSTING
> bool "enable ARM semihosting support"
> select SEMIHOSTING
> diff --git a/arch/arm/cpu/mmu-common.h b/arch/arm/cpu/mmu-common.h
> index 0f11a4b73d1199ec2400f64a2f057cf940d4ff2d..99770f943099fc64ddc15ad8f3ec4fb3c31e8449 100644
> --- a/arch/arm/cpu/mmu-common.h
> +++ b/arch/arm/cpu/mmu-common.h
> @@ -10,6 +10,8 @@
> #include <linux/sizes.h>
>
> #define ARCH_MAP_WRITECOMBINE ((unsigned)-1)
> +#define ARCH_MAP_CACHED_RWX ((unsigned)-2)
> +#define ARCH_MAP_CACHED_RO ((unsigned)-3)
>
> struct device;
>
> diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
> index 7ad66070260eae337773e3acd1bcbea4fcab12f3..532a77c271b6046e769f1a6a9a954f5b93bd5e7f 100644
> --- a/arch/arm/cpu/mmu_32.c
> +++ b/arch/arm/cpu/mmu_32.c
> @@ -19,6 +19,7 @@
> #include <asm/system_info.h>
> #include <asm/sections.h>
> #include <linux/pagemap.h>
> +#include <range.h>
>
> #include "mmu_32.h"
>
> @@ -47,11 +48,18 @@ static inline void tlb_invalidate(void)
> );
> }
>
> +#define PTE_FLAGS_CACHED_V7_RWX (PTE_EXT_TEX(1) | PTE_BUFFERABLE | PTE_CACHEABLE | \
> + PTE_EXT_AP_URW_SRW)
> #define PTE_FLAGS_CACHED_V7 (PTE_EXT_TEX(1) | PTE_BUFFERABLE | PTE_CACHEABLE | \
> - PTE_EXT_AP_URW_SRW)
> + PTE_EXT_AP_URW_SRW | PTE_EXT_XN)
> +#define PTE_FLAGS_CACHED_RO_V7 (PTE_EXT_TEX(1) | PTE_BUFFERABLE | PTE_CACHEABLE | \
> + PTE_EXT_APX | PTE_EXT_AP0 | PTE_EXT_AP1 | PTE_EXT_XN)
> +#define PTE_FLAGS_CODE_V7 (PTE_EXT_TEX(1) | PTE_BUFFERABLE | PTE_CACHEABLE | \
> + PTE_EXT_APX | PTE_EXT_AP0 | PTE_EXT_AP1)
> #define PTE_FLAGS_WC_V7 (PTE_EXT_TEX(1) | PTE_EXT_AP_URW_SRW | PTE_EXT_XN)
> #define PTE_FLAGS_UNCACHED_V7 (PTE_EXT_AP_URW_SRW | PTE_EXT_XN)
> #define PTE_FLAGS_CACHED_V4 (PTE_SMALL_AP_UNO_SRW | PTE_BUFFERABLE | PTE_CACHEABLE)
> +#define PTE_FLAGS_CACHED_RO_V4 (PTE_SMALL_AP_UNO_SRO | PTE_BUFFERABLE | PTE_CACHEABLE)
> #define PTE_FLAGS_UNCACHED_V4 PTE_SMALL_AP_UNO_SRW
> #define PGD_FLAGS_WC_V7 (PMD_SECT_TEX(1) | PMD_SECT_DEF_UNCACHED | \
> PMD_SECT_BUFFERABLE | PMD_SECT_XN)
> @@ -212,10 +220,16 @@ static uint32_t get_pte_flags(int map_type)
> {
> if (cpu_architecture() >= CPU_ARCH_ARMv7) {
> switch (map_type) {
> + case ARCH_MAP_CACHED_RWX:
> + return PTE_FLAGS_CACHED_V7_RWX;
> + case ARCH_MAP_CACHED_RO:
> + return PTE_FLAGS_CACHED_RO_V7;
> case MAP_CACHED:
> return PTE_FLAGS_CACHED_V7;
> case MAP_UNCACHED:
> return PTE_FLAGS_UNCACHED_V7;
> + case MAP_CODE:
> + return PTE_FLAGS_CODE_V7;
> case ARCH_MAP_WRITECOMBINE:
> return PTE_FLAGS_WC_V7;
> case MAP_FAULT:
> @@ -224,6 +238,10 @@ static uint32_t get_pte_flags(int map_type)
> }
> } else {
> switch (map_type) {
> + case ARCH_MAP_CACHED_RO:
> + case MAP_CODE:
> + return PTE_FLAGS_CACHED_RO_V4;
> + case ARCH_MAP_CACHED_RWX:
> case MAP_CACHED:
> return PTE_FLAGS_CACHED_V4;
> case MAP_UNCACHED:
> @@ -254,6 +272,8 @@ static void __arch_remap_range(void *_virt_addr, phys_addr_t phys_addr, size_t s
> pte_flags = get_pte_flags(map_type);
> pmd_flags = pte_flags_to_pmd(pte_flags);
>
> + pr_debug("%s: 0x%08x 0x%08x type %d\n", __func__, virt_addr, size, map_type);
> +
> size = PAGE_ALIGN(size);
>
> while (size) {
> @@ -535,6 +555,10 @@ void __mmu_init(bool mmu_on)
> {
> struct memory_bank *bank;
> uint32_t *ttb = get_ttb();
> + unsigned long text_start = (unsigned long)&_stext;
> + unsigned long text_size = (unsigned long)&__start_rodata - (unsigned long)&_stext;
> + unsigned long rodata_start = (unsigned long)&__start_rodata;
> + unsigned long rodata_size = (unsigned long)&__end_rodata - rodata_start;
>
> if (!request_barebox_region("ttb", (unsigned long)ttb,
> ARM_EARLY_PAGETABLE_SIZE))
> @@ -550,6 +574,8 @@ void __mmu_init(bool mmu_on)
>
> pr_debug("ttb: 0x%p\n", ttb);
>
> + vectors_init();
> +
> /*
> * Early mmu init will have mapped everything but the initial memory area
> * (excluding final OPTEE_SIZE bytes) uncached. We have now discovered
> @@ -568,10 +594,22 @@ void __mmu_init(bool mmu_on)
> pos = rsv->end + 1;
> }
>
> - remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
> + if (IS_ENABLED(CONFIG_ARM_MMU_PERMISSIONS)) {
> + if (region_overlap_size(pos, bank->start + bank->size - pos,
> + text_start, text_size)) {
> + remap_range((void *)pos, text_start - pos, MAP_CACHED);
> + remap_range((void *)text_start, text_size, MAP_CODE);
> + remap_range((void *)rodata_start, rodata_size, ARCH_MAP_CACHED_RO);
> + remap_range((void *)(rodata_start + rodata_size),
> + bank->start + bank->size - (rodata_start + rodata_size),
> + MAP_CACHED);
> + } else {
> + remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
> + }
> + } else {
> + remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
Should be ARCH_MAP_CACHED_RWX instead. We can still combine the calls by
introducing
#ifdef CONFIG_ARM_MMU_PERMISSIONS
#define ARCH_MAP_SDRAM_DEFAULT MAP_CACHED
#else
#define ARCH_MAP_SDRAM_DEFAULT ARCH_MAP_CACHED_RWX
#endif
> + }
> }
> -
> - vectors_init();
> }
>
> /*
> @@ -624,7 +662,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
> * map the bulk of the memory as sections to avoid allocating too many page tables
> * at this early stage
> */
> - early_remap_range(membase, barebox_start - membase, MAP_CACHED, false);
> + early_remap_range(membase, barebox_start - membase, ARCH_MAP_CACHED_RWX, false);
> /*
> * Map the remainder of the memory explicitly with two level page tables. This is
> * the place where barebox proper ends at. In barebox proper we'll remap the code
> @@ -634,10 +672,10 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
> * a break-before-make sequence which we can't do when barebox proper is running
> * at the location being remapped.
> */
> - early_remap_range(barebox_start, barebox_size, MAP_CACHED, true);
> + early_remap_range(barebox_start, barebox_size, ARCH_MAP_CACHED_RWX, true);
> early_remap_range(optee_start, OPTEE_SIZE, MAP_UNCACHED, false);
> early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext),
> - MAP_CACHED, false);
> + ARCH_MAP_CACHED_RWX, false);
>
> __mmu_cache_on();
> }
> diff --git a/arch/arm/lib32/barebox.lds.S b/arch/arm/lib32/barebox.lds.S
> index a52556a35696aea6f15cad5fd3f0275e8e6349b1..dbfdd2e9c110133f7fb45e06911bfc9ea9e8299c 100644
> --- a/arch/arm/lib32/barebox.lds.S
> +++ b/arch/arm/lib32/barebox.lds.S
> @@ -30,7 +30,7 @@ SECTIONS
> }
> BAREBOX_BARE_INIT_SIZE
>
> - . = ALIGN(4);
> + . = ALIGN(4096);
> __start_rodata = .;
> .rodata : {
> *(.rodata*)
> @@ -53,6 +53,7 @@ SECTIONS
> __stop_unwind_tab = .;
> }
> #endif
> + . = ALIGN(4096);
> __end_rodata = .;
> _etext = .;
> _sdata = .;
> diff --git a/include/mmu.h b/include/mmu.h
> index 84ec6c5efb3eb8020fdc98e76a3614c137a0f8e9..20855e89eda301527b8cd69d868d58fc79637f5e 100644
> --- a/include/mmu.h
> +++ b/include/mmu.h
> @@ -8,6 +8,7 @@
> #define MAP_UNCACHED 0
> #define MAP_CACHED 1
> #define MAP_FAULT 2
> +#define MAP_CODE 3
>
> /*
> * Depending on the architecture the default mapping can be
>
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
* [PATCH 6/7] ARM: MMU64: map memory for barebox proper pagewise
2025-06-13 7:58 [PATCH 0/7] ARM: Map sections RO/XN Sascha Hauer
` (4 preceding siblings ...)
2025-06-13 7:58 ` [PATCH 5/7] ARM: MMU: map text segment ro and data segments execute never Sascha Hauer
@ 2025-06-13 7:58 ` Sascha Hauer
2025-06-13 10:29 ` Ahmad Fatoum
2025-06-13 7:58 ` [PATCH 7/7] ARM: MMU64: map text segment ro and data segments execute never Sascha Hauer
2025-06-13 12:44 ` [PATCH 0/7] ARM: Map sections RO/XN Ahmad Fatoum
7 siblings, 1 reply; 17+ messages in thread
From: Sascha Hauer @ 2025-06-13 7:58 UTC (permalink / raw)
To: BAREBOX
Map the remainder of the memory explicitly with two-level page tables. This is
the region where barebox proper will end up. In barebox proper we'll remap the
code segments readonly/executable and the RO data segments readonly/execute-never.
For this we need the memory to be mapped pagewise. We can't split the
section-wise mapping up into a pagewise mapping later, because that would
require a break-before-make sequence, which we can't do while barebox proper is
running at the location being remapped.
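For reference, create_pages() below walks all four translation levels
down to level 3. With the 4 KiB granule each level translates 9 bits of
the virtual address, along the lines of this helper (a sketch; level 3
maps 4 KiB pages, level 2 maps 2 MiB blocks, level 1 maps 1 GiB, level 0
maps 512 GiB):

	static int level2shift(int level)
	{
		/* page offset is 12 bits, every level translates 9 bits */
		return 12 + 9 * (3 - level);
	}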
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_64.c | 68 +++++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 66 insertions(+), 2 deletions(-)
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index 440258fa767735a4537abd71030a5540813fc443..dc81c1da6add38b59b44a9a4e247ab51ebc2692e 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -10,6 +10,7 @@
#include <init.h>
#include <mmu.h>
#include <errno.h>
+#include <range.h>
#include <zero_page.h>
#include <linux/sizes.h>
#include <asm/memory.h>
@@ -172,6 +173,56 @@ static void create_sections(uint64_t virt, uint64_t phys, uint64_t size,
tlb_invalidate();
}
+/*
+ * like create_sections(), but this one creates pages instead of sections
+ */
+static void create_pages(uint64_t phys, uint64_t size, uint64_t attr)
+{
+ uint64_t virt = phys;
+ uint64_t *ttb = get_ttb();
+ uint64_t block_size;
+ uint64_t block_shift;
+ uint64_t *pte;
+ uint64_t idx;
+ uint64_t addr;
+ uint64_t *table;
+ uint64_t type;
+ int level;
+
+ addr = virt;
+
+ attr &= ~PTE_TYPE_MASK;
+
+ size = PAGE_ALIGN(size);
+
+ while (size) {
+ table = ttb;
+ for (level = 0; level < 4; level++) {
+ block_shift = level2shift(level);
+ idx = (addr & level2mask(level)) >> block_shift;
+ block_size = (1ULL << block_shift);
+
+ pte = table + idx;
+
+ if (level == 3) {
+ type = PTE_TYPE_PAGE;
+ *pte = phys | attr | type;
+ addr += block_size;
+ phys += block_size;
+ size -= block_size;
+ break;
+ } else {
+ split_block(pte, level);
+ }
+
+ table = get_level_table(pte);
+ }
+
+ }
+
+ tlb_invalidate();
+}
+
static size_t granule_size(int level)
{
switch (level) {
@@ -410,6 +461,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
{
int el;
u64 optee_membase;
+ unsigned long barebox_size;
unsigned long ttb = arm_mem_ttb(membase + memsize);
if (get_cr() & CR_M)
@@ -432,12 +484,24 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
early_remap_range(membase, memsize, MAP_CACHED);
- if (optee_get_membase(&optee_membase))
+ if (optee_get_membase(&optee_membase)) {
optee_membase = membase + memsize - OPTEE_SIZE;
+ barebox_size = optee_membase - barebox_start;
+
+ create_pages(optee_membase - barebox_size, barebox_size,
+ get_pte_attrs(ARCH_MAP_CACHED_RWX));
+ } else {
+ barebox_size = membase + memsize - barebox_start;
+
+ create_pages(membase + memsize - barebox_size, barebox_size,
+ get_pte_attrs(ARCH_MAP_CACHED_RWX));
+ }
+
early_remap_range(optee_membase, OPTEE_SIZE, MAP_FAULT);
- early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext), MAP_CACHED);
+ early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext),
+ MAP_CACHED);
mmu_enable();
}
--
2.39.5
* Re: [PATCH 6/7] ARM: MMU64: map memory for barebox proper pagewise
2025-06-13 7:58 ` [PATCH 6/7] ARM: MMU64: map memory for barebox proper pagewise Sascha Hauer
@ 2025-06-13 10:29 ` Ahmad Fatoum
0 siblings, 0 replies; 17+ messages in thread
From: Ahmad Fatoum @ 2025-06-13 10:29 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX
Hi,
On 6/13/25 09:58, Sascha Hauer wrote:
> Map the remainder of the memory explicitly with two level page tables. This is
> the place where barebox proper ends at. In barebox proper we'll remap the code
> segments readonly/executable and the ro segments readonly/execute never. For this
> we need the memory being mapped pagewise. We can't do the split up from section
> wise mapping to pagewise mapping later because that would require us to do
> a break-before-make sequence which we can't do when barebox proper is running
> at the location being remapped.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
> ---
> arch/arm/cpu/mmu_64.c | 68 +++++++++++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 66 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
> index 440258fa767735a4537abd71030a5540813fc443..dc81c1da6add38b59b44a9a4e247ab51ebc2692e 100644
> --- a/arch/arm/cpu/mmu_64.c
> +++ b/arch/arm/cpu/mmu_64.c
> @@ -10,6 +10,7 @@
> #include <init.h>
> #include <mmu.h>
> #include <errno.h>
> +#include <range.h>
> #include <zero_page.h>
> #include <linux/sizes.h>
> #include <asm/memory.h>
> @@ -172,6 +173,56 @@ static void create_sections(uint64_t virt, uint64_t phys, uint64_t size,
> tlb_invalidate();
> }
>
> +/*
> + * like create_sections(), but this one creates pages instead of sections
> + */
> +static void create_pages(uint64_t phys, uint64_t size, uint64_t attr)
Why not add a parameter to create_sections()? The code looks nearly
equivalent, except that we would tweak the innermost condition.
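Untested sketch of what I mean: a force_pages argument that forbids
block mappings above level 3, so only the innermost condition of a
combined create_sections() changes:

	/* inside the level walk of a combined create_sections() */
	if (level == 3 || (!force_pages && size >= block_size &&
			   IS_ALIGNED(addr, block_size) &&
			   IS_ALIGNED(phys, block_size))) {
		type = (level == 3) ? PTE_TYPE_PAGE : PTE_TYPE_BLOCK;
		*pte = phys | attr | type;
		addr += block_size;
		phys += block_size;
		size -= block_size;
		break;
	}
	split_block(pte, level);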
Looks good otherwise:
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Cheers,
Ahmad
> +{
> + uint64_t virt = phys;
> + uint64_t *ttb = get_ttb();
> + uint64_t block_size;
> + uint64_t block_shift;
> + uint64_t *pte;
> + uint64_t idx;
> + uint64_t addr;
> + uint64_t *table;
> + uint64_t type;
> + int level;
> +
> + addr = virt;
> +
> + attr &= ~PTE_TYPE_MASK;
> +
> + size = PAGE_ALIGN(size);
> +
> + while (size) {
> + table = ttb;
> + for (level = 0; level < 4; level++) {
> + block_shift = level2shift(level);
> + idx = (addr & level2mask(level)) >> block_shift;
> + block_size = (1ULL << block_shift);
> +
> + pte = table + idx;
> +
> + if (level == 3) {
> + type = PTE_TYPE_PAGE;
> + *pte = phys | attr | type;
> + addr += block_size;
> + phys += block_size;
> + size -= block_size;
> + break;
> + } else {
> + split_block(pte, level);
> + }
> +
> + table = get_level_table(pte);
> + }
> +
> + }
> +
> + tlb_invalidate();
> +}
> +
> static size_t granule_size(int level)
> {
> switch (level) {
> @@ -410,6 +461,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
> {
> int el;
> u64 optee_membase;
> + unsigned long barebox_size;
> unsigned long ttb = arm_mem_ttb(membase + memsize);
>
> if (get_cr() & CR_M)
> @@ -432,12 +484,24 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
>
> early_remap_range(membase, memsize, MAP_CACHED);
>
> - if (optee_get_membase(&optee_membase))
> + if (optee_get_membase(&optee_membase)) {
> optee_membase = membase + memsize - OPTEE_SIZE;
>
> + barebox_size = optee_membase - barebox_start;
> +
> + create_pages(optee_membase - barebox_size, barebox_size,
> + get_pte_attrs(ARCH_MAP_CACHED_RWX));
> + } else {
> + barebox_size = membase + memsize - barebox_start;
> +
> + create_pages(membase + memsize - barebox_size, barebox_size,
> + get_pte_attrs(ARCH_MAP_CACHED_RWX));
> + }
> +
> early_remap_range(optee_membase, OPTEE_SIZE, MAP_FAULT);
>
> - early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext), MAP_CACHED);
> + early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext),
> + MAP_CACHED);
>
> mmu_enable();
> }
>
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
* [PATCH 7/7] ARM: MMU64: map text segment ro and data segments execute never
2025-06-13 7:58 [PATCH 0/7] ARM: Map sections RO/XN Sascha Hauer
` (5 preceding siblings ...)
2025-06-13 7:58 ` [PATCH 6/7] ARM: MMU64: map memory for barebox proper pagewise Sascha Hauer
@ 2025-06-13 7:58 ` Sascha Hauer
2025-06-13 10:40 ` Ahmad Fatoum
2025-06-13 12:44 ` [PATCH 0/7] ARM: Map sections RO/XN Ahmad Fatoum
7 siblings, 1 reply; 17+ messages in thread
From: Sascha Hauer @ 2025-06-13 7:58 UTC (permalink / raw)
To: BAREBOX
With this, all segments in DRAM except the text segment are mapped
execute-never, so that only the barebox code can actually be executed.
Also map the readonly data segment readonly so that it can't be
modified.
The mapping is only implemented in barebox proper. The PBL still maps
the whole DRAM RWX.
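For reference, the descriptor bits involved (as used by get_pte_attrs()
in this patch): AP[2] (bit 7, PTE_BLOCK_RO) makes a mapping read-only,
while attrs_xn() marks it execute-never via PXN (bit 53)/UXN (bit 54).
A summary of the resulting mapping types (derived from the patch below):

	MAP_CACHED           cached, writable, execute-never
	MAP_CODE             cached, read-only, executable
	ARCH_MAP_CACHED_RO   cached, read-only, execute-never
	ARCH_MAP_CACHED_RWX  cached, writable, executable (PBL / early use)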
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_64.c | 31 +++++++++++++++++++++++++++----
arch/arm/include/asm/pgtable64.h | 1 +
arch/arm/lib64/barebox.lds.S | 5 +++--
3 files changed, 31 insertions(+), 6 deletions(-)
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index dc81c1da6add38b59b44a9a4e247ab51ebc2692e..7b021a3f2909f7a445d253579a16cc68f6cbd765 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -312,13 +312,19 @@ static unsigned long get_pte_attrs(unsigned flags)
{
switch (flags) {
case MAP_CACHED:
- return CACHED_MEM;
+ return attrs_xn() | CACHED_MEM;
case MAP_UNCACHED:
return attrs_xn() | UNCACHED_MEM;
case MAP_FAULT:
return 0x0;
case ARCH_MAP_WRITECOMBINE:
return attrs_xn() | MEM_ALLOC_WRITECOMBINE;
+ case MAP_CODE:
+ return CACHED_MEM | PTE_BLOCK_RO;
+ case ARCH_MAP_CACHED_RO:
+ return attrs_xn() | CACHED_MEM | PTE_BLOCK_RO;
+ case ARCH_MAP_CACHED_RWX:
+ return CACHED_MEM;
default:
return ~0UL;
}
@@ -376,6 +382,10 @@ void __mmu_init(bool mmu_on)
{
uint64_t *ttb = get_ttb();
struct memory_bank *bank;
+ unsigned long text_start = (unsigned long)&_stext;
+ unsigned long text_size = (unsigned long)&__start_rodata - (unsigned long)&_stext;
+ unsigned long rodata_start = (unsigned long)&__start_rodata;
+ unsigned long rodata_size = (unsigned long)&__end_rodata - rodata_start;
if (!request_barebox_region("ttb", (unsigned long)ttb,
ARM_EARLY_PAGETABLE_SIZE))
@@ -400,7 +410,20 @@ void __mmu_init(bool mmu_on)
pos = rsv->end + 1;
}
- remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
+ if (IS_ENABLED(CONFIG_ARM_MMU_PERMISSIONS)) {
+ if (region_overlap_size(pos, bank->start + bank->size - pos,
+ text_start, text_size)) {
+ remap_range((void *)pos, text_start - pos, MAP_CACHED);
+ remap_range((void *)text_start, text_size, MAP_CODE);
+ remap_range((void *)rodata_start, rodata_size, ARCH_MAP_CACHED_RO);
+ remap_range((void *)(rodata_start + rodata_size),
+ bank->start + bank->size - (rodata_start + rodata_size), MAP_CACHED);
+ } else {
+ remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
+ }
+ } else {
+ remap_range((void *)pos, bank->start + bank->size - pos, ARCH_MAP_CACHED_RWX);
+ }
}
/* Make zero page faulting to catch NULL pointer derefs */
@@ -482,7 +505,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
*/
init_range(2);
- early_remap_range(membase, memsize, MAP_CACHED);
+ early_remap_range(membase, memsize, ARCH_MAP_CACHED_RWX);
if (optee_get_membase(&optee_membase)) {
optee_membase = membase + memsize - OPTEE_SIZE;
@@ -501,7 +524,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
early_remap_range(optee_membase, OPTEE_SIZE, MAP_FAULT);
early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext),
- MAP_CACHED);
+ ARCH_MAP_CACHED_RWX);
mmu_enable();
}
diff --git a/arch/arm/include/asm/pgtable64.h b/arch/arm/include/asm/pgtable64.h
index b88ffe6be5254e1b9d3968573d5e9b7a37828a55..6f6ef22717b76baaf7857b12d38c6074871ce143 100644
--- a/arch/arm/include/asm/pgtable64.h
+++ b/arch/arm/include/asm/pgtable64.h
@@ -59,6 +59,7 @@
#define PTE_BLOCK_NG (1 << 11)
#define PTE_BLOCK_PXN (UL(1) << 53)
#define PTE_BLOCK_UXN (UL(1) << 54)
+#define PTE_BLOCK_RO (UL(1) << 7)
/*
* AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
diff --git a/arch/arm/lib64/barebox.lds.S b/arch/arm/lib64/barebox.lds.S
index 50e4b6f42cb8d4de92b7450e5b864b9056b61916..caddbedd610f68658b7ecf7616947ce02a84e5e8 100644
--- a/arch/arm/lib64/barebox.lds.S
+++ b/arch/arm/lib64/barebox.lds.S
@@ -28,18 +28,19 @@ SECTIONS
}
BAREBOX_BARE_INIT_SIZE
- . = ALIGN(4);
+ . = ALIGN(4096);
__start_rodata = .;
.rodata : {
*(.rodata*)
RO_DATA_SECTION
}
+ . = ALIGN(4096);
+
__end_rodata = .;
_etext = .;
_sdata = .;
- . = ALIGN(4);
.data : { *(.data*) }
.barebox_imd : { BAREBOX_IMD }
--
2.39.5
* Re: [PATCH 7/7] ARM: MMU64: map text segment ro and data segments execute never
2025-06-13 7:58 ` [PATCH 7/7] ARM: MMU64: map text segment ro and data segments execute never Sascha Hauer
@ 2025-06-13 10:40 ` Ahmad Fatoum
0 siblings, 0 replies; 17+ messages in thread
From: Ahmad Fatoum @ 2025-06-13 10:40 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX
Hi,
On 6/13/25 09:58, Sascha Hauer wrote:
> With this all segments in the DRAM except the text segment are mapped
> execute-never so that only the barebox code can actually be executed.
> Also map the read-only data segment as read-only so that it can't be
> modified.
>
> The mapping is only implemented in barebox proper. The PBL still maps
> the whole DRAM rwx.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
> ---
> arch/arm/cpu/mmu_64.c | 31 +++++++++++++++++++++++++++----
> arch/arm/include/asm/pgtable64.h | 1 +
> arch/arm/lib64/barebox.lds.S | 5 +++--
> 3 files changed, 31 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
> index dc81c1da6add38b59b44a9a4e247ab51ebc2692e..7b021a3f2909f7a445d253579a16cc68f6cbd765 100644
> --- a/arch/arm/cpu/mmu_64.c
> +++ b/arch/arm/cpu/mmu_64.c
> @@ -312,13 +312,19 @@ static unsigned long get_pte_attrs(unsigned flags)
> {
> switch (flags) {
> case MAP_CACHED:
> - return CACHED_MEM;
> + return attrs_xn() | CACHED_MEM;
> case MAP_UNCACHED:
> return attrs_xn() | UNCACHED_MEM;
> case MAP_FAULT:
> return 0x0;
> case ARCH_MAP_WRITECOMBINE:
> return attrs_xn() | MEM_ALLOC_WRITECOMBINE;
> + case MAP_CODE:
> + return CACHED_MEM | PTE_BLOCK_RO;
> + case ARCH_MAP_CACHED_RO:
> + return attrs_xn() | CACHED_MEM | PTE_BLOCK_RO;
> + case ARCH_MAP_CACHED_RWX:
> + return CACHED_MEM;
> default:
> return ~0UL;
> }
> @@ -376,6 +382,10 @@ void __mmu_init(bool mmu_on)
> {
> uint64_t *ttb = get_ttb();
> struct memory_bank *bank;
> + unsigned long text_start = (unsigned long)&_stext;
> + unsigned long text_size = (unsigned long)&__start_rodata - (unsigned long)&_stext;
> + unsigned long rodata_start = (unsigned long)&__start_rodata;
> + unsigned long rodata_size = (unsigned long)&__end_rodata - rodata_start;
>
> if (!request_barebox_region("ttb", (unsigned long)ttb,
> ARM_EARLY_PAGETABLE_SIZE))
> @@ -400,7 +410,20 @@ void __mmu_init(bool mmu_on)
> pos = rsv->end + 1;
> }
>
> - remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
> + if (IS_ENABLED(CONFIG_ARM_MMU_PERMISSIONS)) {
> + if (region_overlap_size(pos, bank->start + bank->size - pos,
> + text_start, text_size)) {
> + remap_range((void *)pos, text_start - pos, MAP_CACHED);
> + remap_range((void *)text_start, text_size, MAP_CODE);
> + remap_range((void *)rodata_start, rodata_size, ARCH_MAP_CACHED_RO);
> + remap_range((void *)(rodata_start + rodata_size),
> + bank->start + bank->size - (rodata_start + rodata_size), MAP_CACHED);
Same feedback as in mmu_32.c.
Looks good otherwise.
Thanks,
Ahmad
> + } else {
> + remap_range((void *)pos, bank->start + bank->size - pos, MAP_CACHED);
> + }
> + } else {
> + remap_range((void *)pos, bank->start + bank->size - pos, ARCH_MAP_CACHED_RWX);
> + }
> }
>
> /* Make zero page faulting to catch NULL pointer derefs */
> @@ -482,7 +505,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
> */
> init_range(2);
>
> - early_remap_range(membase, memsize, MAP_CACHED);
> + early_remap_range(membase, memsize, ARCH_MAP_CACHED_RWX);
>
> if (optee_get_membase(&optee_membase)) {
> optee_membase = membase + memsize - OPTEE_SIZE;
> @@ -501,7 +524,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
> early_remap_range(optee_membase, OPTEE_SIZE, MAP_FAULT);
>
> early_remap_range(PAGE_ALIGN_DOWN((uintptr_t)_stext), PAGE_ALIGN(_etext - _stext),
> - MAP_CACHED);
> + ARCH_MAP_CACHED_RWX);
>
> mmu_enable();
> }
> diff --git a/arch/arm/include/asm/pgtable64.h b/arch/arm/include/asm/pgtable64.h
> index b88ffe6be5254e1b9d3968573d5e9b7a37828a55..6f6ef22717b76baaf7857b12d38c6074871ce143 100644
> --- a/arch/arm/include/asm/pgtable64.h
> +++ b/arch/arm/include/asm/pgtable64.h
> @@ -59,6 +59,7 @@
> #define PTE_BLOCK_NG (1 << 11)
> #define PTE_BLOCK_PXN (UL(1) << 53)
> #define PTE_BLOCK_UXN (UL(1) << 54)
> +#define PTE_BLOCK_RO (UL(1) << 7)
>
> /*
> * AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
> diff --git a/arch/arm/lib64/barebox.lds.S b/arch/arm/lib64/barebox.lds.S
> index 50e4b6f42cb8d4de92b7450e5b864b9056b61916..caddbedd610f68658b7ecf7616947ce02a84e5e8 100644
> --- a/arch/arm/lib64/barebox.lds.S
> +++ b/arch/arm/lib64/barebox.lds.S
> @@ -28,18 +28,19 @@ SECTIONS
> }
> BAREBOX_BARE_INIT_SIZE
>
> - . = ALIGN(4);
> + . = ALIGN(4096);
> __start_rodata = .;
> .rodata : {
> *(.rodata*)
> RO_DATA_SECTION
> }
>
> + . = ALIGN(4096);
> +
> __end_rodata = .;
> _etext = .;
> _sdata = .;
>
> - . = ALIGN(4);
> .data : { *(.data*) }
>
> .barebox_imd : { BAREBOX_IMD }
>
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
* Re: [PATCH 0/7] ARM: Map sections RO/XN
2025-06-13 7:58 [PATCH 0/7] ARM: Map sections RO/XN Sascha Hauer
` (6 preceding siblings ...)
2025-06-13 7:58 ` [PATCH 7/7] ARM: MMU64: map text segment ro and data segments execute never Sascha Hauer
@ 2025-06-13 12:44 ` Ahmad Fatoum
7 siblings, 0 replies; 17+ messages in thread
From: Ahmad Fatoum @ 2025-06-13 12:44 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX
Hello Sascha,
On 6/13/25 09:58, Sascha Hauer wrote:
> So far we mapped all RAM as read-write with execute permission. This
> series hardens this a bit. The barebox text segment will be
> mapped readonly with execute permission, the RO data section as readonly
> without execute permission and the remaining RAM will lose its execute
> permission.
Very nice. Thanks for working on this.
> I tested this series on ARMv7 and ARMv8. I am not confident though that
> there are no regressions, so this new behaviour is behind a Kconfig
> option. It is default-y, but can be disabled for debugging purposes.
> Once this series has proven stable, the option can be removed.
>
> I haven't tested it on ARMv4-v6 due to the lack of hardware. I tried on
> Qemu, but the write protection did not work as expected. This should be
> resolved before merging.
I will give it a try and see what I find.
Cheers,
Ahmad
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
> ---
> Sascha Hauer (7):
> memory: request RO data section as separate region
> ARM: pass barebox base to mmu_early_enable()
> ARM: mmu: move ARCH_MAP_WRITECOMBINE to header
> ARM: MMU: map memory for barebox proper pagewise
> ARM: MMU: map text segment ro and data segments execute never
> ARM: MMU64: map memory for barebox proper pagewise
> ARM: MMU64: map text segment ro and data segments execute never
>
> arch/arm/Kconfig | 12 +++++
> arch/arm/cpu/mmu-common.h | 4 ++
> arch/arm/cpu/mmu_32.c | 86 +++++++++++++++++++++++++++------
> arch/arm/cpu/mmu_64.c | 101 +++++++++++++++++++++++++++++++++++----
> arch/arm/cpu/uncompress.c | 9 ++--
> arch/arm/include/asm/mmu.h | 2 +-
> arch/arm/include/asm/pgtable64.h | 1 +
> arch/arm/lib32/barebox.lds.S | 3 +-
> arch/arm/lib64/barebox.lds.S | 5 +-
> common/memory.c | 6 ++-
> include/mmu.h | 1 +
> 11 files changed, 198 insertions(+), 32 deletions(-)
> ---
> base-commit: 340e930140e76827cf5cac731e6afe2836e28242
> change-id: 20250613-arm-mmu-xn-ro-1a1d996496ae
>
> Best regards,
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |