mail archive of the barebox mailing list
* [PATCH] ARM: mmu64: simplify early mapping
@ 2025-08-29 13:09 Sascha Hauer
  0 siblings, 0 replies; only message in thread
From: Sascha Hauer @ 2025-08-29 13:09 UTC (permalink / raw)
  To: Barebox List; +Cc: Marco Felsch, Ahmad Fatoum

We initially map the whole DRAM using sections. Later in barebox proper
we want to change the XN/RO attributes of the area where barebox is
located. As sections are too coarse for that and we can't change to
pagewise mapping in barebox proper while executing code from that same
region, we set up a pagewise mapping in the PBL already.

For this we distinguish two cases:

1) OP-TEE is located at its standard location end_mem - OPTEE_SIZE
2) OP-TEE is located at a location specified in the OP-TEE header

For 1) we set up a pagewise mapping from where barebox starts up to
the start of OP-TEE. For 2) we set up a pagewise mapping from where
barebox starts up to the end of DRAM.

There is no reason to distinguish between these two cases: we only need
to map up to the location we originally reserved for OP-TEE, regardless
of whether OP-TEE is actually located in the reserved space or somewhere
else. The space we originally reserved for OP-TEE will be unused later
anyway.

For this reason, just always map pagewise up to the space reserved for
OP-TEE.

OPTEE_SIZE defaults to 32MiB. Mapping 32MiB pagewise requires
32MiB / 4096 = 8192 pages, which need 8192 / (4096 / sizeof(u64)) =
16 pages for page table entries. We only reserved 16 pages for page
table entries, so together with the pages needed for other mappings
this became too small. This was worked around in ea4adae23e ("ARM: mmu:
increase early page table size to 256K for now"), but at the time the
reason why this happened was not clear. This commit explains and fixes
it.

While at it, add some more comments explaining what is happening.

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 arch/arm/cpu/mmu_64.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index f32cd9a0ac..f22fcb5f8e 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -413,19 +413,21 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize, unsigned lon
 
 	early_remap_range(membase, memsize, ARCH_MAP_CACHED_RWX);
 
-	if (optee_get_membase(&optee_membase)) {
-                optee_membase = membase + memsize - OPTEE_SIZE;
+	/* Default location for OP-TEE: end of DRAM, leave OPTEE_SIZE space for it */
+	optee_membase = membase + memsize - OPTEE_SIZE;
 
-		barebox_size = optee_membase - barebox_start;
+	barebox_size = optee_membase - barebox_start;
 
-		early_remap_range(optee_membase - barebox_size, barebox_size,
-			     ARCH_MAP_CACHED_RWX | ARCH_MAP_FLAG_PAGEWISE);
-	} else {
-		barebox_size = membase + memsize - barebox_start;
+	/*
+	 * map barebox area using pagewise mapping. We want to modify the XN/RO
+	 * attributes later, but can't switch from sections to pages later when
+	 * executing code from it
+	 */
+	early_remap_range(barebox_start, barebox_size,
+		     ARCH_MAP_CACHED_RWX | ARCH_MAP_FLAG_PAGEWISE);
 
-		early_remap_range(membase + memsize - barebox_size, barebox_size,
-			     ARCH_MAP_CACHED_RWX | ARCH_MAP_FLAG_PAGEWISE);
-	}
+	/* OP-TEE might be at location specified in OP-TEE header */
+	optee_get_membase(&optee_membase);
 
 	early_remap_range(optee_membase, OPTEE_SIZE, MAP_FAULT);
 
-- 
2.47.2
