mail archive of the barebox mailing list
* [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations
@ 2026-01-08 15:49 Sascha Hauer
  2026-01-08 15:49 ` [PATCH v3 01/23] Makefile.compiler: add objcopy-option Sascha Hauer
                   ` (22 more replies)
  0 siblings, 23 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:49 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum

Until now we linked the raw barebox proper binary into the PBL, which
comes with a number of disadvantages. We rely on self-modifying code
in barebox proper (relocate_to_current_adr()) and have no initialized
bss segment (setup_c()). Also we can only mark .text and .rodata
read-only during the runtime of barebox proper.

This series overcomes this by linking an ELF image into the PBL. This
image is properly laid out, linked and initialized in the PBL. With
this, barebox proper has a proper C environment and text/rodata
protection from the start.

As a bonus, this series adds initial MMU support for RISC-V, likewise
based on loading the ELF image and configuring the MMU from the PBL.

I lost track of the review feedback for v1, partly because I asked
Claude to integrate the review feedback for me, which it did, but not
completely. Nevertheless I think this series has enough changes by now
that it deserves a second look.

What I haven't taken care of yet: we found out that neither ARM nor
RISC-V uses any absolute relocations, so we might be able to remove
support for them, make sure they are not emitted at compile time,
or properly test/fix them once we discover that they are indeed needed.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
Changes in v3:
- integrate Ahmad's feedback
- cleanup barebox proper entry
- add missing "ax" flag to ARM exception table, and let linker do the
  fixup
- Link to v2: https://lore.barebox.org/20260106-pbl-load-elf-v2-0-487bc760f045@pengutronix.de

Changes in v2:
- rebased on Ahmad's patches and with that reused the existing relocate_image()
- hopefully integrated all feedback for v1
- Link to v1: https://lore.barebox.org/20260105-pbl-load-elf-v1-0-e97853f98232@pengutronix.de

---
Sascha Hauer (23):
      Makefile.compiler: add objcopy-option
      elf: only accept images matching the native ELF_CLASS
      elf: build for PBL as well
      elf: add dynamic relocation support
      ARM: implement elf_apply_relocations() for ELF relocation support
      riscv: define generic relocate_image
      riscv: implement elf_apply_relocations() for ELF relocation support
      elf: implement elf_load_inplace()
      elf: create elf_open_binary_into()
      Makefile: add vmbarebox build target
      PBL: allow to link ELF image into PBL
      mmu: add MAP_CACHED_RO mapping type
      mmu: introduce pbl_remap_range()
      ARM: drop arm_fixup_vectors()
      ARM: linker script: create separate PT_LOAD segments for text, rodata, and data
      ARM: link ELF image into PBL
      ARM: PBL: setup MMU with proper permissions from ELF segments
      riscv: linker script: create separate PT_LOAD segments for text, rodata, and data
      riscv: link ELF image into PBL
      riscv: Allwinner D1: Drop M-Mode
      riscv: add ELF segment-based memory protection with MMU
      ARM: cleanup barebox proper entry
      riscv: cleanup barebox proper entry

 Makefile                           |  15 +-
 arch/arm/Kconfig                   |   1 +
 arch/arm/cpu/exceptions_32.S       |  54 +----
 arch/arm/cpu/interrupts_32.c       |   5 +-
 arch/arm/cpu/mmu-common.c          |  29 ++-
 arch/arm/cpu/mmu-common.h          |   3 +-
 arch/arm/cpu/mmu_32.c              |  12 +-
 arch/arm/cpu/mmu_64.c              |   9 +-
 arch/arm/cpu/no-mmu.c              |   2 -
 arch/arm/cpu/start.c               |  22 +-
 arch/arm/cpu/uncompress.c          |  40 +++-
 arch/arm/include/asm/barebox-arm.h |   4 -
 arch/arm/include/asm/elf.h         |   7 +
 arch/arm/lib32/barebox.lds.S       |  41 ++--
 arch/arm/lib32/reloc.c             |  19 ++
 arch/arm/lib64/barebox.lds.S       |  36 +--
 arch/arm/lib64/reloc.c             |  21 +-
 arch/riscv/Kconfig                 |  18 ++
 arch/riscv/Kconfig.socs            |   1 -
 arch/riscv/boot/start.c            |  19 +-
 arch/riscv/boot/uncompress.c       |  36 ++-
 arch/riscv/cpu/Makefile            |   1 +
 arch/riscv/cpu/mmu.c               | 387 ++++++++++++++++++++++++++++++++
 arch/riscv/cpu/mmu.h               | 120 ++++++++++
 arch/riscv/include/asm/asm.h       |   3 +-
 arch/riscv/include/asm/mmu.h       |  44 ++++
 arch/riscv/lib/barebox.lds.S       |  45 ++--
 arch/riscv/lib/reloc.c             |  43 ++--
 common/Makefile                    |   2 +-
 common/elf.c                       | 446 ++++++++++++++++++++++++++++++++++++-
 include/elf.h                      |  73 ++++++
 include/mmu.h                      |   6 +-
 include/pbl/mmu.h                  |  29 +++
 pbl/Kconfig                        |   9 +
 pbl/Makefile                       |   1 +
 pbl/mmu.c                          | 111 +++++++++
 scripts/Makefile.compiler          |   5 +
 37 files changed, 1508 insertions(+), 211 deletions(-)
---
base-commit: 7649b642aaa43c1b7dca44f0f9596fbc9d0f7caa
change-id: 20251227-pbl-load-elf-cb4cb0ceb7d8

Best regards,
-- 
Sascha Hauer <s.hauer@pengutronix.de>




^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v3 01/23] Makefile.compiler: add objcopy-option
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
@ 2026-01-08 15:49 ` Sascha Hauer
  2026-01-08 16:25   ` Ahmad Fatoum
  2026-01-08 15:49 ` [PATCH v3 02/23] elf: only accept images matching the native ELF_CLASS Sascha Hauer
                   ` (21 subsequent siblings)
  22 siblings, 1 reply; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:49 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5

Similar to the other *-option macros, this one tests whether objcopy
flags are supported.

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 scripts/Makefile.compiler | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/scripts/Makefile.compiler b/scripts/Makefile.compiler
index 1d34239b3bbaf49dede33f67d69ebeb511b5dc28..f2fdddd07bbef9ea90da219602086b5be591a75d 100644
--- a/scripts/Makefile.compiler
+++ b/scripts/Makefile.compiler
@@ -69,6 +69,11 @@ cc-ifversion = $(shell [ $(call cc-version, $(CC)) $(1) $(2) ] && echo $(3))
 ld-option = $(call try-run,\
 	$(CC) -x c /dev/null -c -o "$$TMPO" ; $(LD) $(1) "$$TMPO" -o "$$TMP",$(1),$(2))
 
+# objcopy-option
+# Usage: KBUILD_LDFLAGS += $(call objcopy-option,--strip-section-headers,--strip-all)
+objcopy-option = $(call try-run,\
+        $(CC) -x c /dev/null -c -o "$$TMPO"; $(OBJCOPY) $(1) "$$TMPO" "$$TMP",$(1),$(2))
+
 # Prefix -I with $(srctree) if it is not an absolute path.
 # skip if -I has no parameter
 addtree = $(if $(patsubst -I%,%,$(1)), \

-- 
2.47.3





* [PATCH v3 02/23] elf: only accept images matching the native ELF_CLASS
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
  2026-01-08 15:49 ` [PATCH v3 01/23] Makefile.compiler: add objcopy-option Sascha Hauer
@ 2026-01-08 15:49 ` Sascha Hauer
  2026-01-08 15:50 ` [PATCH v3 03/23] elf: build for PBL as well Sascha Hauer
                   ` (20 subsequent siblings)
  22 siblings, 0 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:49 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum

Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 common/elf.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/common/elf.c b/common/elf.c
index 18c541bf827e6077e64c15f62cb4abedc68cf278..a0e67a9353a12779ec841c53db7f6dba47070d8d 100644
--- a/common/elf.c
+++ b/common/elf.c
@@ -213,6 +213,9 @@ static int elf_check_image(struct elf_image *elf, void *buf)
 		return -ENOEXEC;
 	}
 
+	if (elf->class != ELF_CLASS)
+		return -EINVAL;
+
 	if (!elf_hdr_e_phnum(elf, buf)) {
 		pr_err("No phdr found.\n");
 		return -ENOEXEC;

-- 
2.47.3





* [PATCH v3 03/23] elf: build for PBL as well
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
  2026-01-08 15:49 ` [PATCH v3 01/23] Makefile.compiler: add objcopy-option Sascha Hauer
  2026-01-08 15:49 ` [PATCH v3 02/23] elf: only accept images matching the native ELF_CLASS Sascha Hauer
@ 2026-01-08 15:50 ` Sascha Hauer
  2026-01-08 15:50 ` [PATCH v3 04/23] elf: add dynamic relocation support Sascha Hauer
                   ` (19 subsequent siblings)
  22 siblings, 0 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:50 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum

We'll link barebox proper as an ELF image into the PBL later, so compile
ELF support for the PBL as well.

Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 common/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/common/Makefile b/common/Makefile
index 36dee5f7a98a7b2bbf67263ee5942520e6ce53e0..b1170ea4a801de2e7d0279edaddbf6f370a053c1 100644
--- a/common/Makefile
+++ b/common/Makefile
@@ -14,7 +14,7 @@ obj-y				+= misc.o
 obj-pbl-y			+= memsize.o
 obj-y				+= resource.o
 obj-pbl-y			+= bootsource.o
-obj-$(CONFIG_ELF)		+= elf.o
+obj-pbl-$(CONFIG_ELF)		+= elf.o
 obj-y				+= restart.o
 obj-y				+= poweroff.o
 obj-y				+= slice.o

-- 
2.47.3





* [PATCH v3 04/23] elf: add dynamic relocation support
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
                   ` (2 preceding siblings ...)
  2026-01-08 15:50 ` [PATCH v3 03/23] elf: build for PBL as well Sascha Hauer
@ 2026-01-08 15:50 ` Sascha Hauer
  2026-01-08 15:50 ` [PATCH v3 05/23] ARM: implement elf_apply_relocations() for ELF " Sascha Hauer
                   ` (18 subsequent siblings)
  22 siblings, 0 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:50 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum

Add support for applying dynamic relocations to ELF binaries. This allows
loading ET_DYN (position-independent) binaries and ET_EXEC binaries at
custom load addresses.

Key changes:
- Add elf_image.reloc_offset to track offset between vaddr and load address
- Implement elf_compute_load_offset() to calculate relocation offset
- Add elf_set_load_address() API to specify custom load address
- Implement elf_find_dynamic_segment() to locate PT_DYNAMIC
- Add elf_relocate() to apply relocations
- Provide weak default elf_apply_relocations() stub for unsupported architectures
- Add ELF dynamic section accessors

The relocation offset type is unsigned long to properly handle pointer
arithmetic and avoid casting issues.

Architecture-specific implementations should override the weak
elf_apply_relocations() function to handle their relocation types.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
---
 common/elf.c  | 287 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 include/elf.h |  64 +++++++++++++
 2 files changed, 346 insertions(+), 5 deletions(-)

diff --git a/common/elf.c b/common/elf.c
index a0e67a9353a12779ec841c53db7f6dba47070d8d..67bb931576896ffd4fab15fd02893cc797dbd871 100644
--- a/common/elf.c
+++ b/common/elf.c
@@ -21,6 +21,18 @@ struct elf_segment {
 	bool is_iomem_region;
 };
 
+static void *elf_phdr_relocated_paddr(struct elf_image *elf, void *phdr)
+{
+	void *dst;
+
+	if (elf->reloc_offset)
+		dst = (void *)(unsigned long)(elf->reloc_offset + elf_phdr_p_vaddr(elf, phdr));
+	else
+		dst = (void *)(unsigned long)elf_phdr_p_paddr(elf, phdr);
+
+	return dst;
+}
+
 static int elf_request_region(struct elf_image *elf, resource_size_t start,
 			      resource_size_t size, void *phdr)
 {
@@ -65,9 +77,61 @@ static void elf_release_regions(struct elf_image *elf)
 	}
 }
 
+static int elf_compute_load_offset(struct elf_image *elf)
+{
+	void *buf = elf->hdr_buf;
+	void *phdr = buf + elf_hdr_e_phoff(elf, buf);
+	u64 min_vaddr = (u64)-1;
+	u64 min_paddr = (u64)-1;
+	int i;
+
+	/* Find lowest p_vaddr and p_paddr in PT_LOAD segments */
+	for (i = 0; i < elf_hdr_e_phnum(elf, buf); i++) {
+		if (elf_phdr_p_type(elf, phdr) == PT_LOAD) {
+			u64 vaddr = elf_phdr_p_vaddr(elf, phdr);
+			u64 paddr = elf_phdr_p_paddr(elf, phdr);
+
+			if (vaddr < min_vaddr)
+				min_vaddr = vaddr;
+			if (paddr < min_paddr)
+				min_paddr = paddr;
+		}
+		phdr += elf_size_of_phdr(elf);
+	}
+
+	/*
+	 * Determine base load address:
+	 * 1. If user specified load_address, use it
+	 * 2. Otherwise for ET_EXEC, use NULL (segments use p_paddr directly)
+	 * 3. For ET_DYN, use lowest p_paddr
+	 */
+	if (elf->load_address)
+		elf->base_load_addr = elf->load_address;
+	else if (elf->type == ET_EXEC)
+		elf->base_load_addr = NULL;
+	else
+		elf->base_load_addr = (void *)(phys_addr_t)min_paddr;
+
+	/*
+	 * Calculate relocation offset:
+	 * - For ET_EXEC with no custom load address: no offset needed
+	 * - Otherwise: offset = base_load_addr - lowest_vaddr
+	 */
+	if (elf->type == ET_EXEC && !elf->load_address)
+		elf->reloc_offset = 0;
+	else
+		elf->reloc_offset = ((unsigned long)elf->base_load_addr - min_vaddr);
+
+	pr_debug("ELF load: type=%s, base=%p, offset=%08lx\n",
+		 elf->type == ET_EXEC ? "ET_EXEC" : "ET_DYN",
+		 elf->base_load_addr, elf->reloc_offset);
+
+	return 0;
+}
+
 static int request_elf_segment(struct elf_image *elf, void *phdr)
 {
-	void *dst = (void *) (phys_addr_t) elf_phdr_p_paddr(elf, phdr);
+	void *dst;
 	int ret;
 	u64 p_memsz = elf_phdr_p_memsz(elf, phdr);
 
@@ -78,6 +142,15 @@ static int request_elf_segment(struct elf_image *elf, void *phdr)
 	if (!p_memsz)
 		return 0;
 
+	/*
+	 * Calculate destination address:
+	 * - If reloc_offset is set (custom load address or ET_DYN):
+	 *   dst = reloc_offset + p_vaddr
+	 * - Otherwise (ET_EXEC, no custom address):
+	 *   dst = p_paddr (original behavior)
+	 */
+	dst = elf_phdr_relocated_paddr(elf, phdr);
+
 	if (dst < elf->low_addr)
 		elf->low_addr = dst;
 	if (dst + p_memsz > elf->high_addr)
@@ -129,7 +202,8 @@ static int load_elf_to_memory(struct elf_image *elf)
 		p_offset = elf_phdr_p_offset(elf, r->phdr);
 		p_filesz = elf_phdr_p_filesz(elf, r->phdr);
 		p_memsz = elf_phdr_p_memsz(elf, r->phdr);
-		dst = (void *) (phys_addr_t) elf_phdr_p_paddr(elf, r->phdr);
+
+		dst = elf_phdr_relocated_paddr(elf, r->phdr);
 
 		pr_debug("Loading phdr offset 0x%llx to 0x%p (%llu bytes)\n",
 			 p_offset, dst, p_filesz);
@@ -173,6 +247,11 @@ static int load_elf_image_segments(struct elf_image *elf)
 	if (!list_empty(&elf->list))
 		return -EINVAL;
 
+	/* Calculate load offset for ET_DYN */
+	ret = elf_compute_load_offset(elf);
+	if (ret)
+		return ret;
+
 	for (i = 0; i < elf_hdr_e_phnum(elf, buf) ; ++i) {
 		ret = request_elf_segment(elf, phdr);
 		if (ret)
@@ -201,6 +280,8 @@ static int load_elf_image_segments(struct elf_image *elf)
 
 static int elf_check_image(struct elf_image *elf, void *buf)
 {
+	u16 e_type;
+
 	if (memcmp(buf, ELFMAG, SELFMAG)) {
 		pr_err("ELF magic not found.\n");
 		return -EINVAL;
@@ -208,14 +289,17 @@ static int elf_check_image(struct elf_image *elf, void *buf)
 
 	elf->class = ((char *) buf)[EI_CLASS];
 
-	if (elf_hdr_e_type(elf, buf) != ET_EXEC) {
-		pr_err("Non EXEC ELF image.\n");
+	e_type = elf_hdr_e_type(elf, buf);
+	if (e_type != ET_EXEC && e_type != ET_DYN) {
+		pr_err("Unsupported ELF type: %u (only ET_EXEC and ET_DYN supported)\n", e_type);
 		return -ENOEXEC;
 	}
 
 	if (elf->class != ELF_CLASS)
 		return -EINVAL;
 
+	elf->type = e_type;
+
 	if (!elf_hdr_e_phnum(elf, buf)) {
 		pr_err("No phdr found.\n");
 		return -ENOEXEC;
@@ -341,9 +425,202 @@ struct elf_image *elf_open(const char *filename)
 	return elf_check_init(filename);
 }
 
+void elf_set_load_address(struct elf_image *elf, void *addr)
+{
+	elf->load_address = addr;
+}
+
+static void *elf_find_dynamic_segment(struct elf_image *elf)
+{
+	void *buf = elf->hdr_buf;
+	void *phdr = buf + elf_hdr_e_phoff(elf, buf);
+	int i;
+
+	for (i = 0; i < elf_hdr_e_phnum(elf, buf); i++) {
+		if (elf_phdr_p_type(elf, phdr) == PT_DYNAMIC)
+			return elf_phdr_relocated_paddr(elf, phdr);
+
+		phdr += elf_size_of_phdr(elf);
+	}
+
+	return NULL;  /* No PT_DYNAMIC segment */
+}
+
+/**
+ * elf_parse_dynamic_section - Parse the dynamic section and extract relocation info
+ * @elf: ELF image structure
+ * @dyn_seg: Pointer to the PT_DYNAMIC segment
+ * @rel_out: Output pointer to the relocation table (either REL or RELA)
+ * @relsz_out: Output size of the relocation table in bytes
+ * @is_rela: flag indicating RELA (true) vs REL (false) format is expected
+ *
+ * This is a generic function that works for both 32-bit and 64-bit ELF files,
+ * and handles both REL and RELA relocation formats.
+ *
+ * Returns: 0 on success, -EINVAL on error
+ */
+static int elf_parse_dynamic_section(struct elf_image *elf, const void *dyn_seg,
+				     void **rel_out, u64 *relsz_out, void **symtab,
+				     bool is_rela)
+{
+	const void *dyn = dyn_seg;
+	void *rel = NULL, *rela = NULL;
+	u64 relsz = 0, relasz = 0;
+	u64 relent = 0, relaent = 0;
+	phys_addr_t base = (phys_addr_t)elf->reloc_offset;
+	size_t expected_rel_size, expected_rela_size;
+
+	/* Calculate expected sizes based on ELF class */
+	if (ELF_CLASS == ELFCLASS32) {
+		expected_rel_size = sizeof(Elf32_Rel);
+		expected_rela_size = sizeof(Elf32_Rela);
+	} else {
+		expected_rel_size = sizeof(Elf64_Rel);
+		expected_rela_size = sizeof(Elf64_Rela);
+	}
+
+	/* Iterate through dynamic entries until DT_NULL */
+	while (elf_dyn_d_tag(elf, dyn) != DT_NULL) {
+		unsigned long tag = elf_dyn_d_tag(elf, dyn);
+
+		switch (tag) {
+		case DT_REL:
+			/* REL table address - needs to be adjusted by load offset */
+			rel = (void *)(unsigned long)(base + elf_dyn_d_ptr(elf, dyn));
+			break;
+		case DT_RELSZ:
+			relsz = elf_dyn_d_val(elf, dyn);
+			break;
+		case DT_RELENT:
+			relent = elf_dyn_d_val(elf, dyn);
+			break;
+		case DT_RELA:
+			/* RELA table address - needs to be adjusted by load offset */
+			rela = (void *)(unsigned long)(base + elf_dyn_d_ptr(elf, dyn));
+			break;
+		case DT_RELASZ:
+			relasz = elf_dyn_d_val(elf, dyn);
+			break;
+		case DT_RELAENT:
+			relaent = elf_dyn_d_val(elf, dyn);
+			break;
+		case DT_SYMTAB:
+			*symtab = (void *)(unsigned long)(base + elf_dyn_d_val(elf, dyn));
+			break;
+		default:
+			break;
+		}
+
+		dyn += elf_size_of_dyn(elf);
+	}
+
+	/* Check that we found exactly one relocation type */
+	if (rel && rela) {
+		pr_err("ELF has both REL and RELA relocations\n");
+		return -EINVAL;
+	}
+
+	if (rel && !is_rela) {
+		/* REL relocations */
+		if (!relsz || relent != expected_rel_size) {
+			pr_debug("No REL relocations or invalid relocation info\n");
+			return -EINVAL;
+		}
+		*rel_out = rel;
+		*relsz_out = relsz;
+
+		return 0;
+	} else if (rela && is_rela) {
+		/* RELA relocations */
+		if (!relasz || relaent != expected_rela_size) {
+			pr_debug("No RELA relocations or invalid relocation info\n");
+			return -EINVAL;
+		}
+		*rel_out = rela;
+		*relsz_out = relasz;
+
+		return 0;
+	}
+
+	pr_debug("No relocations found in dynamic section\n");
+
+	return -EINVAL;
+}
+
+int elf_parse_dynamic_section_rel(struct elf_image *elf, const void *dyn_seg,
+				     void **rel_out, u64 *relsz_out, void **symtab)
+{
+	return elf_parse_dynamic_section(elf, dyn_seg, rel_out, relsz_out, symtab,
+					 false);
+}
+
+int elf_parse_dynamic_section_rela(struct elf_image *elf, const void *dyn_seg,
+				     void **rel_out, u64 *relsz_out, void **symtab)
+{
+	return elf_parse_dynamic_section(elf, dyn_seg, rel_out, relsz_out, symtab,
+					 true);
+}
+
+/*
+ * Weak default implementation for architectures that don't support
+ * ELF relocations yet. Can be overridden by arch-specific implementation.
+ */
+int __weak elf_apply_relocations(struct elf_image *elf, const void *dyn_seg)
+{
+	pr_warn("ELF relocations not supported for this architecture\n");
+	return -ENOSYS;
+}
+
+static int elf_relocate(struct elf_image *elf)
+{
+	void *dyn_seg;
+
+	/*
+	 * Relocations needed if:
+	 * - ET_DYN (position-independent), OR
+	 * - ET_EXEC with custom load address
+	 */
+	if (elf->type == ET_EXEC && !elf->load_address)
+		return 0;
+
+	/* Find PT_DYNAMIC segment */
+	dyn_seg = elf_find_dynamic_segment(elf);
+	if (!dyn_seg) {
+		/*
+		 * No PT_DYNAMIC segment found.
+		 * For ET_DYN this is unusual but legal.
+		 * For ET_EXEC with custom load address, this means no relocations
+		 * can be applied - warn the user.
+		 */
+		if (elf->type == ET_EXEC && elf->load_address) {
+			pr_warn("ET_EXEC loaded at custom address but no PT_DYNAMIC segment - "
+				"relocations cannot be applied\n");
+		} else {
+			pr_debug("No PT_DYNAMIC segment found\n");
+		}
+		return 0;
+	}
+
+	/* Call architecture-specific relocation handler */
+	return elf_apply_relocations(elf, dyn_seg);
+}
+
 int elf_load(struct elf_image *elf)
 {
-	return load_elf_image_segments(elf);
+	int ret;
+
+	ret = load_elf_image_segments(elf);
+	if (ret)
+		return ret;
+
+	/* Apply relocations if needed */
+	ret = elf_relocate(elf);
+	if (ret) {
+		pr_err("Relocation failed: %d\n", ret);
+		return ret;
+	}
+
+	return 0;
 }
 
 void elf_close(struct elf_image *elf)
diff --git a/include/elf.h b/include/elf.h
index 994db642b0789942530f6ef7fdffdd2218afd7b6..c0b318f19fcf8adf8c3b83961456023307abf113 100644
--- a/include/elf.h
+++ b/include/elf.h
@@ -394,11 +394,15 @@ extern Elf64_Dyn _DYNAMIC [];
 struct elf_image {
 	struct list_head list;
 	u8 class;
+	u16 type;		/* ET_EXEC or ET_DYN */
 	u64 entry;
 	void *low_addr;
 	void *high_addr;
 	void *hdr_buf;
 	const char *filename;
+	void *load_address;	/* User-specified load address (NULL = use p_paddr) */
+	void *base_load_addr;	/* Calculated base address for ET_DYN */
+	unsigned long reloc_offset;	/* Offset between p_vaddr and actual load address */
 };
 
 static inline size_t elf_get_mem_size(struct elf_image *elf)
@@ -411,6 +415,31 @@ struct elf_image *elf_open(const char *filename);
 void elf_close(struct elf_image *elf);
 int elf_load(struct elf_image *elf);
 
+/*
+ * Set the load address for the ELF file.
+ * Must be called before elf_load().
+ * If not set, ET_EXEC uses p_paddr, ET_DYN uses lowest p_paddr.
+ */
+void elf_set_load_address(struct elf_image *elf, void *addr);
+
+/*
+ * Architecture-specific relocation handler.
+ * Returns 0 on success, -ENOSYS if architecture doesn't support relocations,
+ * other negative error codes on failure.
+ */
+int elf_apply_relocations(struct elf_image *elf, const void *dyn_seg);
+
+/*
+ * Parse the dynamic section and extract relocation information.
+ * This is a generic function that works for both 32-bit and 64-bit ELF files,
+ * and handles both REL and RELA relocation formats.
+ * Returns 0 on success, -EINVAL on error.
+ */
+int elf_parse_dynamic_section_rel(struct elf_image *elf, const void *dyn_seg,
+				  void **rel_out, u64 *relsz_out, void **symtab);
+int elf_parse_dynamic_section_rela(struct elf_image *elf, const void *dyn_seg,
+				   void **rel_out, u64 *relsz_out, void **symtab);
+
 #define ELF_GET_FIELD(__s, __field, __type) \
 static inline __type elf_##__s##_##__field(struct elf_image *elf, void *arg) { \
 	if (elf->class == ELFCLASS32) \
@@ -426,10 +455,12 @@ ELF_GET_FIELD(hdr, e_phentsize, u16)
 ELF_GET_FIELD(hdr, e_type, u16)
 ELF_GET_FIELD(hdr, e_machine, u16)
 ELF_GET_FIELD(phdr, p_paddr, u64)
+ELF_GET_FIELD(phdr, p_vaddr, u64)
 ELF_GET_FIELD(phdr, p_filesz, u64)
 ELF_GET_FIELD(phdr, p_memsz, u64)
 ELF_GET_FIELD(phdr, p_type, u32)
 ELF_GET_FIELD(phdr, p_offset, u64)
+ELF_GET_FIELD(phdr, p_flags, u32)
 
 static inline unsigned long elf_size_of_phdr(struct elf_image *elf)
 {
@@ -439,4 +470,37 @@ static inline unsigned long elf_size_of_phdr(struct elf_image *elf)
 		return sizeof(Elf64_Phdr);
 }
 
+/* Dynamic section accessors */
+static inline s64 elf_dyn_d_tag(struct elf_image *elf, const void *arg)
+{
+	if (elf->class == ELFCLASS32)
+		return (s64)((Elf32_Dyn *)arg)->d_tag;
+	else
+		return (s64)((Elf64_Dyn *)arg)->d_tag;
+}
+
+static inline u64 elf_dyn_d_val(struct elf_image *elf, const void *arg)
+{
+	if (elf->class == ELFCLASS32)
+		return (u64)((Elf32_Dyn *)arg)->d_un.d_val;
+	else
+		return (u64)((Elf64_Dyn *)arg)->d_un.d_val;
+}
+
+static inline u64 elf_dyn_d_ptr(struct elf_image *elf, const void *arg)
+{
+	if (elf->class == ELFCLASS32)
+		return (u64)((Elf32_Dyn *)arg)->d_un.d_ptr;
+	else
+		return (u64)((Elf64_Dyn *)arg)->d_un.d_ptr;
+}
+
+static inline unsigned long elf_size_of_dyn(struct elf_image *elf)
+{
+	if (elf->class == ELFCLASS32)
+		return sizeof(Elf32_Dyn);
+	else
+		return sizeof(Elf64_Dyn);
+}
+
 #endif /* _LINUX_ELF_H */

-- 
2.47.3





* [PATCH v3 05/23] ARM: implement elf_apply_relocations() for ELF relocation support
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
                   ` (3 preceding siblings ...)
  2026-01-08 15:50 ` [PATCH v3 04/23] elf: add dynamic relocation support Sascha Hauer
@ 2026-01-08 15:50 ` Sascha Hauer
  2026-01-08 15:50 ` [PATCH v3 06/23] riscv: define generic relocate_image Sascha Hauer
                   ` (17 subsequent siblings)
  22 siblings, 0 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:50 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum

Implement architecture-specific ELF relocation handlers for ARM32 and ARM64.
The implementation reuses the existing relocate_image().

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 arch/arm/include/asm/elf.h |  7 +++++++
 arch/arm/lib32/reloc.c     | 19 +++++++++++++++++++
 arch/arm/lib64/reloc.c     | 21 +++++++++++++++++++--
 3 files changed, 45 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/elf.h b/arch/arm/include/asm/elf.h
index 0b4704a4a5615e648f67705ad9daf8dea9f41bab..630c85f2b421cba7e37ea7147c047be948ecdc7e 100644
--- a/arch/arm/include/asm/elf.h
+++ b/arch/arm/include/asm/elf.h
@@ -36,6 +36,13 @@ typedef struct user_fp elf_fpregset_t;
 #define R_ARM_THM_CALL		10
 #define R_ARM_THM_JUMP24	30
 
+/* Additional relocation types for dynamic linking */
+#define R_ARM_RELATIVE		23
+
+#define R_AARCH64_NONE		0
+#define R_AARCH64_ABS64		257
+#define R_AARCH64_RELATIVE	1027
+
 /*
  * These are used to set parameters in the core dumps.
  */
diff --git a/arch/arm/lib32/reloc.c b/arch/arm/lib32/reloc.c
index 378ba95b2ffd455436e704a803f5448fc2c2b18c..edd3d7eb486649d70a3ce5a77c25ad52ccd12bc1 100644
--- a/arch/arm/lib32/reloc.c
+++ b/arch/arm/lib32/reloc.c
@@ -54,3 +54,22 @@ void __prereloc relocate_image(unsigned long offset,
 	if (dynend)
 		__memset(dynsym, 0, (unsigned long)dynend - (unsigned long)dynsym);
 }
+
+/*
+ * Apply ARM32 ELF relocations
+ */
+int elf_apply_relocations(struct elf_image *elf, const void *dyn_seg)
+{
+	void *rel_ptr = NULL, *symtab = NULL;
+	u64 relsz;
+	phys_addr_t base = (phys_addr_t)elf->reloc_offset;
+	int ret;
+
+	ret = elf_parse_dynamic_section_rel(elf, dyn_seg, &rel_ptr, &relsz, &symtab);
+	if (ret)
+		return ret;
+
+	relocate_image(base, rel_ptr, rel_ptr + relsz, symtab, NULL);
+
+	return 0;
+}
diff --git a/arch/arm/lib64/reloc.c b/arch/arm/lib64/reloc.c
index 2288f9e2e336887c5edfbf6b080f487394754113..b4981578747007c84eca643db327807ce15ba7c9 100644
--- a/arch/arm/lib64/reloc.c
+++ b/arch/arm/lib64/reloc.c
@@ -8,8 +8,6 @@
 #include <debug_ll.h>
 #include <asm/reloc.h>
 
-#define R_AARCH64_RELATIVE 1027
-
 /*
  * relocate binary to the currently running address
  */
@@ -45,3 +43,22 @@ void __prereloc relocate_image(unsigned long offset,
 		dstart += sizeof(*rel);
 	}
 }
+
+/*
+ * Apply ARM64 ELF relocations
+ */
+int elf_apply_relocations(struct elf_image *elf, const void *dyn_seg)
+{
+	void *rela_ptr = NULL, *symtab = NULL;
+	u64 relasz;
+	phys_addr_t base = (phys_addr_t)elf->reloc_offset;
+	int ret;
+
+	ret = elf_parse_dynamic_section_rela(elf, dyn_seg, &rela_ptr, &relasz, &symtab);
+	if (ret)
+		return ret;
+
+	relocate_image(base, rela_ptr, rela_ptr + relasz, symtab, NULL);
+
+	return 0;
+}

-- 
2.47.3





* [PATCH v3 06/23] riscv: define generic relocate_image
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
                   ` (4 preceding siblings ...)
  2026-01-08 15:50 ` [PATCH v3 05/23] ARM: implement elf_apply_relocations() for ELF " Sascha Hauer
@ 2026-01-08 15:50 ` Sascha Hauer
  2026-01-08 15:50 ` [PATCH v3 07/23] riscv: implement elf_apply_relocations() for ELF relocation support Sascha Hauer
                   ` (16 subsequent siblings)
  22 siblings, 0 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:50 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum

For use by the ELF loader in the PBL to relocate barebox proper, factor
out a generic relocate_image() capable of relocating barebox and
implement relocate_to_current_adr() in terms of it.

Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 arch/riscv/lib/reloc.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/arch/riscv/lib/reloc.c b/arch/riscv/lib/reloc.c
index 0c1ec8b4880227d16ab5e7b580244f1db2e967ec..18b13a7013cff4032c12b999470f265dbda13c51 100644
--- a/arch/riscv/lib/reloc.c
+++ b/arch/riscv/lib/reloc.c
@@ -30,26 +30,15 @@ void sync_caches_for_execution(void)
 	local_flush_icache_all();
 }
 
-void relocate_to_current_adr(void)
+static void relocate_image(unsigned long offset,
+			   void *dstart, void *dend,
+			   long *dynsym, long *dynend)
 {
-	unsigned long offset;
-	unsigned long *dynsym;
-	void *dstart, *dend;
 	Elf_Rela *rela;
 
-	/* Get offset between linked address and runtime address */
-	offset = get_runtime_offset();
 	if (!offset)
 		return;
 
-	/*
-	 * We have yet to relocate, so using runtime_address
-	 * to compute the relocated address
-	 */
-	dstart = runtime_address(__rel_dyn_start);
-	dend = runtime_address(__rel_dyn_end);
-	dynsym = runtime_address(__dynsym_start);
-
 	for (rela = dstart; (void *)rela < dend; rela++) {
 		unsigned long *fixup;
 
@@ -74,5 +63,15 @@ void relocate_to_current_adr(void)
 		}
 	}
 
+}
+
+void relocate_to_current_adr(void)
+{
+	relocate_image(get_runtime_offset(),
+		       runtime_address(__rel_dyn_start),
+		       runtime_address(__rel_dyn_end),
+		       runtime_address(__dynsym_start),
+		       NULL);
+
 	sync_caches_for_execution();
 }

-- 
2.47.3





* [PATCH v3 07/23] riscv: implement elf_apply_relocations() for ELF relocation support
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
                   ` (5 preceding siblings ...)
  2026-01-08 15:50 ` [PATCH v3 06/23] riscv: define generic relocate_image Sascha Hauer
@ 2026-01-08 15:50 ` Sascha Hauer
  2026-01-08 15:50 ` [PATCH v3 08/23] elf: implement elf_load_inplace() Sascha Hauer
                   ` (15 subsequent siblings)
  22 siblings, 0 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:50 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum

Add architecture-specific ELF relocation support for RISC-V,
enabling dynamic relocation of position-independent ELF binaries.
The implementation reuses the existing relocate_image().

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 arch/riscv/lib/reloc.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/riscv/lib/reloc.c b/arch/riscv/lib/reloc.c
index 18b13a7013cff4032c12b999470f265dbda13c51..71d59d4ab62ec07e23bc2ae3d9932f8d2baa7a81 100644
--- a/arch/riscv/lib/reloc.c
+++ b/arch/riscv/lib/reloc.c
@@ -75,3 +75,19 @@ void relocate_to_current_adr(void)
 
 	sync_caches_for_execution();
 }
+
+int elf_apply_relocations(struct elf_image *elf, const void *dyn_seg)
+{
+	void *rela_ptr = NULL, *symtab = NULL;
+	u64 relasz;
+	phys_addr_t base = (phys_addr_t)elf->reloc_offset;
+	int ret;
+
+	ret = elf_parse_dynamic_section_rela(elf, dyn_seg, &rela_ptr, &relasz, &symtab);
+	if (ret)
+		return ret;
+
+	relocate_image(base, rela_ptr, rela_ptr + relasz, symtab, NULL);
+
+	return 0;
+}

-- 
2.47.3




^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v3 08/23] elf: implement elf_load_inplace()
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
                   ` (6 preceding siblings ...)
  2026-01-08 15:50 ` [PATCH v3 07/23] riscv: implement elf_apply_relocations() for ELF relocation support Sascha Hauer
@ 2026-01-08 15:50 ` Sascha Hauer
  2026-01-08 15:50 ` [PATCH v3 09/23] elf: create elf_open_binary_into() Sascha Hauer
                   ` (14 subsequent siblings)
  22 siblings, 0 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:50 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum

Implement elf_load_inplace() to apply dynamic relocations to an ELF binary
that is already loaded in memory. Unlike elf_load(), this function does not
allocate memory or copy segments - it only modifies the existing image in
place.

This is useful for self-relocating loaders or when the ELF has been loaded
by external means (e.g., firmware or another bootloader).

For ET_DYN (position-independent) binaries, the relocation offset is
calculated relative to the first executable PT_LOAD segment (.text section),
taking into account the difference between the segment's virtual address
and its file offset.

The entry point is also adjusted to point to the relocated image.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Acked-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 common/elf.c  | 130 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 include/elf.h |   8 ++++
 2 files changed, 138 insertions(+)

diff --git a/common/elf.c b/common/elf.c
index 67bb931576896ffd4fab15fd02893cc797dbd871..cba11640e52204add6cf601335b1904bd9188be0 100644
--- a/common/elf.c
+++ b/common/elf.c
@@ -634,3 +634,133 @@ void elf_close(struct elf_image *elf)
 
 	free(elf);
 }
+
+/**
+ * elf_load_inplace() - Apply dynamic relocations to an ELF binary in place
+ * @elf: ELF image previously opened with elf_open_binary()
+ *
+ * This function applies dynamic relocations to an ELF binary that is already
+ * loaded at its target address in memory. Unlike elf_load(), this does not
+ * allocate memory or copy segments - it only modifies the existing image.
+ *
+ * This is useful for self-relocating loaders or when the ELF has been loaded
+ * by external means (e.g., loaded by firmware or another bootloader).
+ *
+ * The ELF image must have been previously opened with elf_open_binary().
+ *
+ * For ET_DYN (position-independent) binaries, the relocation offset is
+ * calculated relative to the first executable PT_LOAD segment (.text section).
+ *
+ * For ET_EXEC binaries, no relocation is applied as they are expected to
+ * be at their link-time addresses.
+ *
+ * Returns: 0 on success, negative error code on failure
+ */
+int elf_load_inplace(struct elf_image *elf)
+{
+	const void *dyn_seg;
+	void *buf, *phdr;
+	void *elf_buf;
+	int i, ret;
+
+	buf = elf->hdr_buf;
+	elf_buf = elf->hdr_buf;
+
+	/*
+	 * First pass: Clear BSS segments (p_memsz > p_filesz).
+	 * This must be done before relocations as uninitialized data
+	 * must be zeroed per C standard.
+	 */
+	phdr = buf + elf_hdr_e_phoff(elf, buf);
+	for (i = 0; i < elf_hdr_e_phnum(elf, buf); i++) {
+		if (elf_phdr_p_type(elf, phdr) == PT_LOAD) {
+			u64 p_offset = elf_phdr_p_offset(elf, phdr);
+			u64 p_filesz = elf_phdr_p_filesz(elf, phdr);
+			u64 p_memsz = elf_phdr_p_memsz(elf, phdr);
+
+			/* Clear BSS (uninitialized data) */
+			if (p_filesz < p_memsz) {
+				void *bss_start = elf_buf + p_offset + p_filesz;
+				size_t bss_size = p_memsz - p_filesz;
+				memset(bss_start, 0x00, bss_size);
+			}
+		}
+		phdr += elf_size_of_phdr(elf);
+	}
+
+	/*
+	 * Calculate relocation offset for the in-place binary.
+	 * For ET_DYN, we need to find the first PT_LOAD segment
+	 * and use it as the relocation base.
+	 */
+	if (elf->type == ET_DYN) {
+		u64 text_vaddr = 0;
+		u64 text_offset = 0;
+		bool found_text = false;
+
+		/* Find first PT_LOAD segment */
+		phdr = buf + elf_hdr_e_phoff(elf, buf);
+		for (i = 0; i < elf_hdr_e_phnum(elf, buf); i++) {
+			if (elf_phdr_p_type(elf, phdr) == PT_LOAD) {
+				text_vaddr = elf_phdr_p_vaddr(elf, phdr);
+				text_offset = elf_phdr_p_offset(elf, phdr);
+				found_text = true;
+				break;
+			}
+			phdr += elf_size_of_phdr(elf);
+		}
+
+		if (!found_text) {
+			pr_err("No PT_LOAD segment found\n");
+			ret = -EINVAL;
+			goto out;
+		}
+
+		/*
+		 * Calculate relocation offset relative to .text section:
+		 * - .text is at file offset text_offset, so in memory at: elf_buf + text_offset
+		 * - .text has virtual address text_vaddr
+		 * - reloc_offset = (actual .text address) - (virtual .text address)
+		 */
+		elf->reloc_offset = ((unsigned long)elf_buf + text_offset) - text_vaddr;
+
+		pr_debug("In-place ELF relocation: text_vaddr=0x%llx, text_offset=0x%llx, "
+			 "load_addr=%p, offset=0x%08lx\n",
+			 text_vaddr, text_offset, elf_buf, elf->reloc_offset);
+
+		/* Adjust entry point to point to relocated image */
+		elf->entry += elf->reloc_offset;
+	} else {
+		/*
+		 * ET_EXEC binaries are at their link-time addresses,
+		 * no relocation needed
+		 */
+		elf->reloc_offset = 0;
+	}
+
+	/* Find PT_DYNAMIC segment */
+	dyn_seg = elf_find_dynamic_segment(elf);
+	if (!dyn_seg) {
+		/*
+		 * No PT_DYNAMIC segment found.
+		 * This is fine for statically-linked binaries or
+		 * binaries without relocations.
+		 */
+		pr_debug("No PT_DYNAMIC segment found\n");
+		ret = 0;
+		goto out;
+	}
+
+	/* Apply architecture-specific relocations */
+	ret = elf_apply_relocations(elf, dyn_seg);
+	if (ret) {
+		pr_err("In-place relocation failed: %d\n", ret);
+		goto out;
+	}
+
+	pr_debug("In-place ELF relocation completed successfully\n");
+	return 0;
+
+out:
+	return ret;
+}
diff --git a/include/elf.h b/include/elf.h
index c0b318f19fcf8adf8c3b83961456023307abf113..236e38fb29887315e01a165795e1bb861f738054 100644
--- a/include/elf.h
+++ b/include/elf.h
@@ -422,6 +422,14 @@ int elf_load(struct elf_image *elf);
  */
 void elf_set_load_address(struct elf_image *elf, void *addr);
 
+/*
+ * Apply dynamic relocations to an ELF binary already loaded in memory.
+ * This modifies the ELF image in place without allocating new memory.
+ * Useful for self-relocating loaders or externally loaded binaries.
+ * The elf parameter must have been previously opened with elf_open_binary().
+ */
+int elf_load_inplace(struct elf_image *elf);
+
 /*
  * Architecture-specific relocation handler.
  * Returns 0 on success, -ENOSYS if architecture doesn't support relocations,

-- 
2.47.3




^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v3 09/23] elf: create elf_open_binary_into()
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
                   ` (7 preceding siblings ...)
  2026-01-08 15:50 ` [PATCH v3 08/23] elf: implement elf_load_inplace() Sascha Hauer
@ 2026-01-08 15:50 ` Sascha Hauer
  2026-01-08 15:50 ` [PATCH v3 10/23] Makefile: add vmbarebox build target Sascha Hauer
                   ` (13 subsequent siblings)
  22 siblings, 0 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:50 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum

elf_open_binary() returns a dynamically allocated struct elf_image *. We
do not have malloc in the PBL, so for better PBL support create
elf_open_binary_into() which takes a struct elf_image * as argument.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 common/elf.c  | 26 +++++++++++++++++++-------
 include/elf.h |  1 +
 2 files changed, 20 insertions(+), 7 deletions(-)

diff --git a/common/elf.c b/common/elf.c
index cba11640e52204add6cf601335b1904bd9188be0..e2e6c821cc6a673ee08d82a82176f39770e5eb42 100644
--- a/common/elf.c
+++ b/common/elf.c
@@ -316,6 +316,23 @@ static void elf_init_struct(struct elf_image *elf)
 	elf->filename = NULL;
 }
 
+int elf_open_binary_into(struct elf_image *elf, void *buf)
+{
+	int ret;
+
+	memset(elf, 0, sizeof(*elf));
+	elf_init_struct(elf);
+
+	elf->hdr_buf = buf;
+	ret = elf_check_image(elf, buf);
+	if (ret)
+		return ret;
+
+	elf->entry = elf_hdr_e_entry(elf, elf->hdr_buf);
+
+	return 0;
+}
+
 struct elf_image *elf_open_binary(void *buf)
 {
 	int ret;
@@ -325,17 +342,12 @@ struct elf_image *elf_open_binary(void *buf)
 	if (!elf)
 		return ERR_PTR(-ENOMEM);
 
-	elf_init_struct(elf);
-
-	elf->hdr_buf = buf;
-	ret = elf_check_image(elf, buf);
+	ret = elf_open_binary_into(elf, buf);
 	if (ret) {
 		free(elf);
-		return ERR_PTR(-EINVAL);
+		return ERR_PTR(ret);
 	}
 
-	elf->entry = elf_hdr_e_entry(elf, elf->hdr_buf);
-
 	return elf;
 }
 
diff --git a/include/elf.h b/include/elf.h
index 236e38fb29887315e01a165795e1bb861f738054..d0e9ae2afd36c74fa50aa0cb3dab060167ba0326 100644
--- a/include/elf.h
+++ b/include/elf.h
@@ -410,6 +410,7 @@ static inline size_t elf_get_mem_size(struct elf_image *elf)
 	return elf->high_addr - elf->low_addr;
 }
 
+int elf_open_binary_into(struct elf_image *elf, void *buf);
 struct elf_image *elf_open_binary(void *buf);
 struct elf_image *elf_open(const char *filename);
 void elf_close(struct elf_image *elf);

-- 
2.47.3




^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v3 10/23] Makefile: add vmbarebox build target
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
                   ` (8 preceding siblings ...)
  2026-01-08 15:50 ` [PATCH v3 09/23] elf: create elf_open_binary_into() Sascha Hauer
@ 2026-01-08 15:50 ` Sascha Hauer
  2026-01-08 15:50 ` [PATCH v3 11/23] PBL: allow to link ELF image into PBL Sascha Hauer
                   ` (12 subsequent siblings)
  22 siblings, 0 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:50 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5

Add a build target to create vmbarebox, an ELF-format version of barebox
that will later be linked into the PBL.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 Makefile | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/Makefile b/Makefile
index 3b31cecc22c431a063b8d2d3c387da487b698e74..9d8fe242411422163cb800c0b2dd1eb9f0709f47 100644
--- a/Makefile
+++ b/Makefile
@@ -1096,6 +1096,15 @@ barebox.fit: images/barebox-$(CONFIG_ARCH_LINUX_NAME).fit
 barebox.srec: barebox
 	$(OBJCOPY) -O srec $< $@
 
+OBJCOPYFLAGS_vmbarebox = $(call objcopy-option,--strip-section-headers,--strip-all)  \
+			 --remove-section=.comment \
+			 --remove-section=.note* \
+			 --remove-section=.note.gnu.build-id \
+			 --remove-section=.gnu.hash
+
+vmbarebox: barebox FORCE
+	$(call if_changed,objcopy)
+
 quiet_cmd_barebox_proper__ = CC      $@
       cmd_barebox_proper__ = $(CC) -r -o $@ -Wl,--whole-archive $(BAREBOX_OBJS)
 
@@ -1378,7 +1387,7 @@ CLEAN_FILES +=	barebox System.map include/generated/barebox_default_env.h \
                 .tmp_version .tmp_barebox* barebox.bin barebox.map \
 		.tmp_kallsyms* compile_commands.json \
 		.tmp_barebox.o barebox.o barebox-flash-image \
-		barebox.srec barebox.efi
+		barebox.srec barebox.efi vmbarebox
 
 CLEAN_FILES +=	scripts/bareboxenv-target scripts/kernel-install-target \
 		scripts/bareboxcrc32-target scripts/bareboximd-target \

-- 
2.47.3




^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v3 11/23] PBL: allow to link ELF image into PBL
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
                   ` (9 preceding siblings ...)
  2026-01-08 15:50 ` [PATCH v3 10/23] Makefile: add vmbarebox build target Sascha Hauer
@ 2026-01-08 15:50 ` Sascha Hauer
  2026-01-08 15:50 ` [PATCH v3 12/23] mmu: add MAP_CACHED_RO mapping type Sascha Hauer
                   ` (11 subsequent siblings)
  22 siblings, 0 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:50 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5

Some architectures want to link the barebox proper ELF image into the
PBL. Allow that and provide a Kconfig option to select the ELF image.

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 Makefile    | 4 ++++
 pbl/Kconfig | 9 +++++++++
 2 files changed, 13 insertions(+)

diff --git a/Makefile b/Makefile
index 9d8fe242411422163cb800c0b2dd1eb9f0709f47..44011868083135e2a4c53413535bedf70f1704f7 100644
--- a/Makefile
+++ b/Makefile
@@ -831,7 +831,11 @@ export KBUILD_BINARY ?= barebox.bin
 # Also any assignments in arch/$(SRCARCH)/Makefile take precedence over
 # the default value.
 
+ifeq ($(CONFIG_PBL_IMAGE_ELF),y)
+export BAREBOX_PROPER ?= vmbarebox
+else
 export BAREBOX_PROPER ?= barebox.bin
+endif
 
 barebox-flash-images: $(KBUILD_IMAGE)
 	@echo $^ > $@
diff --git a/pbl/Kconfig b/pbl/Kconfig
index cab9325d16e8625bcca10125b3281062abffedbc..63f29cd6135926c48b355b80fc7a123b90098c20 100644
--- a/pbl/Kconfig
+++ b/pbl/Kconfig
@@ -21,6 +21,15 @@ config PBL_IMAGE_NO_PIGGY
 	  want to use the piggy mechanism to load barebox proper.
 	  It's so far only intended for sandbox.
 
+config PBL_IMAGE_ELF
+	bool
+	depends on PBL_IMAGE
+	select ELF
+	help
+	  If yes, link ELF image into the PBL, otherwise a raw binary
+	  is linked into the PBL. This must match the loader code in the
+	  PBL.
+
 config PBL_MULTI_IMAGES
 	bool
 	select PBL_IMAGE

-- 
2.47.3




^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v3 12/23] mmu: add MAP_CACHED_RO mapping type
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
                   ` (10 preceding siblings ...)
  2026-01-08 15:50 ` [PATCH v3 11/23] PBL: allow to link ELF image into PBL Sascha Hauer
@ 2026-01-08 15:50 ` Sascha Hauer
  2026-01-08 15:50 ` [PATCH v3 13/23] mmu: introduce pbl_remap_range() Sascha Hauer
                   ` (10 subsequent siblings)
  22 siblings, 0 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:50 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum

ARM32 and ARM64 have ARCH_MAP_CACHED_RO. We'll move parts of the MMU
initialization to generic code later, so add a new mapping type to
include/mmu.h.

Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 arch/arm/cpu/mmu-common.c | 4 ++--
 arch/arm/cpu/mmu-common.h | 3 +--
 arch/arm/cpu/mmu_32.c     | 4 ++--
 arch/arm/cpu/mmu_64.c     | 2 +-
 include/mmu.h             | 3 ++-
 5 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm/cpu/mmu-common.c b/arch/arm/cpu/mmu-common.c
index a1431c0ff46112552d2919269cc8a7a66d7a20c1..67317f127cadb138cc2e85bb18c92ab47bc1206f 100644
--- a/arch/arm/cpu/mmu-common.c
+++ b/arch/arm/cpu/mmu-common.c
@@ -22,7 +22,7 @@ const char *map_type_tostr(maptype_t map_type)
 
 	switch (map_type) {
 	case ARCH_MAP_CACHED_RWX:	return "RWX";
-	case ARCH_MAP_CACHED_RO:	return "RO";
+	case MAP_CACHED_RO:		return "RO";
 	case MAP_CACHED:		return "CACHED";
 	case MAP_UNCACHED:		return "UNCACHED";
 	case MAP_CODE:			return "CODE";
@@ -158,7 +158,7 @@ static void mmu_remap_memory_banks(void)
 	}
 
 	remap_range((void *)code_start, code_size, MAP_CODE);
-	remap_range((void *)rodata_start, rodata_size, ARCH_MAP_CACHED_RO);
+	remap_range((void *)rodata_start, rodata_size, MAP_CACHED_RO);
 
 	setup_trap_pages();
 }
diff --git a/arch/arm/cpu/mmu-common.h b/arch/arm/cpu/mmu-common.h
index a111e15a21b479b5ffa2ea8973e2ad189e531925..b42c421ffde8ebba84b17c6311b735f7759dc69b 100644
--- a/arch/arm/cpu/mmu-common.h
+++ b/arch/arm/cpu/mmu-common.h
@@ -12,7 +12,6 @@
 #include <linux/bits.h>
 
 #define ARCH_MAP_CACHED_RWX	MAP_ARCH(2)
-#define ARCH_MAP_CACHED_RO	MAP_ARCH(3)
 
 #define ARCH_MAP_FLAG_PAGEWISE	BIT(31)
 
@@ -32,7 +31,7 @@ static inline maptype_t arm_mmu_maybe_skip_permissions(maptype_t map_type)
 	switch (map_type & MAP_TYPE_MASK) {
 	case MAP_CODE:
 	case MAP_CACHED:
-	case ARCH_MAP_CACHED_RO:
+	case MAP_CACHED_RO:
 		return ARCH_MAP_CACHED_RWX;
 	default:
 		return map_type;
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 912d14e8cf82afcfd1800e4e11503899e10ccbbc..71ead41c3d274548c9427c1ce9833de309114c4d 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -304,7 +304,7 @@ static uint32_t get_pte_flags(maptype_t map_type)
 		switch (map_type & MAP_TYPE_MASK) {
 		case ARCH_MAP_CACHED_RWX:
 			return PTE_FLAGS_CACHED_V7_RWX;
-		case ARCH_MAP_CACHED_RO:
+		case MAP_CACHED_RO:
 			return PTE_FLAGS_CACHED_RO_V7;
 		case MAP_CACHED:
 			return PTE_FLAGS_CACHED_V7;
@@ -320,7 +320,7 @@ static uint32_t get_pte_flags(maptype_t map_type)
 		}
 	} else {
 		switch (map_type & MAP_TYPE_MASK) {
-		case ARCH_MAP_CACHED_RO:
+		case MAP_CACHED_RO:
 		case MAP_CODE:
 			return PTE_FLAGS_CACHED_RO_V4;
 		case ARCH_MAP_CACHED_RWX:
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index 56c6a21f2b2a8d8300fd6dbfaaf36a54d264a0f3..ddf1373ec0a801baad043146187d7f4c3eac6a2a 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -159,7 +159,7 @@ static unsigned long get_pte_attrs(maptype_t map_type)
 		return attrs_xn() | MEM_ALLOC_WRITECOMBINE;
 	case MAP_CODE:
 		return CACHED_MEM | PTE_BLOCK_RO;
-	case ARCH_MAP_CACHED_RO:
+	case MAP_CACHED_RO:
 		return attrs_xn() | CACHED_MEM | PTE_BLOCK_RO;
 	case ARCH_MAP_CACHED_RWX:
 		return CACHED_MEM;
diff --git a/include/mmu.h b/include/mmu.h
index f79619808829532ed05f018b982e4bc76bca72a4..9f582f25e1de14d47cfe2eff64f9cce81c4e492d 100644
--- a/include/mmu.h
+++ b/include/mmu.h
@@ -9,9 +9,10 @@
 #define MAP_CACHED		1
 #define MAP_FAULT		2
 #define MAP_CODE		3
+#define MAP_CACHED_RO		4
 
 #ifdef CONFIG_ARCH_HAS_DMA_WRITE_COMBINE
-#define MAP_WRITECOMBINE	4
+#define MAP_WRITECOMBINE	5
 #else
 #define MAP_WRITECOMBINE	MAP_UNCACHED
 #endif

-- 
2.47.3




^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v3 13/23] mmu: introduce pbl_remap_range()
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
                   ` (11 preceding siblings ...)
  2026-01-08 15:50 ` [PATCH v3 12/23] mmu: add MAP_CACHED_RO mapping type Sascha Hauer
@ 2026-01-08 15:50 ` Sascha Hauer
  2026-01-08 15:50 ` [PATCH v3 14/23] ARM: drop arm_fixup_vectors() Sascha Hauer
                   ` (9 subsequent siblings)
  22 siblings, 0 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:50 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum

Add PBL-specific memory remapping function that always uses page-wise
mapping (ARCH_MAP_FLAG_PAGEWISE) for fine-grained permissions on
adjacent ELF segments with different protection requirements.

Wraps arch-specific __arch_remap_range() for ARMv7 (4KB pages) and
ARMv8 (page tables with BBM). Needed for ELF segment permission setup.

Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 arch/arm/cpu/mmu_32.c | 6 ++++++
 arch/arm/cpu/mmu_64.c | 7 +++++++
 include/mmu.h         | 3 +++
 3 files changed, 16 insertions(+)

diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 71ead41c3d274548c9427c1ce9833de309114c4d..981202b789befbd04e212f48490aebb41dfb6b98 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -435,6 +435,12 @@ static void early_remap_range(u32 addr, size_t size, maptype_t map_type)
 	__arch_remap_range((void *)addr, addr, size, map_type);
 }
 
+void pbl_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size,
+		     maptype_t map_type)
+{
+	__arch_remap_range(virt_addr, phys_addr, size, map_type);
+}
+
 static bool pte_is_cacheable(uint32_t pte, int level)
 {
 	return	(level == 2 && (pte & PTE_CACHEABLE)) ||
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index ddf1373ec0a801baad043146187d7f4c3eac6a2a..5ea0d39ad4c7fb219b38cf957335615d0d6e96c5 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -282,6 +282,13 @@ static void early_remap_range(uint64_t addr, size_t size, maptype_t map_type)
 	__arch_remap_range(addr, addr, size, map_type, false);
 }
 
+void pbl_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size,
+		     maptype_t map_type)
+{
+	__arch_remap_range((uint64_t)virt_addr, phys_addr,
+			   (uint64_t)size, map_type, true);
+}
+
 int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size, maptype_t map_type)
 {
 	map_type = arm_mmu_maybe_skip_permissions(map_type);
diff --git a/include/mmu.h b/include/mmu.h
index 9f582f25e1de14d47cfe2eff64f9cce81c4e492d..32d9a7aca3b9a61d542bf3e21e27f1ac51f43ee2 100644
--- a/include/mmu.h
+++ b/include/mmu.h
@@ -65,6 +65,9 @@ static inline bool arch_can_remap(void)
 }
 #endif
 
+void pbl_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size,
+		     maptype_t map_type);
+
 static inline int remap_range(void *start, size_t size, maptype_t map_type)
 {
 	return arch_remap_range(start, virt_to_phys(start), size, map_type);

-- 
2.47.3




^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v3 14/23] ARM: drop arm_fixup_vectors()
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
                   ` (12 preceding siblings ...)
  2026-01-08 15:50 ` [PATCH v3 13/23] mmu: introduce pbl_remap_range() Sascha Hauer
@ 2026-01-08 15:50 ` Sascha Hauer
  2026-01-08 15:50 ` [PATCH v3 15/23] ARM: linker script: create separate PT_LOAD segments for text, rodata, and data Sascha Hauer
                   ` (8 subsequent siblings)
  22 siblings, 0 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:50 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5

Add the missing "ax" flag for the exception table. With this the jumps
in the exception table are correctly relocated and we no longer have to
fix them up during runtime. Remove the now unnecessary
arm_fixup_vectors().

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 arch/arm/cpu/exceptions_32.S       | 54 +++++---------------------------------
 arch/arm/cpu/interrupts_32.c       |  5 +---
 arch/arm/cpu/mmu_32.c              |  2 --
 arch/arm/cpu/no-mmu.c              |  2 --
 arch/arm/include/asm/barebox-arm.h |  4 ---
 5 files changed, 8 insertions(+), 59 deletions(-)

diff --git a/arch/arm/cpu/exceptions_32.S b/arch/arm/cpu/exceptions_32.S
index dc3d42663cbedd37947e27d449eb4ac8c3d8c3f1..68eb696fc7b78924156172a8867d3c9ed1dce588 100644
--- a/arch/arm/cpu/exceptions_32.S
+++ b/arch/arm/cpu/exceptions_32.S
@@ -127,58 +127,18 @@ fiq:
 	bad_save_user_regs
 	bl 	do_fiq
 
-#ifdef CONFIG_ARM_EXCEPTIONS
-/*
- * With relocatable binary support the runtime exception vectors do not match
- * the addresses in the binary. We have to fix them up during runtime
- */
-ENTRY(arm_fixup_vectors)
-	ldr	r0, =undefined_instruction
-	ldr	r1, =_undefined_instruction
-	str	r0, [r1]
-	ldr	r0, =software_interrupt
-	ldr	r1, =_software_interrupt
-	str	r0, [r1]
-	ldr	r0, =prefetch_abort
-	ldr	r1, =_prefetch_abort
-	str	r0, [r1]
-	ldr	r0, =data_abort
-	ldr	r1, =_data_abort
-	str	r0, [r1]
-	ldr	r0, =irq
-	ldr	r1, =_irq
-	str	r0, [r1]
-	ldr	r0, =fiq
-	ldr	r1, =_fiq
-	str	r0, [r1]
-	bx	lr
-ENDPROC(arm_fixup_vectors)
-#endif
-
-.section .text_exceptions
+.section .text_exceptions, "ax"
 .globl extable
 extable:
 1:	b 1b				/* barebox_arm_reset_vector */
 #ifdef CONFIG_ARM_EXCEPTIONS
-	ldr pc, _undefined_instruction	/* undefined instruction */
-	ldr pc, _software_interrupt	/* software interrupt (SWI) */
-	ldr pc, _prefetch_abort		/* prefetch abort */
-	ldr pc, _data_abort		/* data abort */
+	ldr pc, =undefined_instruction	/* undefined instruction */
+	ldr pc, =software_interrupt	/* software interrupt (SWI) */
+	ldr pc, =prefetch_abort		/* prefetch abort */
+	ldr pc, =data_abort		/* data abort */
 1:	b 1b				/* (reserved) */
-	ldr pc, _irq			/* irq (interrupt) */
-	ldr pc, _fiq			/* fiq (fast interrupt) */
-.globl _undefined_instruction
-_undefined_instruction: .word undefined_instruction
-.globl _software_interrupt
-_software_interrupt: .word software_interrupt
-.globl _prefetch_abort
-_prefetch_abort: .word prefetch_abort
-.globl _data_abort
-_data_abort: .word data_abort
-.globl _irq
-_irq: .word irq
-.globl _fiq
-_fiq: .word fiq
+	ldr pc, =irq			/* irq (interrupt) */
+	ldr pc, =fiq			/* fiq (fast interrupt) */
 #else
 1:	b 1b				/* undefined instruction */
 1:	b 1b				/* software interrupt (SWI) */
diff --git a/arch/arm/cpu/interrupts_32.c b/arch/arm/cpu/interrupts_32.c
index 0b88db10fe487378fe08018701bc672f63139fc1..af2231f2a78059f7dc6b4a786a384fb870e9f14c 100644
--- a/arch/arm/cpu/interrupts_32.c
+++ b/arch/arm/cpu/interrupts_32.c
@@ -231,10 +231,8 @@ static __maybe_unused int arm_init_vectors(void)
 	 * First try to use the vectors where they actually are, works
 	 * on ARMv7 and later.
 	 */
-	if (!set_vector_table((unsigned long)__exceptions_start)) {
-		arm_fixup_vectors();
+	if (!set_vector_table((unsigned long)__exceptions_start))
 		return 0;
-	}
 
 	/*
 	 * Next try high vectors at 0xffff0000.
@@ -265,6 +263,5 @@ void arm_pbl_init_exceptions(void)
 		return;
 
 	set_vbar((unsigned long)__exceptions_start);
-	arm_fixup_vectors();
 }
 #endif
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 981202b789befbd04e212f48490aebb41dfb6b98..1d2ead9160cf3b921c2900d4edaad0d465f636d9 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -534,8 +534,6 @@ void create_vector_table(unsigned long adr)
 			      get_pte_flags(MAP_CACHED), true);
 	}
 
-	arm_fixup_vectors();
-
 	memset(vectors, 0, PAGE_SIZE);
 	memcpy(vectors, __exceptions_start, __exceptions_stop - __exceptions_start);
 }
diff --git a/arch/arm/cpu/no-mmu.c b/arch/arm/cpu/no-mmu.c
index c4ef5d1f9d55136d606c244309dbeeb8fd988784..8e00cffebfd9193a45f5b5ab1863e4e6b13464d8 100644
--- a/arch/arm/cpu/no-mmu.c
+++ b/arch/arm/cpu/no-mmu.c
@@ -58,8 +58,6 @@ static int nommu_v7_vectors_init(void)
 	cr &= ~CR_V;
 	set_cr(cr);
 
-	arm_fixup_vectors();
-
 	vectors = xmemalign(PAGE_SIZE, PAGE_SIZE);
 	memset(vectors, 0, PAGE_SIZE);
 	memcpy(vectors, __exceptions_start, __exceptions_size);
diff --git a/arch/arm/include/asm/barebox-arm.h b/arch/arm/include/asm/barebox-arm.h
index e1d89d5684d36f003ba8da3651ae86bda1d9b34c..99f82311945ceb0d06ef9e36ba34a9be54f1ae8e 100644
--- a/arch/arm/include/asm/barebox-arm.h
+++ b/arch/arm/include/asm/barebox-arm.h
@@ -45,12 +45,8 @@ unsigned long arm_mem_membase_get(void);
 unsigned long arm_mem_endmem_get(void);
 
 #ifdef CONFIG_ARM_EXCEPTIONS
-void arm_fixup_vectors(void);
 ulong arm_get_vector_table(void);
 #else
-static inline void arm_fixup_vectors(void)
-{
-}
 static inline ulong arm_get_vector_table(void)
 {
 	return ~0;

-- 
2.47.3




^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v3 15/23] ARM: linker script: create separate PT_LOAD segments for text, rodata, and data
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
                   ` (13 preceding siblings ...)
  2026-01-08 15:50 ` [PATCH v3 14/23] ARM: drop arm_fixup_vectors() Sascha Hauer
@ 2026-01-08 15:50 ` Sascha Hauer
  2026-01-08 15:50 ` [PATCH v3 16/23] ARM: link ELF image into PBL Sascha Hauer
                   ` (7 subsequent siblings)
  22 siblings, 0 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:50 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5

Fix the linker scripts to generate three distinct PT_LOAD segments with
correct permissions instead of combining .rodata with .data.

Before this fix, the linker auto-generated only two PT_LOAD segments:
1. Text segment (PF_R|PF_X)
2. Data segment (PF_R|PF_W) - containing .rodata, .data, .bss, etc.

This caused .rodata to be mapped with write permissions when
pbl_mmu_setup_from_elf() set up MMU permissions based on ELF segments,
defeating the W^X protection that commit d9ccb0cf14 intended to provide.

With explicit PHDRS directives, we now generate three segments:
1. text segment (PF_R|PF_X): .text and related code sections
2. rodata segment (PF_R): .rodata and unwind tables
3. data segment (PF_R|PF_W): .data, .bss, and related sections

This ensures pbl_mmu_setup_from_elf() correctly maps .rodata as
read-only (MAP_CACHED_RO) instead of read-write (MAP_CACHED).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 arch/arm/lib32/barebox.lds.S | 34 ++++++++++++++++++++++------------
 arch/arm/lib64/barebox.lds.S | 29 +++++++++++++++++++----------
 2 files changed, 41 insertions(+), 22 deletions(-)

diff --git a/arch/arm/lib32/barebox.lds.S b/arch/arm/lib32/barebox.lds.S
index c704dd6d70f3ab157ceb67dfb14760e03f2a5d62..2fb43b4619ff29d8d21dd579d3a3002b7134ff71 100644
--- a/arch/arm/lib32/barebox.lds.S
+++ b/arch/arm/lib32/barebox.lds.S
@@ -7,14 +7,23 @@
 OUTPUT_FORMAT(BAREBOX_OUTPUT_FORMAT)
 OUTPUT_ARCH(BAREBOX_OUTPUT_ARCH)
 ENTRY(start)
+
+PHDRS
+{
+	text PT_LOAD FLAGS(5);     /* PF_R | PF_X */
+	rodata PT_LOAD FLAGS(4);   /* PF_R */
+	data PT_LOAD FLAGS(6);     /* PF_R | PF_W */
+	dynamic PT_DYNAMIC FLAGS(4); /* PF_R */
+}
+
 SECTIONS
 {
 	. = 0x0;
-	.image_start : { *(.__image_start) }
+	.image_start : { *(.__image_start) } :text
 
 	. = ALIGN(4);
 
-	._text : { *(._text) }
+	._text : { *(._text) } :text
 	.text      :
 	{
 		_stext = .;
@@ -27,7 +36,7 @@ SECTIONS
 		KEEP(*(.text_exceptions*))
 		__exceptions_stop = .;
 		*(.text*)
-	}
+	} :text
 	BAREBOX_BARE_INIT_SIZE
 
 	. = ALIGN(4096);
@@ -35,7 +44,7 @@ SECTIONS
 	.rodata : {
 		*(.rodata*)
 		RO_DATA_SECTION
-	}
+	} :rodata
 
 #ifdef CONFIG_ARM_UNWIND
 	/*
@@ -46,20 +55,21 @@ SECTIONS
 		__start_unwind_idx = .;
 		*(.ARM.exidx*)
 		__stop_unwind_idx = .;
-	}
+	} :rodata
 	.ARM.unwind_tab : {
 		__start_unwind_tab = .;
 		*(.ARM.extab*)
 		__stop_unwind_tab = .;
-	}
+	} :rodata
 #endif
 	. = ALIGN(4096);
 	__end_rodata = .;
 	_etext = .;
 	_sdata = .;
 
-	. = ALIGN(4);
-	.data : { *(.data*) }
+	.data : { *(.data*) } :data
+
+	.dynamic : { *(.dynamic) } :data :dynamic
 
 	. = .;
 
@@ -69,12 +79,12 @@ SECTIONS
 
 	BAREBOX_EFI_RUNTIME
 
-	.image_end : { *(.__image_end) }
+	.image_end : { *(.__image_end) } :data
 
 	. = ALIGN(4);
-	.__bss_start :  { *(.__bss_start) }
-	.bss : { *(.bss*) }
-	.__bss_stop :  { *(.__bss_stop) }
+	.__bss_start :  { *(.__bss_start) } :data
+	.bss : { *(.bss*) } :data
+	.__bss_stop :  { *(.__bss_stop) } :data
 
 #ifdef CONFIG_ARM_SECURE_MONITOR
 	. = ALIGN(16);
diff --git a/arch/arm/lib64/barebox.lds.S b/arch/arm/lib64/barebox.lds.S
index 5ee5fbc3741e1f7644c00f9b37c0903c27704a3e..71f677a917851270e09c6d439fe5cbe4b6b41034 100644
--- a/arch/arm/lib64/barebox.lds.S
+++ b/arch/arm/lib64/barebox.lds.S
@@ -6,14 +6,23 @@
 OUTPUT_FORMAT(BAREBOX_OUTPUT_FORMAT)
 OUTPUT_ARCH(BAREBOX_OUTPUT_ARCH)
 ENTRY(start)
+
+PHDRS
+{
+	text PT_LOAD FLAGS(5);     /* PF_R | PF_X */
+	rodata PT_LOAD FLAGS(4);   /* PF_R */
+	data PT_LOAD FLAGS(6);     /* PF_R | PF_W */
+	dynamic PT_DYNAMIC FLAGS(4); /* PF_R */
+}
+
 SECTIONS
 {
 	. = 0x0;
 
-	.image_start : { *(.__image_start) }
+	.image_start : { *(.__image_start) } :text
 
 	. = ALIGN(4);
-	._text : { *(._text) }
+	._text : { *(._text) } :text
 	.text      :
 	{
 		_stext = .;
@@ -22,7 +31,7 @@ SECTIONS
 		*(.text_bare_init*)
 		__bare_init_end = .;
 		*(.text*)
-	}
+	} :text
 	BAREBOX_BARE_INIT_SIZE
 
 	. = ALIGN(4096);
@@ -30,7 +39,7 @@ SECTIONS
 	.rodata : {
 		*(.rodata*)
 		RO_DATA_SECTION
-	}
+	} :rodata
 
 	. = ALIGN(4096);
 
@@ -38,20 +47,20 @@ SECTIONS
 	_etext = .;
 	_sdata = .;
 
-	.data : { *(.data*) }
+	.data : { *(.data*) } :data
 
-	BAREBOX_RELOCATION_TABLE
+	.dynamic : { *(.dynamic) } :data :dynamic
 
 	_edata = .;
 
 	BAREBOX_EFI_RUNTIME
 
-	.image_end : { *(.__image_end) }
+	.image_end : { *(.__image_end) } :data
 
 	. = ALIGN(4);
-	.__bss_start :  { *(.__bss_start) }
-	.bss : { *(.bss*) }
-	.__bss_stop :  { *(.__bss_stop) }
+	.__bss_start :  { *(.__bss_start) } :data
+	.bss : { *(.bss*) } :data
+	.__bss_stop :  { *(.__bss_stop) } :data
 	_end = .;
 	_barebox_image_size = __bss_start;
 }

-- 
2.47.3




^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v3 16/23] ARM: link ELF image into PBL
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
                   ` (14 preceding siblings ...)
  2026-01-08 15:50 ` [PATCH v3 15/23] ARM: linker script: create separate PT_LOAD segments for text, rodata, and data Sascha Hauer
@ 2026-01-08 15:50 ` Sascha Hauer
  2026-01-08 15:50 ` [PATCH v3 17/23] ARM: PBL: setup MMU with proper permissions from ELF segments Sascha Hauer
                   ` (6 subsequent siblings)
  22 siblings, 0 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:50 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum

Instead of linking the raw barebox proper binary into the PBL, link the
ELF image into the PBL. With this, barebox proper starts with a properly
linked and fully initialized C environment, so the calls to
relocate_to_adr() and setup_c() can be removed from barebox proper.

Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 arch/arm/Kconfig          |  1 +
 arch/arm/cpu/start.c      | 13 ++++---------
 arch/arm/cpu/uncompress.c | 26 +++++++++++++++++++-------
 3 files changed, 24 insertions(+), 16 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 5123e9b1402c56db94df6a7a33ae993c61d51fbc..65856977ab41298be36b78c64de857570d575744 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -18,6 +18,7 @@ config ARM
 	select HW_HAS_PCI
 	select ARCH_HAS_DMA_WRITE_COMBINE
 	select HAVE_EFI_LOADER if MMU # for payload unaligned accesses
+	select PBL_IMAGE_ELF
 	default y
 
 config ARCH_LINUX_NAME
diff --git a/arch/arm/cpu/start.c b/arch/arm/cpu/start.c
index f7d4507e71588ba5e241b24b952d55e2a4b0f794..2498bdb894c261f587af6542ea4574c497a6edc0 100644
--- a/arch/arm/cpu/start.c
+++ b/arch/arm/cpu/start.c
@@ -127,8 +127,9 @@ static int barebox_memory_areas_init(void)
 }
 device_initcall(barebox_memory_areas_init);
 
-__noreturn __prereloc void barebox_non_pbl_start(unsigned long membase,
-		unsigned long memsize, struct handoff_data *hd)
+__noreturn void barebox_non_pbl_start(unsigned long membase,
+				      unsigned long memsize,
+				      struct handoff_data *hd)
 {
 	unsigned long endmem = membase + memsize;
 	unsigned long malloc_start, malloc_end;
@@ -139,12 +140,6 @@ __noreturn __prereloc void barebox_non_pbl_start(unsigned long membase,
 	if (IS_ENABLED(CONFIG_CPU_V7))
 		armv7_hyp_install();
 
-	relocate_to_adr(barebox_base);
-
-	setup_c();
-
-	barrier();
-
 	pbl_barebox_break();
 
 	pr_debug("memory at 0x%08lx, size 0x%08lx\n", membase, memsize);
@@ -200,7 +195,7 @@ void start(unsigned long membase, unsigned long memsize, struct handoff_data *hd
  * First function in the uncompressed image. We get here from
  * the pbl. The stack already has been set up by the pbl.
  */
-void NAKED __prereloc __section(.text_entry) start(unsigned long membase,
+void __section(.text_entry) start(unsigned long membase,
 		unsigned long memsize, struct handoff_data *hd)
 {
 	barebox_non_pbl_start(membase, memsize, hd);
diff --git a/arch/arm/cpu/uncompress.c b/arch/arm/cpu/uncompress.c
index b9fc1d04db96e77c8fcd7fd1930798ea1d9294d7..8cc7102290986e71d2f3a2f34df1a9f946c56ced 100644
--- a/arch/arm/cpu/uncompress.c
+++ b/arch/arm/cpu/uncompress.c
@@ -20,6 +20,7 @@
 #include <asm/mmu.h>
 #include <asm/unaligned.h>
 #include <compressed-dtb.h>
+#include <elf.h>
 
 #include <debug_ll.h>
 
@@ -41,6 +42,8 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
 	void *pg_start, *pg_end;
 	unsigned long pc = get_pc();
 	void *handoff_data;
+	struct elf_image elf;
+	int ret;
 
 	/* piggy data is not relocated, so determine the bounds now */
 	pg_start = runtime_address(input_data);
@@ -85,21 +88,30 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
 	else if (IS_ENABLED(CONFIG_ARMV7R_MPU))
 		set_cr(get_cr() | CR_C);
 
-	pr_debug("uncompressing barebox binary at 0x%p (size 0x%08x) to 0x%08lx (uncompressed size: 0x%08x)\n",
+	pr_debug("uncompressing barebox ELF at 0x%p (size 0x%08x) to 0x%08lx (uncompressed size: 0x%08x)\n",
 			pg_start, pg_len, barebox_base, uncompressed_len);
 
 	pbl_barebox_uncompress((void*)barebox_base, pg_start, pg_len);
 
+	pr_debug("relocating ELF in place\n");
+
+	ret = elf_open_binary_into(&elf, (void *)barebox_base);
+	if (ret)
+		panic("Failed to open ELF binary: %d\n", ret);
+
+	ret = elf_load_inplace(&elf);
+	if (ret)
+		panic("Failed to relocate ELF: %d\n", ret);
+
+	pr_debug("ELF entry point: 0x%llx\n", elf.entry);
+
+	barebox = (void *)(unsigned long)elf.entry;
+
 	handoff_data_move(handoff_data);
 
 	sync_caches_for_execution();
 
-	if (IS_ENABLED(CONFIG_THUMB2_BAREBOX))
-		barebox = (void *)(barebox_base + 1);
-	else
-		barebox = (void *)barebox_base;
-
-	pr_debug("jumping to uncompressed image at 0x%p\n", barebox);
+	pr_debug("jumping to ELF entry point at 0x%p\n", barebox);
 
 	if (IS_ENABLED(CONFIG_CPU_V7) && boot_cpu_mode() == HYP_MODE)
 		armv7_switch_to_hyp();

-- 
2.47.3




^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v3 17/23] ARM: PBL: setup MMU with proper permissions from ELF segments
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
                   ` (15 preceding siblings ...)
  2026-01-08 15:50 ` [PATCH v3 16/23] ARM: link ELF image into PBL Sascha Hauer
@ 2026-01-08 15:50 ` Sascha Hauer
  2026-01-08 15:50 ` [PATCH v3 18/23] riscv: linker script: create separate PT_LOAD segments for text, rodata, and data Sascha Hauer
                   ` (5 subsequent siblings)
  22 siblings, 0 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:50 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5

Move complete MMU setup into PBL by leveraging ELF segment information
to apply correct memory permissions before jumping to barebox proper.

After ELF relocation, parse PT_LOAD segments and map each with
permissions derived from p_flags:
- Text segments (PF_R|PF_X): Read-only + executable (MAP_CODE)
- Data segments (PF_R|PF_W): Read-write (MAP_CACHED)
- RO data segments (PF_R): Read-only (MAP_CACHED_RO)

This ensures barebox proper starts with full W^X protection already
in place, eliminating the need for complex remapping in barebox proper.

The framework is portable: the common ELF parsing in pbl/mmu.c calls the
architecture-specific pbl_remap_range() exported from mmu_*.c.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 arch/arm/cpu/mmu-common.c |  27 +++++------
 arch/arm/cpu/uncompress.c |  14 ++++++
 include/pbl/mmu.h         |  29 ++++++++++++
 pbl/Makefile              |   1 +
 pbl/mmu.c                 | 111 ++++++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 167 insertions(+), 15 deletions(-)

diff --git a/arch/arm/cpu/mmu-common.c b/arch/arm/cpu/mmu-common.c
index 67317f127cadb138cc2e85bb18c92ab47bc1206f..2c80b2a426cb8b487db5c407e950c2e10c23a096 100644
--- a/arch/arm/cpu/mmu-common.c
+++ b/arch/arm/cpu/mmu-common.c
@@ -109,16 +109,16 @@ static inline void remap_range_end(unsigned long start, unsigned long end,
 	remap_range((void *)start, end - start, map_type);
 }
 
-static inline void remap_range_end_sans_text(unsigned long start, unsigned long end,
+static inline void remap_range_end_sans_image(unsigned long start, unsigned long end,
 					     unsigned map_type)
 {
-	unsigned long text_start = (unsigned long)&_stext;
-	unsigned long text_end = (unsigned long)&_etext;
+	unsigned long image_start = (unsigned long)&_stext;
+	unsigned long image_end = (unsigned long)&_etext;
 
-	if (region_overlap_end_exclusive(start, end, text_start, text_end)) {
-		remap_range_end(start, text_start, MAP_CACHED);
+	if (region_overlap_end_exclusive(start, end, image_start, image_end)) {
+		remap_range_end(start, image_start, MAP_CACHED);
 		/* skip barebox segments here, will be mapped later */
-		start = text_end;
+		start = image_end;
 	}
 
 	remap_range_end(start, end, MAP_CACHED);
@@ -127,10 +127,6 @@ static inline void remap_range_end_sans_text(unsigned long start, unsigned long
 static void mmu_remap_memory_banks(void)
 {
 	struct memory_bank *bank;
-	unsigned long code_start = (unsigned long)&_stext;
-	unsigned long code_size = (unsigned long)&__start_rodata - (unsigned long)&_stext;
-	unsigned long rodata_start = (unsigned long)&__start_rodata;
-	unsigned long rodata_size = (unsigned long)&__end_rodata - rodata_start;
 
 	/*
 	 * Early mmu init will have mapped everything but the initial memory area
@@ -138,6 +134,10 @@ static void mmu_remap_memory_banks(void)
 	 * all memory banks, so let's map all pages, excluding reserved memory areas
 	 * and barebox text area cacheable.
 	 *
+	 * PBL has already set up the MMU with proper permissions for text, data
+	 * and rodata based on ELF segment information, so we don't need to remap
+	 * those here.
+	 *
 	 * This code will become much less complex once we switch over to using
 	 * CONFIG_MEMORY_ATTRIBUTES for MMU as well.
 	 */
@@ -150,16 +150,13 @@ static void mmu_remap_memory_banks(void)
 		/* Skip reserved regions */
 		for_each_reserved_region(bank, rsv) {
 			if (pos != rsv->start)
-				remap_range_end_sans_text(pos, rsv->start, MAP_CACHED);
+				remap_range_end_sans_image(pos, rsv->start, MAP_CACHED);
 			pos = rsv->end + 1;
 		}
 
-		remap_range_end_sans_text(pos, bank->res->end + 1, MAP_CACHED);
+		remap_range_end_sans_image(pos, bank->res->end + 1, MAP_CACHED);
 	}
 
-	remap_range((void *)code_start, code_size, MAP_CODE);
-	remap_range((void *)rodata_start, rodata_size, MAP_CACHED_RO);
-
 	setup_trap_pages();
 }
 
diff --git a/arch/arm/cpu/uncompress.c b/arch/arm/cpu/uncompress.c
index 8cc7102290986e71d2f3a2f34df1a9f946c56ced..619bd8d5b0b56ab2704a0fa1e4964bb603b761d9 100644
--- a/arch/arm/cpu/uncompress.c
+++ b/arch/arm/cpu/uncompress.c
@@ -21,6 +21,7 @@
 #include <asm/unaligned.h>
 #include <compressed-dtb.h>
 #include <elf.h>
+#include <pbl/mmu.h>
 
 #include <debug_ll.h>
 
@@ -105,6 +106,19 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
 
 	pr_debug("ELF entry point: 0x%llx\n", elf.entry);
 
+	/*
+	 * Now that the ELF image is relocated, we know the exact addresses
+	 * of all segments. Set up MMU with proper permissions based on
+	 * ELF segment flags (PF_R/W/X).
+	 */
+	if (IS_ENABLED(CONFIG_MMU)) {
+		ret = pbl_mmu_setup_from_elf(&elf, membase, memsize);
+		if (ret) {
+			pr_err("Failed to setup MMU from ELF: %d\n", ret);
+			hang();
+		}
+	}
+
 	barebox = (void *)(unsigned long)elf.entry;
 
 	handoff_data_move(handoff_data);
diff --git a/include/pbl/mmu.h b/include/pbl/mmu.h
new file mode 100644
index 0000000000000000000000000000000000000000..4a00d8e528ab5452981347185c9114235f213e2b
--- /dev/null
+++ b/include/pbl/mmu.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __PBL_MMU_H
+#define __PBL_MMU_H
+
+#include <linux/types.h>
+
+struct elf_image;
+
+/**
+ * pbl_mmu_setup_from_elf() - Configure MMU using ELF segment information
+ * @elf: ELF image structure from elf_open_binary_into()
+ * @membase: Base address of RAM
+ * @memsize: Size of RAM
+ *
+ * This function sets up the MMU with proper permissions based on ELF
+ * segment flags. It should be called after elf_load_inplace() has
+ * relocated the barebox image.
+ *
+ * Segment permissions are mapped as follows:
+ *   PF_R | PF_X  -> Read-only + executable (text)
+ *   PF_R | PF_W  -> Read-write (data, bss)
+ *   PF_R         -> Read-only (rodata)
+ *
+ * Return: 0 on success, negative error code on failure
+ */
+int pbl_mmu_setup_from_elf(struct elf_image *elf, unsigned long membase,
+			    unsigned long memsize);
+
+#endif /* __PBL_MMU_H */
diff --git a/pbl/Makefile b/pbl/Makefile
index f66391be7b2898388425657f54afcd6e4c72e3db..b78124cdcd2a4690be11d5503006723252b4904f 100644
--- a/pbl/Makefile
+++ b/pbl/Makefile
@@ -9,3 +9,4 @@ pbl-$(CONFIG_HAVE_IMAGE_COMPRESSION) += decomp.o
 pbl-$(CONFIG_LIBFDT) += fdt.o
 pbl-$(CONFIG_PBL_CONSOLE) += console.o
 obj-pbl-y += handoff-data.o
+obj-pbl-$(CONFIG_MMU) += mmu.o
diff --git a/pbl/mmu.c b/pbl/mmu.c
new file mode 100644
index 0000000000000000000000000000000000000000..853fdcba55699025ea1d2a49385747e29cb2debc
--- /dev/null
+++ b/pbl/mmu.c
@@ -0,0 +1,111 @@
+// SPDX-License-Identifier: GPL-2.0-only
+// SPDX-FileCopyrightText: 2025 Sascha Hauer <s.hauer@pengutronix.de>, Pengutronix
+
+#define pr_fmt(fmt) "pbl-mmu: " fmt
+
+#include <common.h>
+#include <elf.h>
+#include <mmu.h>
+#include <pbl/mmu.h>
+#include <asm/mmu.h>
+#include <linux/bits.h>
+#include <linux/sizes.h>
+
+/*
+ * Map ELF segment permissions (p_flags) to architecture MMU flags
+ */
+static unsigned int elf_flags_to_mmu_flags(u32 p_flags)
+{
+	bool readable = p_flags & PF_R;
+	bool writable = p_flags & PF_W;
+	bool executable = p_flags & PF_X;
+
+	if (readable && writable) {
+		/* Data, BSS: Read-write, cached, non-executable */
+		return MAP_CACHED;
+	} else if (readable && executable) {
+		/* Text: Read-only, cached, executable */
+		return MAP_CODE;
+	} else if (readable) {
+		/* Read-only data: Read-only, cached, non-executable */
+		return MAP_CACHED_RO;
+	} else {
+		/*
+		 * Unusual: segment with no read permission.
+		 * Map as uncached, non-executable for safety.
+		 */
+		pr_warn("Segment with unusual permissions: flags=0x%x\n", p_flags);
+		return MAP_UNCACHED;
+	}
+}
+
+int pbl_mmu_setup_from_elf(struct elf_image *elf, unsigned long membase,
+			    unsigned long memsize)
+{
+	void *phdr;
+	int i;
+	int phnum = elf_hdr_e_phnum(elf, elf->hdr_buf);
+	size_t phoff = elf_hdr_e_phoff(elf, elf->hdr_buf);
+	size_t phentsize = elf_size_of_phdr(elf);
+
+	pr_debug("Setting up MMU from ELF segments\n");
+	pr_debug("ELF entry point: 0x%llx\n", elf->entry);
+	pr_debug("ELF loaded at: 0x%p - 0x%p\n", elf->low_addr, elf->high_addr);
+
+	/*
+	 * Iterate through all PT_LOAD segments and set up MMU permissions
+	 * based on the segment's p_flags
+	 */
+	for (i = 0; i < phnum; i++) {
+		phdr = elf->hdr_buf + phoff + i * phentsize;
+
+		if (elf_phdr_p_type(elf, phdr) != PT_LOAD)
+			continue;
+
+		u64 p_vaddr = elf_phdr_p_vaddr(elf, phdr);
+		u64 p_memsz = elf_phdr_p_memsz(elf, phdr);
+		u32 p_flags = elf_phdr_p_flags(elf, phdr);
+
+		/*
+		 * Calculate actual address after relocation.
+		 * For ET_EXEC: reloc_offset is 0, use p_vaddr directly
+		 * For ET_DYN: reloc_offset adjusts virtual to actual address
+		 */
+		unsigned long addr = p_vaddr + elf->reloc_offset;
+		unsigned long size = p_memsz;
+		unsigned long segment_end = addr + size;
+
+		/* Validate segment is within available memory */
+		if (segment_end < addr || /* overflow check */
+		    addr < membase ||
+		    segment_end > membase + memsize) {
+			pr_err("Segment %d outside memory bounds\n", i);
+			return -EINVAL;
+		}
+
+		/* Validate alignment - warn and round if needed */
+		if (!IS_ALIGNED(addr, PAGE_SIZE) || !IS_ALIGNED(size, PAGE_SIZE)) {
+			pr_debug("Segment %d not page-aligned, rounding\n", i);
+			size = ALIGN(size, PAGE_SIZE);
+		}
+
+		unsigned int mmu_flags = elf_flags_to_mmu_flags(p_flags);
+
+		pr_debug("Segment %d: addr=0x%08lx size=0x%08lx flags=0x%x [%c%c%c] -> mmu_flags=0x%x\n",
+			 i, addr, size, p_flags,
+			 (p_flags & PF_R) ? 'R' : '-',
+			 (p_flags & PF_W) ? 'W' : '-',
+			 (p_flags & PF_X) ? 'X' : '-',
+			 mmu_flags);
+
+		/*
+		 * Remap this segment with proper permissions.
+		 * Use page-wise mapping to allow different permissions for
+		 * different segments even if they're nearby.
+		 */
+		pbl_remap_range((void *)addr, addr, size, mmu_flags);
+	}
+
+	pr_debug("MMU setup from ELF complete\n");
+	return 0;
+}

-- 
2.47.3




^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v3 18/23] riscv: linker script: create separate PT_LOAD segments for text, rodata, and data
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
                   ` (16 preceding siblings ...)
  2026-01-08 15:50 ` [PATCH v3 17/23] ARM: PBL: setup MMU with proper permissions from ELF segments Sascha Hauer
@ 2026-01-08 15:50 ` Sascha Hauer
  2026-01-08 15:50 ` [PATCH v3 19/23] riscv: link ELF image into PBL Sascha Hauer
                   ` (4 subsequent siblings)
  22 siblings, 0 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:50 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5

Fix the linker script to generate three distinct PT_LOAD segments with
correct permissions instead of combining .rodata with .data.

Before this fix, the linker auto-generated only two PT_LOAD segments:
1. Text segment (PF_R|PF_X)
2. Data segment (PF_R|PF_W) - containing .rodata, .data, .bss, etc.

This caused .rodata to be mapped with write permissions when
riscv_mmu_setup_from_elf() or riscv_pmp_setup_from_elf() set up memory
permissions based on ELF segments, defeating the W^X protection.

With explicit PHDRS directives, we now generate three segments:
1. text segment (PF_R|PF_X): .text and related code sections
2. rodata segment (PF_R): .rodata and related read-only sections
3. data segment (PF_R|PF_W): .data, .bss, and related sections

This ensures riscv_mmu_setup_from_elf() and riscv_pmp_setup_from_elf()
correctly map .rodata as read-only instead of read-write.

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 arch/riscv/lib/barebox.lds.S | 38 +++++++++++++++++++++++++-------------
 1 file changed, 25 insertions(+), 13 deletions(-)

diff --git a/arch/riscv/lib/barebox.lds.S b/arch/riscv/lib/barebox.lds.S
index 03b3a967193cfee1c67b96632cf972a553e8bec4..77f854e73e2013ca332a0a94fd1deaa2b9978a1d 100644
--- a/arch/riscv/lib/barebox.lds.S
+++ b/arch/riscv/lib/barebox.lds.S
@@ -16,14 +16,23 @@
 OUTPUT_ARCH(BAREBOX_OUTPUT_ARCH)
 ENTRY(start)
 OUTPUT_FORMAT(BAREBOX_OUTPUT_FORMAT)
+
+PHDRS
+{
+	text PT_LOAD FLAGS(5);     /* PF_R | PF_X */
+	rodata PT_LOAD FLAGS(4);   /* PF_R */
+	data PT_LOAD FLAGS(6);     /* PF_R | PF_W */
+	dynamic PT_DYNAMIC FLAGS(4); /* PF_R */
+}
+
 SECTIONS
 {
 	. = 0x0;
 
-	.image_start : { *(.__image_start) }
+	.image_start : { *(.__image_start) } :text
 
 	. = ALIGN(4);
-	._text : { *(._text) }
+	._text : { *(._text) } :text
 	.text      :
 	{
 		_stext = .;
@@ -35,44 +44,47 @@ SECTIONS
 		KEEP(*(.text_exceptions*))
 		__exceptions_stop = .;
 		*(.text*)
-	}
+	} :text
 	BAREBOX_BARE_INIT_SIZE
 
-	. = ALIGN(4);
+	. = ALIGN(4096);
 	__start_rodata = .;
 	.rodata : {
 		*(.rodata*)
 		RO_DATA_SECTION
-	}
+	} :rodata
 
 	__end_rodata = .;
 	_etext = .;
 	_sdata = .;
 
-	. = ALIGN(4);
-	.data : { *(.data*) }
+	. = ALIGN(4096);
+
+	.data : { *(.data*) } :data
 
 	/DISCARD/ : { *(.rela.plt*) }
 	.rela.dyn : {
 		__rel_dyn_start = .;
 		*(.rel*)
 		__rel_dyn_end = .;
-	}
+	} :data
 
 	.dynsym : {
 		__dynsym_start = .;
 		*(.dynsym)
 		__dynsym_end = .;
-	}
+	} :data
+
+	.dynamic : { *(.dynamic) } :data :dynamic
 
 	_edata = .;
 
-	.image_end : { *(.__image_end) }
+	.image_end : { *(.__image_end) } :data
 
 	. = ALIGN(4);
-	.__bss_start :  { *(.__bss_start) }
-	.bss : { *(.bss*) }
-	.__bss_stop :  { *(.__bss_stop) }
+	.__bss_start :  { *(.__bss_start) } :data
+	.bss : { *(.bss*) } :data
+	.__bss_stop :  { *(.__bss_stop) } :data
 	_end = .;
 	_barebox_image_size = __bss_start;
 }

-- 
2.47.3




^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v3 19/23] riscv: link ELF image into PBL
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
                   ` (17 preceding siblings ...)
  2026-01-08 15:50 ` [PATCH v3 18/23] riscv: linker script: create separate PT_LOAD segments for text, rodata, and data Sascha Hauer
@ 2026-01-08 15:50 ` Sascha Hauer
  2026-01-08 15:50 ` [PATCH v3 20/23] riscv: Allwinner D1: Drop M-Mode Sascha Hauer
                   ` (3 subsequent siblings)
  22 siblings, 0 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:50 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum

Instead of linking the raw barebox proper binary into the PBL, link the
ELF image into the PBL. With this, barebox proper starts with a properly
linked and fully initialized C environment, so the calls to
relocate_to_current_adr() and setup_c() can be removed from barebox proper.

Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 arch/riscv/Kconfig           |  1 +
 arch/riscv/boot/start.c      |  6 ------
 arch/riscv/boot/uncompress.c | 21 ++++++++++++++++++++-
 3 files changed, 21 insertions(+), 7 deletions(-)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 96d013d8514ac12de5c34a426262d85f8cf021b9..d9794354f4ed2e8bf7276e03b968c566002c2ec6 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -17,6 +17,7 @@ config RISCV
 	select HAS_KALLSYMS
 	select RISCV_TIMER if RISCV_SBI
 	select HW_HAS_PCI
+	select PBL_IMAGE_ELF
 	select HAVE_ARCH_BOARD_GENERIC_DT
 	select HAVE_ARCH_BOOTM_OFTREE
 
diff --git a/arch/riscv/boot/start.c b/arch/riscv/boot/start.c
index 5091340c8a374fc360ab732ba01ec8516e82a83d..002ab3eccb4292d8d88a95f8b93163c970cb1d64 100644
--- a/arch/riscv/boot/start.c
+++ b/arch/riscv/boot/start.c
@@ -123,12 +123,6 @@ void barebox_non_pbl_start(unsigned long membase, unsigned long memsize,
 	unsigned long barebox_size = barebox_image_size + MAX_BSS_SIZE;
 	unsigned long barebox_base = riscv_mem_barebox_image(membase, endmem, barebox_size);
 
-	relocate_to_current_adr();
-
-	setup_c();
-
-	barrier();
-
 	irq_init_vector(riscv_mode());
 
 	pr_debug("memory at 0x%08lx, size 0x%08lx\n", membase, memsize);
diff --git a/arch/riscv/boot/uncompress.c b/arch/riscv/boot/uncompress.c
index 84142acf9c66fe1fcceb6ae63d15ac078ccddee7..fba04cf4fb6ed450d80b83a8a595346a3186f1e7 100644
--- a/arch/riscv/boot/uncompress.c
+++ b/arch/riscv/boot/uncompress.c
@@ -32,6 +32,8 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
 	unsigned long barebox_base;
 	void *pg_start, *pg_end;
 	unsigned long pc = get_pc();
+	struct elf_image elf;
+	int ret;
 
 	irq_init_vector(riscv_mode());
 
@@ -68,7 +70,24 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
 
 	sync_caches_for_execution();
 
-	barebox = (void *)barebox_base;
+	ret = elf_open_binary_into(&elf, (void *)barebox_base);
+	if (ret) {
+		pr_err("Failed to open ELF binary: %d\n", ret);
+		hang();
+	}
+
+	ret = elf_load_inplace(&elf);
+	if (ret) {
+		pr_err("Failed to relocate ELF: %d\n", ret);
+		hang();
+	}
+
+	/*
+	 * TODO: Add pbl_mmu_setup_from_elf() call when RISC-V PBL
+	 * MMU support is implemented, similar to ARM
+	 */
+
+	barebox = (void *)elf.entry;
 
 	pr_debug("jumping to uncompressed image at 0x%p. dtb=0x%p\n", barebox, fdt);
 

-- 
2.47.3




^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v3 20/23] riscv: Allwinner D1: Drop M-Mode
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
                   ` (18 preceding siblings ...)
  2026-01-08 15:50 ` [PATCH v3 19/23] riscv: link ELF image into PBL Sascha Hauer
@ 2026-01-08 15:50 ` Sascha Hauer
  2026-01-08 15:50 ` [PATCH v3 21/23] riscv: add ELF segment-based memory protection with MMU Sascha Hauer
                   ` (2 subsequent siblings)
  22 siblings, 0 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:50 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum

The Allwinner D1 selects both RISCV_M_MODE and RISCV_S_MODE. The board
code uses barebox_riscv_supervisor_entry() and not barebox_riscv_machine_entry(),
which indicates RISCV_M_MODE was only selected by accident. Remove it.

Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 arch/riscv/Kconfig.socs | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/riscv/Kconfig.socs b/arch/riscv/Kconfig.socs
index 4a3b56b5fff48c86901ed0346be490a6847ac14e..0d9984dd2888e6cab81939e3ee97ef83851362a0 100644
--- a/arch/riscv/Kconfig.socs
+++ b/arch/riscv/Kconfig.socs
@@ -123,7 +123,6 @@ if SOC_ALLWINNER_SUN20I
 config BOARD_ALLWINNER_D1
 	bool "Allwinner D1 Nezha"
 	select RISCV_S_MODE
-	select RISCV_M_MODE
 	def_bool y
 
 endif

-- 
2.47.3




^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v3 21/23] riscv: add ELF segment-based memory protection with MMU
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
                   ` (19 preceding siblings ...)
  2026-01-08 15:50 ` [PATCH v3 20/23] riscv: Allwinner D1: Drop M-Mode Sascha Hauer
@ 2026-01-08 15:50 ` Sascha Hauer
  2026-01-08 15:50 ` [PATCH v3 22/23] ARM: cleanup barebox proper entry Sascha Hauer
  2026-01-08 15:50 ` [PATCH v3 23/23] riscv: " Sascha Hauer
  22 siblings, 0 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:50 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5

Enable hardware-enforced W^X (Write XOR Execute) memory protection through
ELF segment-based permissions using the RISC-V MMU.

This implementation provides memory protection for RISC-V S-mode using
Sv39 (RV64) or Sv32 (RV32) page tables.

S-mode MMU Implementation (mmu.c):
- Implement page table walking for Sv39/Sv32
- pbl_remap_range(): remap segments with ELF-derived permissions
- mmu_early_enable(): create an identity mapping and program the SATP CSR
- Map ELF flags to PTE bits:
  * MAP_CODE → PTE_R | PTE_X (read + execute)
  * MAP_CACHED_RO → PTE_R (read only)
  * MAP_CACHED → PTE_R | PTE_W (read + write)

Integration:
- Update uncompress.c to call mmu_early_enable() before decompression
  (enables caching for faster decompression)
- Call pbl_mmu_setup_from_elf() after ELF relocation to apply final
  segment-based permissions
- Uses portable pbl/mmu.c infrastructure to parse PT_LOAD segments

Configuration:
- Add CONFIG_MMU option (default y for RISCV_S_MODE)
- Update asm/mmu.h with ARCH_HAS_REMAP and function declarations

Security Benefits:
- Text sections are read-only and executable (cannot be modified)
- Read-only data sections are read-only and non-executable
- Data sections are read-write and non-executable (cannot be executed)
- Hardware-enforced W^X prevents code injection attacks

This is based on the current ARM implementation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 arch/riscv/Kconfig           |  17 ++
 arch/riscv/boot/uncompress.c |  35 ++--
 arch/riscv/cpu/Makefile      |   1 +
 arch/riscv/cpu/mmu.c         | 387 +++++++++++++++++++++++++++++++++++++++++++
 arch/riscv/cpu/mmu.h         | 120 ++++++++++++++
 arch/riscv/include/asm/asm.h |   3 +-
 arch/riscv/include/asm/mmu.h |  44 +++++
 7 files changed, 596 insertions(+), 11 deletions(-)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index d9794354f4ed2e8bf7276e03b968c566002c2ec6..99562c7df8927e11d4de448ad486e49dd5a0d0fd 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -129,4 +129,21 @@ config RISCV_MULTI_MODE
 config RISCV_SBI
 	def_bool RISCV_S_MODE
 
+config MMU
+	bool "MMU-based memory protection"
+	default y if RISCV_S_MODE
+	help
+	  Enable MMU (Memory Management Unit) support for RISC-V S-mode.
+	  This provides hardware-enforced W^X (Write XOR Execute) memory
+	  protection using page tables (Sv39 for RV64, Sv32 for RV32).
+
+	  The PBL sets up page table entries based on ELF segment permissions,
+	  ensuring that:
+	  - Text sections are read-only and executable
+	  - Read-only data sections are read-only and non-executable
+	  - Data sections are read-write and non-executable
+
+	  Say Y if running in S-mode (supervisor mode) with virtual memory.
+	  Say N if running in M-mode or if you don't need memory protection.
+
 endmenu
diff --git a/arch/riscv/boot/uncompress.c b/arch/riscv/boot/uncompress.c
index fba04cf4fb6ed450d80b83a8a595346a3186f1e7..d34926c7eba9439aa8794cd0cd6e7d80a774e4e6 100644
--- a/arch/riscv/boot/uncompress.c
+++ b/arch/riscv/boot/uncompress.c
@@ -10,11 +10,14 @@
 #include <init.h>
 #include <linux/sizes.h>
 #include <pbl.h>
+#include <pbl/mmu.h>
 #include <asm/barebox-riscv.h>
 #include <asm-generic/memory_layout.h>
 #include <asm/sections.h>
 #include <asm/unaligned.h>
+#include <asm/mmu.h>
 #include <asm/irq.h>
+#include <elf.h>
 
 #include <debug_ll.h>
 
@@ -63,6 +66,14 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
 	free_mem_ptr = riscv_mem_early_malloc(membase, endmem);
 	free_mem_end_ptr = riscv_mem_early_malloc_end(membase, endmem);
 
+	/*
+	 * Enable MMU early to enable caching for faster decompression.
+	 * This creates an initial identity mapping that will be refined
+	 * later based on ELF segments.
+	 */
+	if (IS_ENABLED(CONFIG_MMU))
+		mmu_early_enable(membase, memsize, barebox_base);
+
 	pr_debug("uncompressing barebox binary at 0x%p (size 0x%08x) to 0x%08lx (uncompressed size: 0x%08x)\n",
 			pg_start, pg_len, barebox_base, uncompressed_len);
 
@@ -71,21 +82,25 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
 	sync_caches_for_execution();
 
 	ret = elf_open_binary_into(&elf, (void *)barebox_base);
-	if (ret) {
-		pr_err("Failed to open ELF binary: %d\n", ret);
-		hang();
-	}
+	if (ret)
+		panic("Failed to open ELF binary: %d\n", ret);
 
 	ret = elf_load_inplace(&elf);
-	if (ret) {
-		pr_err("Failed to relocate ELF: %d\n", ret);
-		hang();
-	}
+	if (ret)
+		panic("Failed to relocate ELF: %d\n", ret);
+
+	pr_debug("ELF entry point: 0x%llx\n", elf.entry);
 
 	/*
-	 * TODO: Add pbl_mmu_setup_from_elf() call when RISC-V PBL
-	 * MMU support is implemented, similar to ARM
+	 * Now that the ELF image is relocated, we know the exact addresses
+	 * of all segments. Set up MMU with proper permissions based on
+	 * ELF segment flags (PF_R/W/X).
 	 */
+	if (IS_ENABLED(CONFIG_MMU)) {
+		ret = pbl_mmu_setup_from_elf(&elf, membase, memsize);
+		if (ret)
+			panic("Failed to setup memory protection from ELF: %d\n", ret);
+	}
 
 	barebox = (void *)elf.entry;
 
diff --git a/arch/riscv/cpu/Makefile b/arch/riscv/cpu/Makefile
index d79bafc6f142a0060d2a86078f0fb969b298ba98..6bf31b574cd6242df6393fbdc8accc08dceb822a 100644
--- a/arch/riscv/cpu/Makefile
+++ b/arch/riscv/cpu/Makefile
@@ -7,3 +7,4 @@ obj-pbl-$(CONFIG_RISCV_M_MODE) += mtrap.o
 obj-pbl-$(CONFIG_RISCV_S_MODE) += strap.o
 obj-pbl-y += interrupts.o
 endif
+obj-pbl-$(CONFIG_MMU) += mmu.o
diff --git a/arch/riscv/cpu/mmu.c b/arch/riscv/cpu/mmu.c
new file mode 100644
index 0000000000000000000000000000000000000000..1fd5f54c14afbf53e80bd6b9c93a9e84186fc544
--- /dev/null
+++ b/arch/riscv/cpu/mmu.c
@@ -0,0 +1,387 @@
+// SPDX-License-Identifier: GPL-2.0-only
+// SPDX-FileCopyrightText: 2026 Sascha Hauer <s.hauer@pengutronix.de>, Pengutronix
+
+#define pr_fmt(fmt) "mmu: " fmt
+
+#include <common.h>
+#include <init.h>
+#include <mmu.h>
+#include <errno.h>
+#include <linux/sizes.h>
+#include <linux/bitops.h>
+#include <asm/sections.h>
+#include <asm/csr.h>
+
+#include "mmu.h"
+
+#ifdef __PBL__
+
+/*
+ * Page table storage for early MMU setup in PBL.
+ * Static allocation before BSS is available.
+ */
+static char early_pt_storage[RISCV_EARLY_PAGETABLE_SIZE] __aligned(RISCV_PGSIZE);
+static unsigned int early_pt_idx;
+
+/*
+ * Allocate a page table from the early PBL storage
+ */
+static pte_t *alloc_pte(void)
+{
+	pte_t *pt;
+
+	if ((early_pt_idx + 1) * RISCV_PGSIZE >= RISCV_EARLY_PAGETABLE_SIZE) {
+		pr_err("Out of early page table memory (need more than %d KB)\n",
+		       RISCV_EARLY_PAGETABLE_SIZE / 1024);
+		hang();
+	}
+
+	pt = (pte_t *)(early_pt_storage + early_pt_idx * RISCV_PGSIZE);
+	early_pt_idx++;
+
+	/* Clear the page table */
+	memset(pt, 0, RISCV_PGSIZE);
+
+	return pt;
+}
+
+/*
+ * split_pte - Split a megapage/gigapage PTE into a page table
+ * @pte: Pointer to the PTE to split
+ * @level: Page table level of @pte (0-1 for Sv39; a 4KB leaf cannot be split)
+ *
+ * This function takes a leaf PTE (megapage/gigapage) and converts it into
+ * a page table pointer with 512 entries, each covering 1/512th of the
+ * original range with identical permissions.
+ *
+ * Example: A 2MB megapage at Level 1 becomes a Level 2 page table with
+ * 512 × 4KB pages, all with the same R/W/X attributes.
+ */
+static void split_pte(pte_t *pte, int level)
+{
+	pte_t old_pte = *pte;
+	pte_t *new_table;
+	pte_t phys_base;
+	pte_t attrs;
+	unsigned long granularity;
+	int i;
+
+	/* If already a table pointer (no RWX bits), nothing to do */
+	if (!(*pte & (PTE_R | PTE_W | PTE_X)))
+		return;
+
+	/* Allocate new page table (512 entries × 8 bytes = 4KB) */
+	new_table = alloc_pte();
+
+	/* Extract physical base address from old PTE */
+	phys_base = (old_pte >> PTE_PPN_SHIFT) << RISCV_PGSHIFT;
+
+	/* Extract permission attributes to replicate */
+	attrs = old_pte & (PTE_R | PTE_W | PTE_X | PTE_A | PTE_D | PTE_U | PTE_G);
+
+	/*
+	 * Calculate granularity of child level.
+	 * Level 0 (1GB) → Level 1 (2MB): granularity = 2MB = 1 << 21
+	 * Level 1 (2MB) → Level 2 (4KB): granularity = 4KB = 1 << 12
+	 *
+	 * Formula: granularity = 1 << (12 + 9 * (Levels - 2 - level))
+	 * For Sv39 (3 levels):
+	 *   level=0: 1 << (12 + 9*1) = 2MB
+	 *   level=1: 1 << (12 + 9*0) = 4KB
+	 */
+	granularity = 1UL << (RISCV_PGSHIFT + RISCV_PGLEVEL_BITS *
+			      (RISCV_PGTABLE_LEVELS - 2 - level));
+
+	/* Populate new table: replicate old mapping across 512 entries */
+	for (i = 0; i < RISCV_PTE_ENTRIES; i++) {
+		unsigned long new_phys = phys_base + (i * granularity);
+		pte_t new_pte = ((new_phys >> RISCV_PGSHIFT) << PTE_PPN_SHIFT) |
+				attrs | PTE_V;
+		new_table[i] = new_pte;
+	}
+
+	/*
+	 * Replace old leaf PTE with table pointer.
+	 * No RWX bits = pointer to next level.
+	 */
+	*pte = (((unsigned long)new_table >> RISCV_PGSHIFT) << PTE_PPN_SHIFT) | PTE_V;
+
+	pr_debug("Split level %d PTE at phys=0x%llx (granularity=%lu KB)\n",
+		 level, (unsigned long long)phys_base, granularity / 1024);
+}
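The PPN arithmetic of the split can be checked on its own. The sketch below recomputes one child PTE of a split mapping using the Sv39 constants from this file; `split_child_pte` is a hypothetical helper for illustration, not part of the patch.

```c
#include <stdint.h>

/* Sv39 constants as used in this file */
#define PGSHIFT		12
#define PPN_SHIFT	10
#define PTE_VALID	(1UL << 0)
#define PTE_READ	(1UL << 1)
#define PTE_WRITE	(1UL << 2)
#define PTE_EXEC	(1UL << 3)

/*
 * Recompute child @i of a split leaf PTE: it covers
 * phys_base + i * granularity with the parent's permissions.
 */
uint64_t split_child_pte(uint64_t parent, unsigned int i,
			 unsigned long granularity)
{
	uint64_t phys_base = (parent >> PPN_SHIFT) << PGSHIFT;
	uint64_t attrs = parent & (PTE_READ | PTE_WRITE | PTE_EXEC);
	uint64_t phys = phys_base + (uint64_t)i * granularity;

	return ((phys >> PGSHIFT) << PPN_SHIFT) | attrs | PTE_VALID;
}
```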
+
+/*
+ * Get the root page table base
+ */
+static pte_t *get_ttb(void)
+{
+	return (pte_t *)early_pt_storage;
+}
+
+/*
+ * Convert maptype flags to PTE permission bits
+ */
+static unsigned long flags_to_pte(maptype_t flags)
+{
+	unsigned long pte = PTE_V;  /* Valid bit always set */
+
+	/*
+	 * Map barebox memory types to RISC-V PTE flags:
+	 * - ARCH_MAP_CACHED_RWX: read + write + execute (early boot, full RAM access)
+	 * - MAP_CODE: read + execute (text sections)
+	 * - MAP_CACHED_RO: read only (rodata sections)
+	 * - MAP_CACHED: read + write (data/bss sections)
+	 * - MAP_UNCACHED: read + write, uncached (device memory)
+	 */
+	switch (flags & MAP_TYPE_MASK) {
+	case ARCH_MAP_CACHED_RWX:
+		/* Full access for early boot: R + W + X */
+		pte |= PTE_R | PTE_W | PTE_X;
+		break;
+	case MAP_CACHED_RO:
+		/* Read-only data: R, no W, no X */
+		pte |= PTE_R;
+		break;
+	case MAP_CODE:
+		/* Code: R + X, no W */
+		pte |= PTE_R | PTE_X;
+		break;
+	case MAP_CACHED: /* TODO: implement */
+	case MAP_UNCACHED:
+	default:
+		/* Data or uncached: R + W, no X */
+		pte |= PTE_R | PTE_W;
+		break;
+	}
+
+	/* Set accessed and dirty bits to avoid hardware updates */
+	pte |= PTE_A | PTE_D;
+
+	return pte;
+}
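The permission mapping above condenses to a small truth table. A minimal standalone model (the `seg_perm` enum and `perm_to_pte` are illustrative names, and only the switch's R/W/X outcomes are reproduced):

```c
#include <stdint.h>

#define PTE_V (1UL << 0)
#define PTE_R (1UL << 1)
#define PTE_W (1UL << 2)
#define PTE_X (1UL << 3)
#define PTE_A (1UL << 6)
#define PTE_D (1UL << 7)

enum seg_perm { SEG_TEXT, SEG_RODATA, SEG_DATA };

/* Mirror of flags_to_pte(): V, A and D are always set; R/W/X depend
 * on the segment kind. */
unsigned long perm_to_pte(enum seg_perm p)
{
	unsigned long pte = PTE_V | PTE_A | PTE_D;

	switch (p) {
	case SEG_TEXT:   return pte | PTE_R | PTE_X;	/* MAP_CODE */
	case SEG_RODATA: return pte | PTE_R;		/* MAP_CACHED_RO */
	default:         return pte | PTE_R | PTE_W;	/* MAP_CACHED */
	}
}
```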
+
+/*
+ * Walk the page tables down to @target_level for @addr, allocating missing
+ * intermediate tables and splitting leaf PTEs as needed. Returns the page
+ * table at @target_level.
+ */
+static pte_t *walk_pgtable(unsigned long addr, int target_level)
+{
+	pte_t *table = get_ttb();
+	int level;
+
+	for (level = 0; level < target_level; level++) {
+		unsigned int index = VPN(addr, RISCV_PGTABLE_LEVELS - 1 - level);
+		pte_t *pte = &table[index];
+
+		if (!(*pte & PTE_V)) {
+			/* Entry not valid - allocate new page table */
+			pte_t *new_table = alloc_pte();
+			pte_t new_pte = ((unsigned long)new_table >> RISCV_PGSHIFT) << PTE_PPN_SHIFT;
+			new_pte |= PTE_V;
+			*pte = new_pte;
+			table = new_table;
+		} else if (*pte & (PTE_R | PTE_W | PTE_X)) {
+			/* This is a leaf PTE - split it before descending */
+			split_pte(pte, level);
+			/* After split, PTE is now a table pointer - follow it */
+			table = (pte_t *)(((*pte >> PTE_PPN_SHIFT) << RISCV_PGSHIFT));
+		} else {
+			/* Valid non-leaf PTE - follow to next level */
+			table = (pte_t *)(((*pte >> PTE_PPN_SHIFT) << RISCV_PGSHIFT));
+		}
+	}
+
+	return table;
+}
+
+/*
+ * Create a page table entry mapping virt -> phys with given permissions
+ */
+static void create_pte(unsigned long virt, phys_addr_t phys, maptype_t flags)
+{
+	pte_t *table;
+	unsigned int index;
+	pte_t pte;
+
+	/* Walk to leaf level page table */
+	table = walk_pgtable(virt, RISCV_PGTABLE_LEVELS - 1);
+
+	/* Get index for this address at leaf level */
+	index = VPN(virt, 0);
+
+	/* Build PTE: PPN + flags */
+	pte = (phys >> RISCV_PGSHIFT) << PTE_PPN_SHIFT;
+	pte |= flags_to_pte(flags);
+
+	/* Write PTE */
+	table[index] = pte;
+}
+
+/*
+ * create_megapage - Create a 2MB megapage mapping
+ * @virt: Virtual address (should be 2MB-aligned)
+ * @phys: Physical address (should be 2MB-aligned)
+ * @flags: Mapping flags (MAP_CACHED, etc.)
+ *
+ * Creates a leaf PTE at Level 1 covering 2MB. This is identical to a 4KB
+ * PTE except it's placed at Level 1 instead of Level 2, saving page tables.
+ */
+static void create_megapage(unsigned long virt, phys_addr_t phys, maptype_t flags)
+{
+	pte_t *table;
+	unsigned int index;
+	pte_t pte;
+
+	/* Walk to Level 1 (one level above 4KB leaf) */
+	table = walk_pgtable(virt, RISCV_PGTABLE_LEVELS - 2);
+
+	/* Get VPN[1] index for this address at Level 1 */
+	index = VPN(virt, 1);
+
+	/* Build leaf PTE at Level 1: PPN + RWX flags make it a megapage */
+	pte = (phys >> RISCV_PGSHIFT) << PTE_PPN_SHIFT;
+	pte |= flags_to_pte(flags);
+
+	/* Write megapage PTE */
+	table[index] = pte;
+}
+
+/*
+ * pbl_remap_range - Remap a virtual address range with specified permissions
+ *
+ * This is called by the portable pbl/mmu.c code after ELF relocation to set up
+ * proper memory protection based on ELF segment flags.
+ */
+void pbl_remap_range(void *virt, phys_addr_t phys, size_t size, maptype_t flags)
+{
+	unsigned long addr = (unsigned long)virt;
+	unsigned long end = addr + size;
+
+	pr_debug("Remapping 0x%08lx-0x%08lx -> 0x%08llx (flags=0x%x)\n",
+		 addr, end, (unsigned long long)phys, flags);
+
+	/* Align to page boundaries */
+	addr &= ~(RISCV_PGSIZE - 1);
+	end = ALIGN(end, RISCV_PGSIZE);
+
+	/* Create page table entries for each page in the range */
+	while (addr < end) {
+		create_pte(addr, phys, flags);
+		addr += RISCV_PGSIZE;
+		phys += RISCV_PGSIZE;
+	}
+
+	/* Flush TLB for the remapped range */
+	sfence_vma();
+}
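The boundary handling at the top of the loop can be sketched as two helpers (hypothetical names, shown with a hard-coded 4KB page size): the start address is rounded down and the end rounded up, so a partial page at either end is still fully remapped.

```c
#define PAGE_SZ 4096UL

/* Round an address down to the start of its page. */
unsigned long page_floor(unsigned long addr)
{
	return addr & ~(PAGE_SZ - 1);
}

/* Round an address up to the next page boundary (no-op if aligned). */
unsigned long page_ceil(unsigned long addr)
{
	return (addr + PAGE_SZ - 1) & ~(PAGE_SZ - 1);
}
```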
+
+/*
+ * mmu_early_enable - Set up initial MMU with identity mapping
+ *
+ * Called before barebox decompression to enable caching for faster decompression.
+ * Creates a simple identity map of all RAM with RWX permissions.
+ */
+void mmu_early_enable(unsigned long membase, unsigned long memsize,
+		      unsigned long barebox_base)
+{
+	unsigned long addr;
+	unsigned long end = membase + memsize;
+	unsigned long satp;
+
+	pr_debug("Enabling MMU: mem=0x%08lx-0x%08lx barebox=0x%08lx\n",
+		 membase, end, barebox_base);
+
+	/* Reset page table allocator */
+	early_pt_idx = 0;
+
+	/* Allocate root page table */
+	(void)alloc_pte();
+
+	pr_debug("Creating flat identity mapping...\n");
+
+	/*
+	 * Create a flat identity mapping of the lower address space as uncached.
+	 * This ensures I/O devices (UART, etc.) are accessible after MMU is enabled.
+	 * RV64: Map lower 4GB using 2MB megapages (2048 entries).
+	 * RV32: Map entire 4GB using 4MB superpages (1024 entries in root table).
+	 */
+	addr = 0;
+	do {
+		create_megapage(addr, addr, MAP_UNCACHED);
+		addr += RISCV_L1_SIZE;
+	} while (lower_32_bits(addr) != 0);  /* stop at 4GB; RV32 wraps addr to 0 */
+
+	/*
+	 * Remap RAM as cached with RWX permissions using superpages.
+	 * This overwrites the uncached mappings for RAM regions, providing
+	 * better performance. Later, pbl_mmu_setup_from_elf() will split
+	 * superpages as needed to set fine-grained permissions based on ELF segments.
+	 */
+	pr_debug("Remapping RAM 0x%08lx-0x%08lx as cached RWX...\n", membase, end);
+	for (addr = membase; addr < end; addr += RISCV_L1_SIZE)
+		create_megapage(addr, addr, ARCH_MAP_CACHED_RWX);
+
+	pr_debug("Page table setup complete, used %lu KB\n",
+		 (early_pt_idx * RISCV_PGSIZE) / 1024);
+
+	/*
+	 * Enable MMU by setting SATP CSR:
+	 * - MODE field: Sv39 (RV64) or Sv32 (RV32)
+	 * - ASID: 0 (no address space ID)
+	 * - PPN: physical address of root page table
+	 */
+	satp = SATP_MODE | (((unsigned long)get_ttb() >> RISCV_PGSHIFT) & SATP_PPN_MASK);
+
+	pr_debug("Enabling MMU: SATP=0x%08lx\n", satp);
+
+	/* Synchronize before enabling MMU */
+	sfence_vma();
+
+	/* Enable MMU */
+	csr_write(satp, satp);
+
+	/* Synchronize after enabling MMU */
+	sfence_vma();
+
+	pr_debug("MMU enabled with %lu %spages for RAM\n",
+		 (memsize / RISCV_L1_SIZE),
+		 IS_ENABLED(CONFIG_64BIT) ? "2MB mega" : "4MB super");
+}
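The SATP composition at the end can be reproduced standalone. The sketch below uses the Sv39 layout (MODE=8 in bits 63:60, PPN in bits 43:0); `make_satp_sv39` is an illustrative helper, not barebox code.

```c
#include <stdint.h>

/* SATP layout for Sv39 (RV64) */
#define SATP_MODE_SV39	(8ULL << 60)
#define SATP_PPN_MASK	((1ULL << 44) - 1)
#define PAGE_SHIFT	12

/* Compose the SATP value for a root page table at @root_phys. */
uint64_t make_satp_sv39(uint64_t root_phys)
{
	return SATP_MODE_SV39 | ((root_phys >> PAGE_SHIFT) & SATP_PPN_MASK);
}
```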
+
+#else /* !__PBL__ */
+
+/*
+ * arch_remap_range - Remap a virtual address range (barebox proper)
+ *
+ * This is the non-PBL version used in barebox proper after full relocation.
+ * Currently provides basic remapping support. For full MMU management in
+ * barebox proper, this would need to be extended with:
+ * - Dynamic page table allocation
+ * - Cache flushing for non-cached mappings
+ * - TLB management
+ * - Support for MAP_FAULT (guard pages)
+ */
+int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size,
+		     maptype_t map_type)
+{
+	/*
+	 * For now, only allow identity mappings that match the default
+	 * cached mapping. This is sufficient for most barebox proper use cases
+	 * where the PBL has already set up the basic MMU configuration.
+	 *
+	 * TODO: Implement full remapping support for:
+	 * - Non-identity mappings
+	 * - Uncached device memory (MAP_UNCACHED)
+	 * - Guard pages (MAP_FAULT)
+	 */
+	if (phys_addr == virt_to_phys(virt_addr) &&
+	    maptype_is_compatible(map_type, MAP_ARCH_DEFAULT))
+		return 0;
+
+	pr_warn("arch_remap_range: non-identity or non-default mapping not yet supported\n");
+	pr_warn("  virt=0x%p phys=0x%pad size=0x%zx type=0x%x\n",
+		virt_addr, &phys_addr, size, map_type);
+
+	return -ENOSYS;
+}
+
+#endif /* __PBL__ */
diff --git a/arch/riscv/cpu/mmu.h b/arch/riscv/cpu/mmu.h
new file mode 100644
index 0000000000000000000000000000000000000000..0222c97fc1ddc507c7bc2f832df60a21d66d25f3
--- /dev/null
+++ b/arch/riscv/cpu/mmu.h
@@ -0,0 +1,120 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* SPDX-FileCopyrightText: 2026 Sascha Hauer <s.hauer@pengutronix.de>, Pengutronix */
+
+#ifndef __RISCV_CPU_MMU_H
+#define __RISCV_CPU_MMU_H
+
+#include <linux/types.h>
+
+/*
+ * RISC-V MMU constants for Sv39 (RV64) and Sv32 (RV32) page tables
+ */
+
+/* Page table configuration */
+#define RISCV_PGSHIFT		12
+#define RISCV_PGSIZE		(1UL << RISCV_PGSHIFT)  /* 4KB */
+
+#ifdef CONFIG_64BIT
+/* Sv39: 9-bit VPN fields, 512 entries per table */
+#define RISCV_PGLEVEL_BITS	9
+#define RISCV_PGTABLE_ENTRIES	512
+#else
+/* Sv32: 10-bit VPN fields, 1024 entries per table */
+#define RISCV_PGLEVEL_BITS	10
+#define RISCV_PGTABLE_ENTRIES	1024
+#endif
+
+/* Page table entry (PTE) bit definitions */
+#define PTE_V		(1UL << 0)  /* Valid */
+#define PTE_R		(1UL << 1)  /* Read */
+#define PTE_W		(1UL << 2)  /* Write */
+#define PTE_X		(1UL << 3)  /* Execute */
+#define PTE_U		(1UL << 4)  /* User accessible */
+#define PTE_G		(1UL << 5)  /* Global mapping */
+#define PTE_A		(1UL << 6)  /* Accessed */
+#define PTE_D		(1UL << 7)  /* Dirty */
+#define PTE_RSW_MASK	(3UL << 8)  /* Reserved for software */
+
+/* PTE physical page number (PPN) field position */
+#define PTE_PPN_SHIFT	10
+
+#ifdef CONFIG_64BIT
+/*
+ * Sv39: 39-bit virtual addressing, 3-level page tables
+ * Virtual address format: [38:30] VPN[2], [29:21] VPN[1], [20:12] VPN[0], [11:0] offset
+ */
+#define RISCV_PGTABLE_LEVELS	3
+#define VA_BITS			39
+#else
+/*
+ * Sv32: 32-bit virtual addressing, 2-level page tables
+ * Virtual address format: [31:22] VPN[1], [21:12] VPN[0], [11:0] offset
+ */
+#define RISCV_PGTABLE_LEVELS	2
+#define VA_BITS			32
+#endif
+
+/* SATP register fields */
+#ifdef CONFIG_64BIT
+#define SATP_PPN_MASK		((1ULL << 44) - 1)  /* Physical page number (Sv39) */
+#else
+#define SATP_PPN_MASK		((1UL << 22) - 1)   /* Physical page number (Sv32) */
+#endif
+
+/* Extract VPN (Virtual Page Number) from virtual address */
+#define VPN_MASK		((1UL << RISCV_PGLEVEL_BITS) - 1)
+#define VPN(addr, level)	(((addr) >> (RISCV_PGSHIFT + (level) * RISCV_PGLEVEL_BITS)) & VPN_MASK)
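The index extraction can be illustrated as a function form of the macro above, instantiated with the Sv39 parameters (9-bit fields above a 12-bit page offset):

```c
#include <stdint.h>

/* Sv39 parameters */
#define PG_SHIFT	12
#define LEVEL_BITS	9
#define INDEX_MASK	((1UL << LEVEL_BITS) - 1)

/* Extract the page table index for @addr at @level (0 = leaf VPN[0]). */
unsigned int vpn_index(uint64_t addr, int level)
{
	return (addr >> (PG_SHIFT + level * LEVEL_BITS)) & INDEX_MASK;
}
```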
+
+/* RISC-V page sizes by level */
+#ifdef CONFIG_64BIT
+/* Sv39: 3-level page tables */
+#define RISCV_L2_SHIFT		30	/* 1GB gigapages (Level 0 in Sv39) */
+#define RISCV_L1_SHIFT		21	/* 2MB megapages (Level 1 in Sv39) */
+#define RISCV_L0_SHIFT		12	/* 4KB pages (Level 2 in Sv39) */
+#else
+/* Sv32: 2-level page tables */
+#define RISCV_L1_SHIFT		22	/* 4MB superpages (Level 0 in Sv32) */
+#define RISCV_L0_SHIFT		12	/* 4KB pages (Level 1 in Sv32) */
+#endif
+
+#ifdef CONFIG_64BIT
+#define RISCV_L2_SIZE		(1UL << RISCV_L2_SHIFT)	/* 1GB (RV64 only) */
+#endif
+#define RISCV_L1_SIZE		(1UL << RISCV_L1_SHIFT)	/* 2MB (RV64) or 4MB (RV32) */
+#define RISCV_L0_SIZE		(1UL << RISCV_L0_SHIFT)	/* 4KB */
+
+/* Number of entries per page table (alias for RISCV_PGTABLE_ENTRIES) */
+#define RISCV_PTE_ENTRIES	RISCV_PGTABLE_ENTRIES
+
+/* PTE type - 64-bit on RV64, 32-bit on RV32 */
+#ifdef CONFIG_64BIT
+typedef uint64_t pte_t;
+#else
+typedef uint32_t pte_t;
+#endif
+
+/* Early page table allocation size (PBL) */
+#ifdef CONFIG_64BIT
+/* Sv39: 3 levels, allocate space for root + worst case intermediate tables */
+#define RISCV_EARLY_PAGETABLE_SIZE	(64 * 1024)  /* 64KB */
+#else
+/* Sv32: 2 levels, smaller allocation */
+#define RISCV_EARLY_PAGETABLE_SIZE	(32 * 1024)  /* 32KB */
+#endif
+
+#ifndef __ASSEMBLY__
+
+/* SFENCE.VMA - Synchronize updates to page tables */
+static inline void sfence_vma(void)
+{
+	__asm__ __volatile__ ("sfence.vma" : : : "memory");
+}
+
+static inline void sfence_vma_addr(unsigned long addr)
+{
+	__asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory");
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __RISCV_CPU_MMU_H */
diff --git a/arch/riscv/include/asm/asm.h b/arch/riscv/include/asm/asm.h
index 9c992a88d858fe6105f16978849f3a564d42b85f..23f60b615ea91e680c57b7b65b868260a761de5e 100644
--- a/arch/riscv/include/asm/asm.h
+++ b/arch/riscv/include/asm/asm.h
@@ -9,7 +9,8 @@
 #ifdef __ASSEMBLY__
 #define __ASM_STR(x)	x
 #else
-#define __ASM_STR(x)	#x
+#define __ASM_STR_HELPER(x)	#x
+#define __ASM_STR(x)	__ASM_STR_HELPER(x)
 #endif
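The helper indirection added above matters because the `#` operator stringifies its argument as written, without macro expansion; the extra level forces expansion first. A minimal sketch (`CSR_SATP` is an example value chosen for illustration):

```c
#include <string.h>

#define STR_DIRECT(x)	#x		/* stringifies the token as written */
#define STR_HELPER(x)	#x
#define STR_EXPAND(x)	STR_HELPER(x)	/* expands x first, then stringifies */

#define CSR_SATP 0x180			/* example value, for illustration */

const char *without_helper(void) { return STR_DIRECT(CSR_SATP); }
const char *with_helper(void)    { return STR_EXPAND(CSR_SATP); }
```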
 
 #if __riscv_xlen == 64
diff --git a/arch/riscv/include/asm/mmu.h b/arch/riscv/include/asm/mmu.h
index 1c2646ebb3393f120ad7208109372fef8bc32e81..bad64fd3b9c1ca685a18128024b8005262640e5b 100644
--- a/arch/riscv/include/asm/mmu.h
+++ b/arch/riscv/include/asm/mmu.h
@@ -3,6 +3,50 @@
 #ifndef __ASM_MMU_H
 #define __ASM_MMU_H
 
+#include <linux/types.h>
+
+/*
+ * RISC-V supports memory protection through two mechanisms:
+ * - S-mode: Virtual memory with page tables (MMU)
+ * - M-mode: Physical Memory Protection (PMP) regions
+ */
+
+#ifdef CONFIG_MMU
+#define ARCH_HAS_REMAP
+#define MAP_ARCH_DEFAULT MAP_CACHED
+
+/* Architecture-specific memory type flags */
+#define ARCH_MAP_CACHED_RWX		MAP_ARCH(2)	/* Cached, RWX (early boot) */
+#define ARCH_MAP_FLAG_PAGEWISE		(1 << 16)	/* Force page-wise mapping */
+
+#ifdef __PBL__
+/*
+ * PBL remap function - used by pbl/mmu.c to apply ELF segment permissions.
+ * Implementation is in arch/riscv/cpu/mmu.c (S-mode) or pmp.c (M-mode).
+ */
+void pbl_remap_range(void *virt, phys_addr_t phys, size_t size, maptype_t flags);
+
+/*
+ * Early MMU/PMP setup - called before decompression for performance.
+ * S-mode: Sets up basic page tables and enables MMU via SATP CSR.
+ * M-mode: Configures initial PMP regions.
+ */
+void mmu_early_enable(unsigned long membase, unsigned long memsize,
+		      unsigned long barebox_base);
+#endif /* __PBL__ */
+
+/*
+ * Remap a virtual address range with specified memory type (barebox proper).
+ * Used by the generic remap infrastructure after barebox is fully relocated.
+ * Implementation is in arch/riscv/cpu/mmu.c (S-mode) or pmp.c (M-mode).
+ */
+int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size,
+		     maptype_t map_type);
+
+#else
 #define MAP_ARCH_DEFAULT MAP_UNCACHED
+#endif
+
+#include <mmu.h>
 
 #endif /* __ASM_MMU_H */

-- 
2.47.3




^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v3 22/23] ARM: cleanup barebox proper entry
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
                   ` (20 preceding siblings ...)
  2026-01-08 15:50 ` [PATCH v3 21/23] riscv: add ELF segment-based memory protection with MMU Sascha Hauer
@ 2026-01-08 15:50 ` Sascha Hauer
  2026-01-08 15:50 ` [PATCH v3 23/23] riscv: " Sascha Hauer
  22 siblings, 0 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:50 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5

As barebox proper is now an ELF file we no longer need to map the entry
function to the start of the image. Just link it to wherever the linker
wants it and drop the text_entry section. Also, remove the start()
function and set the ELF entry to barebox_non_pbl_start() directly.

While at it also remove the bare_init stuff from the barebox proper
linker script as it's only relevant to the PBL linker script which
is a separate script.

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 arch/arm/cpu/start.c         | 11 -----------
 arch/arm/lib32/barebox.lds.S |  7 +------
 arch/arm/lib64/barebox.lds.S |  7 +------
 3 files changed, 2 insertions(+), 23 deletions(-)

diff --git a/arch/arm/cpu/start.c b/arch/arm/cpu/start.c
index 2498bdb894c261f587af6542ea4574c497a6edc0..c2f14736dac5948d0b751f2307690420ee4c23ca 100644
--- a/arch/arm/cpu/start.c
+++ b/arch/arm/cpu/start.c
@@ -189,14 +189,3 @@ __noreturn void barebox_non_pbl_start(unsigned long membase,
 
 	start_barebox();
 }
-
-void start(unsigned long membase, unsigned long memsize, struct handoff_data *hd);
-/*
- * First function in the uncompressed image. We get here from
- * the pbl. The stack already has been set up by the pbl.
- */
-void __section(.text_entry) start(unsigned long membase,
-		unsigned long memsize, struct handoff_data *hd)
-{
-	barebox_non_pbl_start(membase, memsize, hd);
-}
diff --git a/arch/arm/lib32/barebox.lds.S b/arch/arm/lib32/barebox.lds.S
index 2fb43b4619ff29d8d21dd579d3a3002b7134ff71..e1a9495440b3330811561db2e8ea92149756ff8a 100644
--- a/arch/arm/lib32/barebox.lds.S
+++ b/arch/arm/lib32/barebox.lds.S
@@ -6,7 +6,7 @@
 
 OUTPUT_FORMAT(BAREBOX_OUTPUT_FORMAT)
 OUTPUT_ARCH(BAREBOX_OUTPUT_ARCH)
-ENTRY(start)
+ENTRY(barebox_non_pbl_start)
 
 PHDRS
 {
@@ -27,17 +27,12 @@ SECTIONS
 	.text      :
 	{
 		_stext = .;
-		*(.text_entry*)
-		__bare_init_start = .;
-		*(.text_bare_init*)
-		__bare_init_end = .;
 		. = ALIGN(0x20);
 		__exceptions_start = .;
 		KEEP(*(.text_exceptions*))
 		__exceptions_stop = .;
 		*(.text*)
 	} :text
-	BAREBOX_BARE_INIT_SIZE
 
 	. = ALIGN(4096);
 	__start_rodata = .;
diff --git a/arch/arm/lib64/barebox.lds.S b/arch/arm/lib64/barebox.lds.S
index 71f677a917851270e09c6d439fe5cbe4b6b41034..2255eaf503eae3d915f51d5d7ba2e6cdc10f711a 100644
--- a/arch/arm/lib64/barebox.lds.S
+++ b/arch/arm/lib64/barebox.lds.S
@@ -5,7 +5,7 @@
 
 OUTPUT_FORMAT(BAREBOX_OUTPUT_FORMAT)
 OUTPUT_ARCH(BAREBOX_OUTPUT_ARCH)
-ENTRY(start)
+ENTRY(barebox_non_pbl_start)
 
 PHDRS
 {
@@ -26,13 +26,8 @@ SECTIONS
 	.text      :
 	{
 		_stext = .;
-		*(.text_entry*)
-		__bare_init_start = .;
-		*(.text_bare_init*)
-		__bare_init_end = .;
 		*(.text*)
 	} :text
-	BAREBOX_BARE_INIT_SIZE
 
 	. = ALIGN(4096);
 	__start_rodata = .;

-- 
2.47.3





* [PATCH v3 23/23] riscv: cleanup barebox proper entry
  2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
                   ` (21 preceding siblings ...)
  2026-01-08 15:50 ` [PATCH v3 22/23] ARM: cleanup barebox proper entry Sascha Hauer
@ 2026-01-08 15:50 ` Sascha Hauer
  22 siblings, 0 replies; 25+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:50 UTC (permalink / raw)
  To: BAREBOX; +Cc: Claude Sonnet 4.5

As barebox proper is now an ELF file we no longer need to map the entry
function to the start of the image. Just link it to wherever the linker
wants it and drop the text_entry section. Also, remove the start()
function and set the ELF entry to barebox_non_pbl_start() directly.

While at it also remove the bare_init stuff from the barebox proper
linker script as it's only relevant to the PBL linker script which
is a separate script.

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 arch/riscv/boot/start.c      | 13 +------------
 arch/riscv/boot/uncompress.c |  2 +-
 arch/riscv/lib/barebox.lds.S |  7 +------
 3 files changed, 3 insertions(+), 19 deletions(-)

diff --git a/arch/riscv/boot/start.c b/arch/riscv/boot/start.c
index 002ab3eccb4292d8d88a95f8b93163c970cb1d64..15bb91ac1b49ada66375507120256daff6b563f4 100644
--- a/arch/riscv/boot/start.c
+++ b/arch/riscv/boot/start.c
@@ -114,7 +114,7 @@ device_initcall(barebox_memory_areas_init);
  * First function in the uncompressed image. We get here from
  * the pbl. The stack already has been set up by the pbl.
  */
-__noreturn __no_sanitize_address __section(.text_entry)
+__noreturn
 void barebox_non_pbl_start(unsigned long membase, unsigned long memsize,
 			   void *boarddata)
 {
@@ -177,14 +177,3 @@ void barebox_non_pbl_start(unsigned long membase, unsigned long memsize,
 
 	start_barebox();
 }
-
-void start(unsigned long membase, unsigned long memsize, void *boarddata);
-/*
- * First function in the uncompressed image. We get here from
- * the pbl. The stack already has been set up by the pbl.
- */
-void __no_sanitize_address __section(.text_entry) start(unsigned long membase,
-		unsigned long memsize, void *boarddata)
-{
-	barebox_non_pbl_start(membase, memsize, boarddata);
-}
diff --git a/arch/riscv/boot/uncompress.c b/arch/riscv/boot/uncompress.c
index d34926c7eba9439aa8794cd0cd6e7d80a774e4e6..73b9cab7ea39bfbdd2184e074d07b0df56ab8b0d 100644
--- a/arch/riscv/boot/uncompress.c
+++ b/arch/riscv/boot/uncompress.c
@@ -102,7 +102,7 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
 			panic("Failed to setup memory protection from ELF: %d\n", ret);
 	}
 
-	barebox = (void *)elf.entry;
+	barebox = (void *)(unsigned long)elf.entry;
 
 	pr_debug("jumping to uncompressed image at 0x%p. dtb=0x%p\n", barebox, fdt);
 
diff --git a/arch/riscv/lib/barebox.lds.S b/arch/riscv/lib/barebox.lds.S
index 77f854e73e2013ca332a0a94fd1deaa2b9978a1d..1435ce3318a466d875d583bab876f63a2368ae69 100644
--- a/arch/riscv/lib/barebox.lds.S
+++ b/arch/riscv/lib/barebox.lds.S
@@ -14,7 +14,7 @@
 #include <asm/barebox.lds.h>
 
 OUTPUT_ARCH(BAREBOX_OUTPUT_ARCH)
-ENTRY(start)
+ENTRY(barebox_non_pbl_start)
 OUTPUT_FORMAT(BAREBOX_OUTPUT_FORMAT)
 
 PHDRS
@@ -36,16 +36,11 @@ SECTIONS
 	.text      :
 	{
 		_stext = .;
-		*(.text_entry*)
-		__bare_init_start = .;
-		*(.text_bare_init*)
-		__bare_init_end = .;
 		__exceptions_start = .;
 		KEEP(*(.text_exceptions*))
 		__exceptions_stop = .;
 		*(.text*)
 	} :text
-	BAREBOX_BARE_INIT_SIZE
 
 	. = ALIGN(4096);
 	__start_rodata = .;

-- 
2.47.3





* Re: [PATCH v3 01/23] Makefile.compiler: add objcopy-option
  2026-01-08 15:49 ` [PATCH v3 01/23] Makefile.compiler: add objcopy-option Sascha Hauer
@ 2026-01-08 16:25   ` Ahmad Fatoum
  0 siblings, 0 replies; 25+ messages in thread
From: Ahmad Fatoum @ 2026-01-08 16:25 UTC (permalink / raw)
  To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5

On 1/8/26 4:49 PM, Sascha Hauer wrote:
> Similar to other *-option macros this one is for testing if objcopy
> flags are supported.
> 
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>

Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>

> ---
>  scripts/Makefile.compiler | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/scripts/Makefile.compiler b/scripts/Makefile.compiler
> index 1d34239b3bbaf49dede33f67d69ebeb511b5dc28..f2fdddd07bbef9ea90da219602086b5be591a75d 100644
> --- a/scripts/Makefile.compiler
> +++ b/scripts/Makefile.compiler
> @@ -69,6 +69,11 @@ cc-ifversion = $(shell [ $(call cc-version, $(CC)) $(1) $(2) ] && echo $(3))
>  ld-option = $(call try-run,\
>  	$(CC) -x c /dev/null -c -o "$$TMPO" ; $(LD) $(1) "$$TMPO" -o "$$TMP",$(1),$(2))
>  
> +# objcopy-option
> +# Usage: KBUILD_LDFLAGS += $(call objcopy-option,--strip-section-headers,--strip-all)
> +objcopy-option = $(call try-run,\
> +        $(CC) -x c /dev/null -c -o "$$TMPO"; $(OBJCOPY) $(1) "$$TMPO" "$$TMP",$(1),$(2))
> +
>  # Prefix -I with $(srctree) if it is not an absolute path.
>  # skip if -I has no parameter
>  addtree = $(if $(patsubst -I%,%,$(1)), \
> 

-- 
Pengutronix e.K.                  |                             |
Steuerwalder Str. 21              | http://www.pengutronix.de/  |
31137 Hildesheim, Germany         | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686  | Fax:   +49-5121-206917-5555 |





end of thread, other threads:[~2026-01-08 16:26 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-01-08 15:49 [PATCH v3 00/23] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
2026-01-08 15:49 ` [PATCH v3 01/23] Makefile.compiler: add objcopy-option Sascha Hauer
2026-01-08 16:25   ` Ahmad Fatoum
2026-01-08 15:49 ` [PATCH v3 02/23] elf: only accept images matching the native ELF_CLASS Sascha Hauer
2026-01-08 15:50 ` [PATCH v3 03/23] elf: build for PBL as well Sascha Hauer
2026-01-08 15:50 ` [PATCH v3 04/23] elf: add dynamic relocation support Sascha Hauer
2026-01-08 15:50 ` [PATCH v3 05/23] ARM: implement elf_apply_relocations() for ELF " Sascha Hauer
2026-01-08 15:50 ` [PATCH v3 06/23] riscv: define generic relocate_image Sascha Hauer
2026-01-08 15:50 ` [PATCH v3 07/23] riscv: implement elf_apply_relocations() for ELF relocation support Sascha Hauer
2026-01-08 15:50 ` [PATCH v3 08/23] elf: implement elf_load_inplace() Sascha Hauer
2026-01-08 15:50 ` [PATCH v3 09/23] elf: create elf_open_binary_into() Sascha Hauer
2026-01-08 15:50 ` [PATCH v3 10/23] Makefile: add vmbarebox build target Sascha Hauer
2026-01-08 15:50 ` [PATCH v3 11/23] PBL: allow to link ELF image into PBL Sascha Hauer
2026-01-08 15:50 ` [PATCH v3 12/23] mmu: add MAP_CACHED_RO mapping type Sascha Hauer
2026-01-08 15:50 ` [PATCH v3 13/23] mmu: introduce pbl_remap_range() Sascha Hauer
2026-01-08 15:50 ` [PATCH v3 14/23] ARM: drop arm_fixup_vectors() Sascha Hauer
2026-01-08 15:50 ` [PATCH v3 15/23] ARM: linker script: create separate PT_LOAD segments for text, rodata, and data Sascha Hauer
2026-01-08 15:50 ` [PATCH v3 16/23] ARM: link ELF image into PBL Sascha Hauer
2026-01-08 15:50 ` [PATCH v3 17/23] ARM: PBL: setup MMU with proper permissions from ELF segments Sascha Hauer
2026-01-08 15:50 ` [PATCH v3 18/23] riscv: linker script: create separate PT_LOAD segments for text, rodata, and data Sascha Hauer
2026-01-08 15:50 ` [PATCH v3 19/23] riscv: link ELF image into PBL Sascha Hauer
2026-01-08 15:50 ` [PATCH v3 20/23] riscv: Allwinner D1: Drop M-Mode Sascha Hauer
2026-01-08 15:50 ` [PATCH v3 21/23] riscv: add ELF segment-based memory protection with MMU Sascha Hauer
2026-01-08 15:50 ` [PATCH v3 22/23] ARM: cleanup barebox proper entry Sascha Hauer
2026-01-08 15:50 ` [PATCH v3 23/23] riscv: " Sascha Hauer

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox