* [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations
@ 2026-01-06 12:53 Sascha Hauer
2026-01-06 12:53 ` [PATCH v2 01/21] elf: only accept images matching the native ELF_CLASS Sascha Hauer
` (20 more replies)
0 siblings, 21 replies; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 12:53 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
Until now we linked the raw barebox proper binary into the PBL which
comes with a number of disadvantages. We rely on self-modifying code
to in barebox proper (relocate_to_current_adr()) and have no initialized
bss segment (setup_c()). Also we can only mark the .text and .rodata as
readonly during runtime of barebox proper.
This series overcomes this by linking a ELF image into the PBL. This
image is properly layed out, linked and initialized in the PBL. With
this barebox proper has a proper C environment and text/rodata
protection from the start.
As a bonus this series also adds initial MMU support for RISCV, also
based on loading the ELF image and configuring the MMU from the PBL.
I lost track about the review feedback for v1, partly because I asked
Claude to integrate the review feedback for me, which it did, but not
completely. Nevertheless I think this series has enough changes for now
so that it deserves a second look.
What I didn't care about yet is that we found out that neither ARM nor
RiscV use any absolute relocations, so we might be able to remove
support for them, make sure these are not emitted during compile time,
or properly test/fix them when we discover that they are indeed needed.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
Changes in v2:
- rebased on Ahmads patches and with it reused existing relocate_image()
- hopefully integrated all feedback for v1
- Link to v1: https://lore.barebox.org/20260105-pbl-load-elf-v1-0-e97853f98232@pengutronix.de
---
Sascha Hauer (21):
elf: only accept images matching the native ELF_CLASS
elf: build for PBL as well
elf: add dynamic relocation support
ARM: implement elf_apply_relocations() for ELF relocation support
riscv: define generic relocate_image
riscv: implement elf_apply_relocations() for ELF relocation support
elf: implement elf_load_inplace()
elf: create elf_open_binary_into()
Makefile: add barebox.elf build target
PBL: allow to link ELF image into PBL
mmu: add MAP_CACHED_RO mapping type
mmu: introduce pbl_remap_range()
ARM: use relative jumps in exception table
ARM: exceptions: make in-binary exception table const
ARM: linker script: create separate PT_LOAD segments for text, rodata, and data
ARM: link ELF image into PBL
ARM: PBL: setup MMU with proper permissions from ELF segments
riscv: link ELF image into PBL
riscv: linker script: create separate PT_LOAD segments for text, rodata, and data
riscv: Allwinner D1: Drop M-Mode
riscv: add ELF segment-based memory protection with MMU
Makefile | 17 +-
arch/arm/Kconfig | 2 +
arch/arm/cpu/exceptions_32.S | 54 ++---
arch/arm/cpu/interrupts_32.c | 45 +++-
arch/arm/cpu/mmu-common.c | 14 +-
arch/arm/cpu/mmu-common.h | 3 +-
arch/arm/cpu/mmu_32.c | 14 +-
arch/arm/cpu/mmu_64.c | 10 +-
arch/arm/cpu/no-mmu.c | 11 +-
arch/arm/cpu/start.c | 11 +-
arch/arm/cpu/uncompress.c | 40 +++-
arch/arm/include/asm/barebox-arm.h | 4 +-
arch/arm/include/asm/barebox.lds.h | 2 +-
arch/arm/include/asm/elf.h | 7 +
arch/arm/include/asm/sections.h | 1 +
arch/arm/lib/pbl.lds.S | 6 +-
arch/arm/lib32/barebox.lds.S | 38 ++-
arch/arm/lib32/reloc.c | 23 ++
arch/arm/lib64/barebox.lds.S | 29 ++-
arch/arm/lib64/reloc.c | 25 +-
arch/riscv/Kconfig | 18 ++
arch/riscv/Kconfig.socs | 1 -
arch/riscv/boot/start.c | 6 -
arch/riscv/boot/uncompress.c | 43 +++-
arch/riscv/cpu/Makefile | 1 +
arch/riscv/cpu/mmu.c | 386 +++++++++++++++++++++++++++++++
arch/riscv/cpu/mmu.h | 144 ++++++++++++
arch/riscv/include/asm/asm.h | 3 +-
arch/riscv/include/asm/mmu.h | 44 ++++
arch/riscv/lib/barebox.lds.S | 38 +--
arch/riscv/lib/reloc.c | 82 +++++--
common/Makefile | 2 +-
common/elf.c | 461 ++++++++++++++++++++++++++++++++++++-
images/Makefile | 2 +-
include/elf.h | 73 ++++++
include/mmu.h | 14 +-
include/pbl/mmu.h | 29 +++
lib/Makefile | 1 +
lib/elf_reloc.c | 15 ++
pbl/Kconfig | 9 +
pbl/Makefile | 1 +
pbl/mmu.c | 111 +++++++++
scripts/prelink-riscv.inc | 9 +-
43 files changed, 1685 insertions(+), 164 deletions(-)
---
base-commit: fd86e9d4215bc4cd6a84af2a8de22f1bb9738faf
change-id: 20251227-pbl-load-elf-cb4cb0ceb7d8
Best regards,
--
Sascha Hauer <s.hauer@pengutronix.de>
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v2 01/21] elf: only accept images matching the native ELF_CLASS
2026-01-06 12:53 [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
@ 2026-01-06 12:53 ` Sascha Hauer
2026-01-06 12:58 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 02/21] elf: build for PBL as well Sascha Hauer
` (19 subsequent siblings)
20 siblings, 1 reply; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 12:53 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
common/elf.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/common/elf.c b/common/elf.c
index 18c541bf827e6077e64c15f62cb4abedc68cf278..a0e67a9353a12779ec841c53db7f6dba47070d8d 100644
--- a/common/elf.c
+++ b/common/elf.c
@@ -213,6 +213,9 @@ static int elf_check_image(struct elf_image *elf, void *buf)
return -ENOEXEC;
}
+ if (elf->class != ELF_CLASS)
+ return -EINVAL;
+
if (!elf_hdr_e_phnum(elf, buf)) {
pr_err("No phdr found.\n");
return -ENOEXEC;
--
2.47.3
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v2 02/21] elf: build for PBL as well
2026-01-06 12:53 [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
2026-01-06 12:53 ` [PATCH v2 01/21] elf: only accept images matching the native ELF_CLASS Sascha Hauer
@ 2026-01-06 12:53 ` Sascha Hauer
2026-01-06 13:26 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 03/21] elf: add dynamic relocation support Sascha Hauer
` (18 subsequent siblings)
20 siblings, 1 reply; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 12:53 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
We'll link barebox proper as an ELF image into the PBL later, so compile
ELF support for PBL as well.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
common/Makefile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/common/Makefile b/common/Makefile
index 36dee5f7a98a7b2bbf67263ee5942520e6ce53e0..b1170ea4a801de2e7d0279edaddbf6f370a053c1 100644
--- a/common/Makefile
+++ b/common/Makefile
@@ -14,7 +14,7 @@ obj-y += misc.o
obj-pbl-y += memsize.o
obj-y += resource.o
obj-pbl-y += bootsource.o
-obj-$(CONFIG_ELF) += elf.o
+obj-pbl-$(CONFIG_ELF) += elf.o
obj-y += restart.o
obj-y += poweroff.o
obj-y += slice.o
--
2.47.3
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v2 03/21] elf: add dynamic relocation support
2026-01-06 12:53 [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
2026-01-06 12:53 ` [PATCH v2 01/21] elf: only accept images matching the native ELF_CLASS Sascha Hauer
2026-01-06 12:53 ` [PATCH v2 02/21] elf: build for PBL as well Sascha Hauer
@ 2026-01-06 12:53 ` Sascha Hauer
2026-01-06 13:51 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 04/21] ARM: implement elf_apply_relocations() for ELF " Sascha Hauer
` (17 subsequent siblings)
20 siblings, 1 reply; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 12:53 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
Add support for applying dynamic relocations to ELF binaries. This allows
loading ET_DYN (position-independent) binaries and ET_EXEC binaries at
custom load addresses.
Key changes:
- Add elf_image.reloc_offset to track offset between vaddr and load address
- Implement elf_compute_load_offset() to calculate relocation offset
- Add elf_set_load_address() API to specify custom load address
- Implement elf_find_dynamic_segment() to locate PT_DYNAMIC
- Add elf_relocate() to apply relocations
- Provide weak default elf_apply_relocations() stub for unsupported architectures
- Add ELF dynamic section accessors
The relocation offset type is unsigned long to properly handle pointer
arithmetic and avoid casting issues.
Architecture-specific implementations should override the weak
elf_apply_relocations() function to handle their relocation types.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
---
common/elf.c | 280 +++++++++++++++++++++++++++++++++++++++++++++++++++++++-
include/elf.h | 64 +++++++++++++
lib/Makefile | 1 +
lib/elf_reloc.c | 15 +++
4 files changed, 355 insertions(+), 5 deletions(-)
diff --git a/common/elf.c b/common/elf.c
index a0e67a9353a12779ec841c53db7f6dba47070d8d..77d4ac7c8545596b6f44751988e5e7f97b98df15 100644
--- a/common/elf.c
+++ b/common/elf.c
@@ -21,6 +21,18 @@ struct elf_segment {
bool is_iomem_region;
};
+static void *elf_get_dest(struct elf_image *elf, void *phdr)
+{
+ void *dst;
+
+ if (elf->reloc_offset)
+ dst = (void *)(unsigned long)(elf->reloc_offset + elf_phdr_p_vaddr(elf, phdr));
+ else
+ dst = (void *)(unsigned long)elf_phdr_p_paddr(elf, phdr);
+
+ return dst;
+}
+
static int elf_request_region(struct elf_image *elf, resource_size_t start,
resource_size_t size, void *phdr)
{
@@ -65,9 +77,63 @@ static void elf_release_regions(struct elf_image *elf)
}
}
+static int elf_compute_load_offset(struct elf_image *elf)
+{
+ void *buf = elf->hdr_buf;
+ void *phdr = buf + elf_hdr_e_phoff(elf, buf);
+ u64 min_vaddr = (u64)-1;
+ u64 min_paddr = (u64)-1;
+ int i;
+
+ /* Find lowest p_vaddr and p_paddr in PT_LOAD segments */
+ for (i = 0; i < elf_hdr_e_phnum(elf, buf); i++) {
+ if (elf_phdr_p_type(elf, phdr) == PT_LOAD) {
+ u64 vaddr = elf_phdr_p_vaddr(elf, phdr);
+ u64 paddr = elf_phdr_p_paddr(elf, phdr);
+
+ if (vaddr < min_vaddr)
+ min_vaddr = vaddr;
+ if (paddr < min_paddr)
+ min_paddr = paddr;
+ }
+ phdr += elf_size_of_phdr(elf);
+ }
+
+ /*
+ * Determine base load address:
+ * 1. If user specified load_address, use it
+ * 2. Otherwise for ET_EXEC, use NULL (segments use p_paddr directly)
+ * 3. For ET_DYN, use lowest p_paddr
+ */
+ if (elf->load_address) {
+ elf->base_load_addr = elf->load_address;
+ } else if (elf->type == ET_EXEC) {
+ elf->base_load_addr = NULL;
+ } else {
+ elf->base_load_addr = (void *)(phys_addr_t)min_paddr;
+ }
+
+ /*
+ * Calculate relocation offset:
+ * - For ET_EXEC with no custom load address: no offset needed
+ * - Otherwise: offset = base_load_addr - lowest_vaddr
+ */
+ if (elf->type == ET_EXEC && !elf->load_address) {
+ elf->reloc_offset = 0;
+ } else {
+ elf->reloc_offset = ((unsigned long)elf->base_load_addr - min_vaddr);
+ }
+
+ pr_debug("ELF load: type=%s, base=%p, offset=%08lx\n",
+ elf->type == ET_EXEC ? "ET_EXEC" : "ET_DYN",
+ elf->base_load_addr, elf->reloc_offset);
+
+ return 0;
+}
+
static int request_elf_segment(struct elf_image *elf, void *phdr)
{
- void *dst = (void *) (phys_addr_t) elf_phdr_p_paddr(elf, phdr);
+ void *dst;
int ret;
u64 p_memsz = elf_phdr_p_memsz(elf, phdr);
@@ -78,6 +144,15 @@ static int request_elf_segment(struct elf_image *elf, void *phdr)
if (!p_memsz)
return 0;
+ /*
+ * Calculate destination address:
+ * - If reloc_offset is set (custom load address or ET_DYN):
+ * dst = reloc_offset + p_vaddr
+ * - Otherwise (ET_EXEC, no custom address):
+ * dst = p_paddr (original behavior)
+ */
+ dst = elf_get_dest(elf, phdr);
+
if (dst < elf->low_addr)
elf->low_addr = dst;
if (dst + p_memsz > elf->high_addr)
@@ -129,7 +204,8 @@ static int load_elf_to_memory(struct elf_image *elf)
p_offset = elf_phdr_p_offset(elf, r->phdr);
p_filesz = elf_phdr_p_filesz(elf, r->phdr);
p_memsz = elf_phdr_p_memsz(elf, r->phdr);
- dst = (void *) (phys_addr_t) elf_phdr_p_paddr(elf, r->phdr);
+
+ dst = elf_get_dest(elf, r->phdr);
pr_debug("Loading phdr offset 0x%llx to 0x%p (%llu bytes)\n",
p_offset, dst, p_filesz);
@@ -173,6 +249,11 @@ static int load_elf_image_segments(struct elf_image *elf)
if (!list_empty(&elf->list))
return -EINVAL;
+ /* Calculate load offset for ET_DYN */
+ ret = elf_compute_load_offset(elf);
+ if (ret)
+ return ret;
+
for (i = 0; i < elf_hdr_e_phnum(elf, buf) ; ++i) {
ret = request_elf_segment(elf, phdr);
if (ret)
@@ -201,6 +282,8 @@ static int load_elf_image_segments(struct elf_image *elf)
static int elf_check_image(struct elf_image *elf, void *buf)
{
+ u16 e_type;
+
if (memcmp(buf, ELFMAG, SELFMAG)) {
pr_err("ELF magic not found.\n");
return -EINVAL;
@@ -208,14 +291,17 @@ static int elf_check_image(struct elf_image *elf, void *buf)
elf->class = ((char *) buf)[EI_CLASS];
- if (elf_hdr_e_type(elf, buf) != ET_EXEC) {
- pr_err("Non EXEC ELF image.\n");
+ e_type = elf_hdr_e_type(elf, buf);
+ if (e_type != ET_EXEC && e_type != ET_DYN) {
+ pr_err("Unsupported ELF type: %u (only ET_EXEC and ET_DYN supported)\n", e_type);
return -ENOEXEC;
}
if (elf->class != ELF_CLASS)
return -EINVAL;
+ elf->type = e_type;
+
if (!elf_hdr_e_phnum(elf, buf)) {
pr_err("No phdr found.\n");
return -ENOEXEC;
@@ -341,9 +427,193 @@ struct elf_image *elf_open(const char *filename)
return elf_check_init(filename);
}
+void elf_set_load_address(struct elf_image *elf, void *addr)
+{
+ elf->load_address = addr;
+}
+
+static void *elf_find_dynamic_segment(struct elf_image *elf)
+{
+ void *buf = elf->hdr_buf;
+ void *phdr = buf + elf_hdr_e_phoff(elf, buf);
+ int i;
+
+ for (i = 0; i < elf_hdr_e_phnum(elf, buf); i++) {
+ if (elf_phdr_p_type(elf, phdr) == PT_DYNAMIC) {
+ u64 offset = elf_phdr_p_offset(elf, phdr);
+
+ /* If loaded from file, PT_DYNAMIC might not be in hdr_buf */
+ if (elf->filename)
+ return elf_get_dest(elf, phdr);
+ else
+ /* Binary in memory, use offset */
+ return elf->hdr_buf + offset;
+ }
+ phdr += elf_size_of_phdr(elf);
+ }
+
+ return NULL; /* No PT_DYNAMIC segment */
+}
+
+/**
+ * elf_parse_dynamic_section - Parse the dynamic section and extract relocation info
+ * @elf: ELF image structure
+ * @dyn_seg: Pointer to the PT_DYNAMIC segment
+ * @rel_out: Output pointer to the relocation table (either REL or RELA)
+ * @relsz_out: Output size of the relocation table in bytes
+ * @is_rela: flag indicating RELA (true) vs REL (false) format is expected
+ *
+ * This is a generic function that works for both 32-bit and 64-bit ELF files,
+ * and handles both REL and RELA relocation formats.
+ *
+ * Returns: 0 on success, -EINVAL on error
+ */
+static int elf_parse_dynamic_section(struct elf_image *elf, const void *dyn_seg,
+ void **rel_out, u64 *relsz_out, bool is_rela)
+{
+ const void *dyn = dyn_seg;
+ void *rel = NULL, *rela = NULL;
+ u64 relsz = 0, relasz = 0;
+ u64 relent = 0, relaent = 0;
+ phys_addr_t base = (phys_addr_t)elf->reloc_offset;
+ size_t expected_rel_size, expected_rela_size;
+
+ /* Calculate expected sizes based on ELF class */
+ if (ELF_CLASS == ELFCLASS32) {
+ expected_rel_size = sizeof(Elf32_Rel);
+ expected_rela_size = sizeof(Elf32_Rela);
+ } else {
+ expected_rel_size = sizeof(Elf64_Rel);
+ expected_rela_size = sizeof(Elf64_Rela);
+ }
+
+ /* Iterate through dynamic entries until DT_NULL */
+ while (elf_dyn_d_tag(elf, dyn) != DT_NULL) {
+ unsigned long tag = elf_dyn_d_tag(elf, dyn);
+
+ switch (tag) {
+ case DT_REL:
+ /* REL table address - needs to be adjusted by load offset */
+ rel = (void *)(unsigned long)(base + elf_dyn_d_ptr(elf, dyn));
+ break;
+ case DT_RELSZ:
+ relsz = elf_dyn_d_val(elf, dyn);
+ break;
+ case DT_RELENT:
+ relent = elf_dyn_d_val(elf, dyn);
+ break;
+ case DT_RELA:
+ /* RELA table address - needs to be adjusted by load offset */
+ rela = (void *)(unsigned long)(base + elf_dyn_d_ptr(elf, dyn));
+ break;
+ case DT_RELASZ:
+ relasz = elf_dyn_d_val(elf, dyn);
+ break;
+ case DT_RELAENT:
+ relaent = elf_dyn_d_val(elf, dyn);
+ break;
+ default:
+ break;
+ }
+
+ dyn += elf_size_of_dyn(elf);
+ }
+
+ /* Check that we found exactly one relocation type */
+ if (rel && rela) {
+ pr_err("ELF has both REL and RELA relocations\n");
+ return -EINVAL;
+ }
+
+ if (rel && !is_rela) {
+ /* REL relocations */
+ if (!relsz || relent != expected_rel_size) {
+ pr_debug("No REL relocations or invalid relocation info\n");
+ return -EINVAL;
+ }
+ *rel_out = rel;
+ *relsz_out = relsz;
+
+ return 0;
+ } else if (rela && is_rela) {
+ /* RELA relocations */
+ if (!relasz || relaent != expected_rela_size) {
+ pr_debug("No RELA relocations or invalid relocation info\n");
+ return -EINVAL;
+ }
+ *rel_out = rela;
+ *relsz_out = relasz;
+
+ return 0;
+ }
+
+ pr_debug("No relocations found in dynamic section\n");
+
+ return -EINVAL;
+}
+
+int elf_parse_dynamic_section_rel(struct elf_image *elf, const void *dyn_seg,
+ void **rel_out, u64 *relsz_out)
+{
+ return elf_parse_dynamic_section(elf, dyn_seg, rel_out, relsz_out, false);
+}
+
+int elf_parse_dynamic_section_rela(struct elf_image *elf, const void *dyn_seg,
+ void **rel_out, u64 *relsz_out)
+{
+ return elf_parse_dynamic_section(elf, dyn_seg, rel_out, relsz_out, true);
+}
+
+static int elf_relocate(struct elf_image *elf)
+{
+ void *dyn_seg;
+
+ /*
+ * Relocations needed if:
+ * - ET_DYN (position-independent), OR
+ * - ET_EXEC with custom load address
+ */
+ if (elf->type == ET_EXEC && !elf->load_address)
+ return 0;
+
+ /* Find PT_DYNAMIC segment */
+ dyn_seg = elf_find_dynamic_segment(elf);
+ if (!dyn_seg) {
+ /*
+ * No PT_DYNAMIC segment found.
+ * For ET_DYN this is unusual but legal.
+ * For ET_EXEC with custom load address, this means no relocations
+ * can be applied - warn the user.
+ */
+ if (elf->type == ET_EXEC && elf->load_address) {
+ pr_warn("ET_EXEC loaded at custom address but no PT_DYNAMIC segment - "
+ "relocations cannot be applied\n");
+ } else {
+ pr_debug("No PT_DYNAMIC segment found\n");
+ }
+ return 0;
+ }
+
+ /* Call architecture-specific relocation handler */
+ return elf_apply_relocations(elf, dyn_seg);
+}
+
int elf_load(struct elf_image *elf)
{
- return load_elf_image_segments(elf);
+ int ret;
+
+ ret = load_elf_image_segments(elf);
+ if (ret)
+ return ret;
+
+ /* Apply relocations if needed */
+ ret = elf_relocate(elf);
+ if (ret) {
+ pr_err("Relocation failed: %d\n", ret);
+ return ret;
+ }
+
+ return 0;
}
void elf_close(struct elf_image *elf)
diff --git a/include/elf.h b/include/elf.h
index 994db642b0789942530f6ef7fdffdd2218afd7b6..7832a522550f64843f7891d1e033f7868565decb 100644
--- a/include/elf.h
+++ b/include/elf.h
@@ -394,11 +394,15 @@ extern Elf64_Dyn _DYNAMIC [];
struct elf_image {
struct list_head list;
u8 class;
+ u16 type; /* ET_EXEC or ET_DYN */
u64 entry;
void *low_addr;
void *high_addr;
void *hdr_buf;
const char *filename;
+ void *load_address; /* User-specified load address (NULL = use p_paddr) */
+ void *base_load_addr; /* Calculated base address for ET_DYN */
+ unsigned long reloc_offset; /* Offset between p_vaddr and actual load address */
};
static inline size_t elf_get_mem_size(struct elf_image *elf)
@@ -411,6 +415,31 @@ struct elf_image *elf_open(const char *filename);
void elf_close(struct elf_image *elf);
int elf_load(struct elf_image *elf);
+/*
+ * Set the load address for the ELF file.
+ * Must be called before elf_load().
+ * If not set, ET_EXEC uses p_paddr, ET_DYN uses lowest p_paddr.
+ */
+void elf_set_load_address(struct elf_image *elf, void *addr);
+
+/*
+ * Architecture-specific relocation handler.
+ * Returns 0 on success, -ENOSYS if architecture doesn't support relocations,
+ * other negative error codes on failure.
+ */
+int elf_apply_relocations(struct elf_image *elf, const void *dyn_seg);
+
+/*
+ * Parse the dynamic section and extract relocation information.
+ * This is a generic function that works for both 32-bit and 64-bit ELF files,
+ * and handles both REL and RELA relocation formats.
+ * Returns 0 on success, -EINVAL on error.
+ */
+int elf_parse_dynamic_section_rel(struct elf_image *elf, const void *dyn_seg,
+ void **rel_out, u64 *relsz_out);
+int elf_parse_dynamic_section_rela(struct elf_image *elf, const void *dyn_seg,
+ void **rel_out, u64 *relsz_out);
+
#define ELF_GET_FIELD(__s, __field, __type) \
static inline __type elf_##__s##_##__field(struct elf_image *elf, void *arg) { \
if (elf->class == ELFCLASS32) \
@@ -426,10 +455,12 @@ ELF_GET_FIELD(hdr, e_phentsize, u16)
ELF_GET_FIELD(hdr, e_type, u16)
ELF_GET_FIELD(hdr, e_machine, u16)
ELF_GET_FIELD(phdr, p_paddr, u64)
+ELF_GET_FIELD(phdr, p_vaddr, u64)
ELF_GET_FIELD(phdr, p_filesz, u64)
ELF_GET_FIELD(phdr, p_memsz, u64)
ELF_GET_FIELD(phdr, p_type, u32)
ELF_GET_FIELD(phdr, p_offset, u64)
+ELF_GET_FIELD(phdr, p_flags, u32)
static inline unsigned long elf_size_of_phdr(struct elf_image *elf)
{
@@ -439,4 +470,37 @@ static inline unsigned long elf_size_of_phdr(struct elf_image *elf)
return sizeof(Elf64_Phdr);
}
+/* Dynamic section accessors */
+static inline s64 elf_dyn_d_tag(struct elf_image *elf, const void *arg)
+{
+ if (elf->class == ELFCLASS32)
+ return (s64)((Elf32_Dyn *)arg)->d_tag;
+ else
+ return (s64)((Elf64_Dyn *)arg)->d_tag;
+}
+
+static inline u64 elf_dyn_d_val(struct elf_image *elf, const void *arg)
+{
+ if (elf->class == ELFCLASS32)
+ return (u64)((Elf32_Dyn *)arg)->d_un.d_val;
+ else
+ return (u64)((Elf64_Dyn *)arg)->d_un.d_val;
+}
+
+static inline u64 elf_dyn_d_ptr(struct elf_image *elf, const void *arg)
+{
+ if (elf->class == ELFCLASS32)
+ return (u64)((Elf32_Dyn *)arg)->d_un.d_ptr;
+ else
+ return (u64)((Elf64_Dyn *)arg)->d_un.d_ptr;
+}
+
+static inline unsigned long elf_size_of_dyn(struct elf_image *elf)
+{
+ if (elf->class == ELFCLASS32)
+ return sizeof(Elf32_Dyn);
+ else
+ return sizeof(Elf64_Dyn);
+}
+
#endif /* _LINUX_ELF_H */
diff --git a/lib/Makefile b/lib/Makefile
index 6d259dd94e163336d7fdab38c7b74b301aabc5c5..da2c8ffe1dbf512a901295c89494e0837f31a0d9 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -24,6 +24,7 @@ obj-y += readkey.o
obj-y += kfifo.o
obj-y += libbb.o
obj-y += libgen.o
+obj-y += elf_reloc.o
obj-$(CONFIG_FIP) += fip.o tbbr_config.o
obj-$(CONFIG_JSMN) += jsmn.o
obj-y += stringlist.o
diff --git a/lib/elf_reloc.c b/lib/elf_reloc.c
new file mode 100644
index 0000000000000000000000000000000000000000..62c6a96c3c1d7b04b618e9a406c73c57da970f6b
--- /dev/null
+++ b/lib/elf_reloc.c
@@ -0,0 +1,15 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <common.h>
+#include <elf.h>
+#include <errno.h>
+
+/*
+ * Weak default implementation for architectures that don't support
+ * ELF relocations yet. Can be overridden by arch-specific implementation.
+ */
+int __weak elf_apply_relocations(struct elf_image *elf, const void *dyn_seg)
+{
+ pr_warn("ELF relocations not supported for this architecture\n");
+ return -ENOSYS;
+}
--
2.47.3
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v2 04/21] ARM: implement elf_apply_relocations() for ELF relocation support
2026-01-06 12:53 [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (2 preceding siblings ...)
2026-01-06 12:53 ` [PATCH v2 03/21] elf: add dynamic relocation support Sascha Hauer
@ 2026-01-06 12:53 ` Sascha Hauer
2026-01-06 13:07 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 05/21] riscv: define generic relocate_image Sascha Hauer
` (16 subsequent siblings)
20 siblings, 1 reply; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 12:53 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
Implement architecture-specific ELF relocation handlers for ARM32 and ARM64.
The implementation reuses the existing relocate_image().
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
---
arch/arm/include/asm/elf.h | 7 +++++++
arch/arm/lib32/reloc.c | 23 +++++++++++++++++++++++
arch/arm/lib64/reloc.c | 25 +++++++++++++++++++++++--
3 files changed, 53 insertions(+), 2 deletions(-)
diff --git a/arch/arm/include/asm/elf.h b/arch/arm/include/asm/elf.h
index 0b4704a4a5615e648f67705ad9daf8dea9f41bab..630c85f2b421cba7e37ea7147c047be948ecdc7e 100644
--- a/arch/arm/include/asm/elf.h
+++ b/arch/arm/include/asm/elf.h
@@ -36,6 +36,13 @@ typedef struct user_fp elf_fpregset_t;
#define R_ARM_THM_CALL 10
#define R_ARM_THM_JUMP24 30
+/* Additional relocation types for dynamic linking */
+#define R_ARM_RELATIVE 23
+
+#define R_AARCH64_NONE 0
+#define R_AARCH64_ABS64 257
+#define R_AARCH64_RELATIVE 1027
+
/*
* These are used to set parameters in the core dumps.
*/
diff --git a/arch/arm/lib32/reloc.c b/arch/arm/lib32/reloc.c
index 378ba95b2ffd455436e704a803f5448fc2c2b18c..19a10ce7e7e637b8dbf7f71db3d97dfc0b215139 100644
--- a/arch/arm/lib32/reloc.c
+++ b/arch/arm/lib32/reloc.c
@@ -6,6 +6,7 @@
#include <barebox.h>
#include <elf.h>
#include <debug_ll.h>
+#include <linux/printk.h>
#include <asm/reloc.h>
#define R_ARM_RELATIVE 23
@@ -54,3 +55,25 @@ void __prereloc relocate_image(unsigned long offset,
if (dynend)
__memset(dynsym, 0, (unsigned long)dynend - (unsigned long)dynsym);
}
+
+/*
+ * Apply ARM32 ELF relocations
+ */
+int elf_apply_relocations(struct elf_image *elf, const void *dyn_seg)
+{
+ Elf32_Rel *rel;
+ void *rel_ptr;
+ u64 relsz;
+ phys_addr_t base = (phys_addr_t)elf->reloc_offset;
+ int ret;
+
+ ret = elf_parse_dynamic_section_rel(elf, dyn_seg, &rel_ptr, &relsz);
+ if (ret)
+ return ret;
+
+ rel = (Elf32_Rel *)rel_ptr;
+
+ relocate_image(base, rel, (void *)rel + relsz, NULL, NULL);
+
+ return 0;
+}
diff --git a/arch/arm/lib64/reloc.c b/arch/arm/lib64/reloc.c
index 2288f9e2e336887c5edfbf6b080f487394754113..50bd0b88fae0a59a6a86a84c1df0743ac158e06c 100644
--- a/arch/arm/lib64/reloc.c
+++ b/arch/arm/lib64/reloc.c
@@ -7,8 +7,7 @@
#include <elf.h>
#include <debug_ll.h>
#include <asm/reloc.h>
-
-#define R_AARCH64_RELATIVE 1027
+#include <linux/printk.h>
/*
* relocate binary to the currently running address
@@ -45,3 +44,25 @@ void __prereloc relocate_image(unsigned long offset,
dstart += sizeof(*rel);
}
}
+
+/*
+ * Apply ARM64 ELF relocations
+ */
+int elf_apply_relocations(struct elf_image *elf, const void *dyn_seg)
+{
+ Elf64_Rela *rela;
+ void *rel_ptr;
+ u64 relasz;
+ phys_addr_t base = (phys_addr_t)elf->reloc_offset;
+ int ret;
+
+ ret = elf_parse_dynamic_section_rela(elf, dyn_seg, &rel_ptr, &relasz);
+ if (ret)
+ return ret;
+
+ rela = (Elf64_Rela *)rel_ptr;
+
+ relocate_image(base, rela, (void *)rela + relasz, NULL, NULL);
+
+ return 0;
+}
--
2.47.3
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v2 05/21] riscv: define generic relocate_image
2026-01-06 12:53 [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (3 preceding siblings ...)
2026-01-06 12:53 ` [PATCH v2 04/21] ARM: implement elf_apply_relocations() for ELF " Sascha Hauer
@ 2026-01-06 12:53 ` Sascha Hauer
2026-01-06 13:10 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 06/21] riscv: implement elf_apply_relocations() for ELF relocation support Sascha Hauer
` (15 subsequent siblings)
20 siblings, 1 reply; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 12:53 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
For use by the ELF loader in PBL to relocate barebox proper, export a
new relocate_image capable of relocating barebox and implement
relocate_to_current_adr() in terms of it.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/riscv/lib/reloc.c | 27 +++++++++++++--------------
1 file changed, 13 insertions(+), 14 deletions(-)
diff --git a/arch/riscv/lib/reloc.c b/arch/riscv/lib/reloc.c
index 0c1ec8b4880227d16ab5e7b580244f1db2e967ec..18b13a7013cff4032c12b999470f265dbda13c51 100644
--- a/arch/riscv/lib/reloc.c
+++ b/arch/riscv/lib/reloc.c
@@ -30,26 +30,15 @@ void sync_caches_for_execution(void)
local_flush_icache_all();
}
-void relocate_to_current_adr(void)
+static void relocate_image(unsigned long offset,
+ void *dstart, void *dend,
+ long *dynsym, long *dynend)
{
- unsigned long offset;
- unsigned long *dynsym;
- void *dstart, *dend;
Elf_Rela *rela;
- /* Get offset between linked address and runtime address */
- offset = get_runtime_offset();
if (!offset)
return;
- /*
- * We have yet to relocate, so using runtime_address
- * to compute the relocated address
- */
- dstart = runtime_address(__rel_dyn_start);
- dend = runtime_address(__rel_dyn_end);
- dynsym = runtime_address(__dynsym_start);
-
for (rela = dstart; (void *)rela < dend; rela++) {
unsigned long *fixup;
@@ -74,5 +63,15 @@ void relocate_to_current_adr(void)
}
}
+}
+
+void relocate_to_current_adr(void)
+{
+ relocate_image(get_runtime_offset(),
+ runtime_address(__rel_dyn_start),
+ runtime_address(__rel_dyn_end),
+ runtime_address(__dynsym_start),
+ NULL);
+
sync_caches_for_execution();
}
--
2.47.3
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v2 06/21] riscv: implement elf_apply_relocations() for ELF relocation support
2026-01-06 12:53 [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (4 preceding siblings ...)
2026-01-06 12:53 ` [PATCH v2 05/21] riscv: define generic relocate_image Sascha Hauer
@ 2026-01-06 12:53 ` Sascha Hauer
2026-01-06 13:11 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 07/21] elf: implement elf_load_inplace() Sascha Hauer
` (14 subsequent siblings)
20 siblings, 1 reply; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 12:53 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
Add architecture-specific ELF relocation support for RISC-V,
enabling dynamic relocation of position-independent ELF binaries.
The implemetation reuses the existing relocate_image().
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/riscv/lib/reloc.c | 55 ++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 55 insertions(+)
diff --git a/arch/riscv/lib/reloc.c b/arch/riscv/lib/reloc.c
index 18b13a7013cff4032c12b999470f265dbda13c51..9a5004cf5b762b39bb98f7f0ab112c204185b33a 100644
--- a/arch/riscv/lib/reloc.c
+++ b/arch/riscv/lib/reloc.c
@@ -75,3 +75,58 @@ void relocate_to_current_adr(void)
sync_caches_for_execution();
}
+
+#if __riscv_xlen == 64
+
+/*
+ * Apply RISC-V 64-bit ELF relocations
+ */
+int elf_apply_relocations(struct elf_image *elf, const void *dyn_seg)
+{
+ Elf64_Rela *rela;
+ void *rel_ptr;
+ u64 relasz;
+ phys_addr_t base = (phys_addr_t)elf->reloc_offset;
+ int ret;
+
+ ret = elf_parse_dynamic_section_rela(elf, dyn_seg, &rel_ptr, &relasz);
+ if (ret)
+ return ret;
+
+ rela = (Elf64_Rela *)rel_ptr;
+
+ relocate_image(base, rela, (void *)rela + relasz, NULL, NULL);
+
+ return 0;
+}
+
+#else /* 32-bit RISC-V */
+
+/*
+ * Apply RISC-V 32-bit ELF relocations
+ */
+int elf_apply_relocations(struct elf_image *elf, const void *dyn_seg)
+{
+ Elf32_Rela *rela;
+ void *rel_ptr;
+ u64 relasz;
+ phys_addr_t base = (phys_addr_t)elf->reloc_offset;
+ int ret;
+
+ if (elf->class != ELFCLASS32) {
+ pr_err("Wrong ELF class for RISC-V 32 relocation\n");
+ return -EINVAL;
+ }
+
+ ret = elf_parse_dynamic_section_rela(elf, dyn_seg, &rel_ptr, &relasz);
+ if (ret)
+ return ret;
+
+ rela = (Elf32_Rela *)rel_ptr;
+
+ relocate_image(base, rela, (void *)rela + relasz, NULL, NULL);
+
+ return 0;
+}
+
+#endif
--
2.47.3
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v2 07/21] elf: implement elf_load_inplace()
2026-01-06 12:53 [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (5 preceding siblings ...)
2026-01-06 12:53 ` [PATCH v2 06/21] riscv: implement elf_apply_relocations() for ELF relocation support Sascha Hauer
@ 2026-01-06 12:53 ` Sascha Hauer
2026-01-06 13:53 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 08/21] elf: create elf_open_binary_into() Sascha Hauer
` (13 subsequent siblings)
20 siblings, 1 reply; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 12:53 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
Implement elf_load_inplace() to apply dynamic relocations to an ELF binary
that is already loaded in memory. Unlike elf_load(), this function does not
allocate memory or copy segments - it only modifies the existing image in
place.
This is useful for self-relocating loaders or when the ELF has been loaded
by external means (e.g., firmware or another bootloader).
For ET_DYN (position-independent) binaries, the relocation offset is
calculated relative to the first executable PT_LOAD segment (.text section),
taking into account the difference between the segment's virtual address
and its file offset.
The entry point is also adjusted to point to the relocated image.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
---
common/elf.c | 152 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
include/elf.h | 8 ++++
2 files changed, 160 insertions(+)
diff --git a/common/elf.c b/common/elf.c
index 77d4ac7c8545596b6f44751988e5e7f97b98df15..b3cc9b59644aa6e6e029fa44988dbdc0787c42f6 100644
--- a/common/elf.c
+++ b/common/elf.c
@@ -627,3 +627,155 @@ void elf_close(struct elf_image *elf)
free(elf);
}
+
+static const void *elf_find_dynamic_inplace(struct elf_image *elf)
+{
+ void *buf = elf->hdr_buf;
+ void *phdr = buf + elf_hdr_e_phoff(elf, buf);
+ int i;
+
+ for (i = 0; i < elf_hdr_e_phnum(elf, buf); i++) {
+ if (elf_phdr_p_type(elf, phdr) == PT_DYNAMIC) {
+ u64 offset = elf_phdr_p_offset(elf, phdr);
+ /* For in-place binary, PT_DYNAMIC is at hdr_buf + offset */
+ return elf->hdr_buf + offset;
+ }
+ phdr += elf_size_of_phdr(elf);
+ }
+
+ return NULL; /* No PT_DYNAMIC segment */
+}
+
+/**
+ * elf_load_inplace() - Apply dynamic relocations to an ELF binary in place
+ * @elf: ELF image previously opened with elf_open_binary()
+ *
+ * This function applies dynamic relocations to an ELF binary that is already
+ * loaded at its target address in memory. Unlike elf_load(), this does not
+ * allocate memory or copy segments - it only modifies the existing image.
+ *
+ * This is useful for self-relocating loaders or when the ELF has been loaded
+ * by external means (e.g., loaded by firmware or another bootloader).
+ *
+ * The ELF image must have been previously opened with elf_open_binary().
+ *
+ * For ET_DYN (position-independent) binaries, the relocation offset is
+ * calculated relative to the first executable PT_LOAD segment (.text section).
+ *
+ * For ET_EXEC binaries, no relocation is applied as they are expected to
+ * be at their link-time addresses.
+ *
+ * Returns: 0 on success, negative error code on failure
+ */
+int elf_load_inplace(struct elf_image *elf)
+{
+ const void *dyn_seg;
+ void *buf, *phdr;
+ void *elf_buf;
+ int i, ret;
+
+ buf = elf->hdr_buf;
+ elf_buf = elf->hdr_buf;
+
+ /*
+ * First pass: Clear BSS segments (p_memsz > p_filesz).
+ * This must be done before relocations as uninitialized data
+ * must be zeroed per C standard.
+ */
+ phdr = buf + elf_hdr_e_phoff(elf, buf);
+ for (i = 0; i < elf_hdr_e_phnum(elf, buf); i++) {
+ if (elf_phdr_p_type(elf, phdr) == PT_LOAD) {
+ u64 p_offset = elf_phdr_p_offset(elf, phdr);
+ u64 p_filesz = elf_phdr_p_filesz(elf, phdr);
+ u64 p_memsz = elf_phdr_p_memsz(elf, phdr);
+
+ /* Clear BSS (uninitialized data) */
+ if (p_filesz < p_memsz) {
+ void *bss_start = elf_buf + p_offset + p_filesz;
+ size_t bss_size = p_memsz - p_filesz;
+ memset(bss_start, 0x00, bss_size);
+ }
+ }
+ phdr += elf_size_of_phdr(elf);
+ }
+
+ /*
+ * Calculate relocation offset for the in-place binary.
+ * For ET_DYN, we need to find the first executable PT_LOAD segment
+ * (.text section) and use it as the relocation base.
+ */
+ if (elf->type == ET_DYN) {
+ u64 text_vaddr = 0;
+ u64 text_offset = 0;
+ bool found_text = false;
+
+ /* Find first executable PT_LOAD segment (.text) */
+ phdr = buf + elf_hdr_e_phoff(elf, buf);
+ for (i = 0; i < elf_hdr_e_phnum(elf, buf); i++) {
+ if (elf_phdr_p_type(elf, phdr) == PT_LOAD) {
+ u32 flags = elf_phdr_p_flags(elf, phdr);
+ /* Check if segment is executable (PF_X = 0x1) */
+ if (flags & PF_X) {
+ text_vaddr = elf_phdr_p_vaddr(elf, phdr);
+ text_offset = elf_phdr_p_offset(elf, phdr);
+ found_text = true;
+ break;
+ }
+ }
+ phdr += elf_size_of_phdr(elf);
+ }
+
+ if (!found_text) {
+ pr_err("No executable PT_LOAD segment found\n");
+ ret = -EINVAL;
+ goto out;
+ }
+
+ /*
+ * Calculate relocation offset relative to .text section:
+ * - .text is at file offset text_offset, so in memory at: elf_buf + text_offset
+ * - .text has virtual address text_vaddr
+ * - reloc_offset = (actual .text address) - (virtual .text address)
+ */
+ elf->reloc_offset = ((unsigned long)elf_buf + text_offset) - text_vaddr;
+
+ pr_debug("In-place ELF relocation: text_vaddr=0x%llx, text_offset=0x%llx, "
+ "load_addr=%p, offset=0x%08lx\n",
+ text_vaddr, text_offset, elf_buf, elf->reloc_offset);
+
+ /* Adjust entry point to point to relocated image */
+ elf->entry += elf->reloc_offset;
+ } else {
+ /*
+ * ET_EXEC binaries are at their link-time addresses,
+ * no relocation needed
+ */
+ elf->reloc_offset = 0;
+ }
+
+ /* Find PT_DYNAMIC segment */
+ dyn_seg = elf_find_dynamic_inplace(elf);
+ if (!dyn_seg) {
+ /*
+ * No PT_DYNAMIC segment found.
+ * This is fine for statically-linked binaries or
+ * binaries without relocations.
+ */
+ pr_debug("No PT_DYNAMIC segment found\n");
+ ret = 0;
+ goto out;
+ }
+
+ /* Apply architecture-specific relocations */
+ ret = elf_apply_relocations(elf, dyn_seg);
+ if (ret) {
+ pr_err("In-place relocation failed: %d\n", ret);
+ goto out;
+ }
+
+ pr_debug("In-place ELF relocation completed successfully\n");
+ return 0;
+
+out:
+ return ret;
+}
diff --git a/include/elf.h b/include/elf.h
index 7832a522550f64843f7891d1e033f7868565decb..84b98553cff69257f8c2fbfb2e7504a4400e0f53 100644
--- a/include/elf.h
+++ b/include/elf.h
@@ -422,6 +422,14 @@ int elf_load(struct elf_image *elf);
*/
void elf_set_load_address(struct elf_image *elf, void *addr);
+/*
+ * Apply dynamic relocations to an ELF binary already loaded in memory.
+ * This modifies the ELF image in place without allocating new memory.
+ * Useful for self-relocating loaders or externally loaded binaries.
+ * The elf parameter must have been previously opened with elf_open_binary().
+ */
+int elf_load_inplace(struct elf_image *elf);
+
/*
* Architecture-specific relocation handler.
* Returns 0 on success, -ENOSYS if architecture doesn't support relocations,
--
2.47.3
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v2 08/21] elf: create elf_open_binary_into()
2026-01-06 12:53 [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (6 preceding siblings ...)
2026-01-06 12:53 ` [PATCH v2 07/21] elf: implement elf_load_inplace() Sascha Hauer
@ 2026-01-06 12:53 ` Sascha Hauer
2026-01-06 13:55 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 09/21] Makefile: add barebox.elf build target Sascha Hauer
` (12 subsequent siblings)
20 siblings, 1 reply; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 12:53 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
elf_open_binary() returns a dynamically allocated struct elf_image *. We
do not have malloc in the PBL, so for better PBL support create
elf_open_binary_into() which takes a struct elf_image * as argument.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
---
common/elf.c | 26 +++++++++++++++++++-------
include/elf.h | 1 +
2 files changed, 20 insertions(+), 7 deletions(-)
diff --git a/common/elf.c b/common/elf.c
index b3cc9b59644aa6e6e029fa44988dbdc0787c42f6..d51c99192f327abc0eb4ca68109bc88c6a451d94 100644
--- a/common/elf.c
+++ b/common/elf.c
@@ -318,6 +318,23 @@ static void elf_init_struct(struct elf_image *elf)
elf->filename = NULL;
}
+int elf_open_binary_into(struct elf_image *elf, void *buf)
+{
+ int ret;
+
+ memset(elf, 0, sizeof(*elf));
+ elf_init_struct(elf);
+
+ elf->hdr_buf = buf;
+ ret = elf_check_image(elf, buf);
+ if (ret)
+ return ret;
+
+ elf->entry = elf_hdr_e_entry(elf, elf->hdr_buf);
+
+ return 0;
+}
+
struct elf_image *elf_open_binary(void *buf)
{
int ret;
@@ -327,17 +344,12 @@ struct elf_image *elf_open_binary(void *buf)
if (!elf)
return ERR_PTR(-ENOMEM);
- elf_init_struct(elf);
-
- elf->hdr_buf = buf;
- ret = elf_check_image(elf, buf);
+ ret = elf_open_binary_into(elf, buf);
if (ret) {
free(elf);
- return ERR_PTR(-EINVAL);
+ return ERR_PTR(ret);
}
- elf->entry = elf_hdr_e_entry(elf, elf->hdr_buf);
-
return elf;
}
diff --git a/include/elf.h b/include/elf.h
index 84b98553cff69257f8c2fbfb2e7504a4400e0f53..64574cb4836cd67f54538ad1621d7fc5fbf58f92 100644
--- a/include/elf.h
+++ b/include/elf.h
@@ -410,6 +410,7 @@ static inline size_t elf_get_mem_size(struct elf_image *elf)
return elf->high_addr - elf->low_addr;
}
+int elf_open_binary_into(struct elf_image *elf, void *buf);
struct elf_image *elf_open_binary(void *buf);
struct elf_image *elf_open(const char *filename);
void elf_close(struct elf_image *elf);
--
2.47.3
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v2 09/21] Makefile: add barebox.elf build target
2026-01-06 12:53 [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (7 preceding siblings ...)
2026-01-06 12:53 ` [PATCH v2 08/21] elf: create elf_open_binary_into() Sascha Hauer
@ 2026-01-06 12:53 ` Sascha Hauer
2026-01-06 13:13 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 10/21] PBL: allow to link ELF image into PBL Sascha Hauer
` (11 subsequent siblings)
20 siblings, 1 reply; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 12:53 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
Add a build target to create barebox.elf, which provides an ELF format
version of barebox that can be used for debugging or alternative boot
scenarios.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
---
Makefile | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/Makefile b/Makefile
index 3b31cecc22c431a063b8d2d3c387da487b698e74..aa12b385c779512fe20d33792f4968fed2eec29a 100644
--- a/Makefile
+++ b/Makefile
@@ -853,6 +853,9 @@ all: barebox-flash-images
endif
all: $(symlink-y)
+ifeq ($(CONFIG_PBL_IMAGE)-$(CONFIG_PBL_IMAGE_NO_PIGGY),y-)
+all: barebox.elf
+endif
.SECONDEXPANSION:
$(symlink-y): $$(or $$(SYMLINK_DEP_$$(@F)),$$(SYMLINK_TARGET_$$(@F))) FORCE
@@ -1096,6 +1099,14 @@ barebox.fit: images/barebox-$(CONFIG_ARCH_LINUX_NAME).fit
barebox.srec: barebox
$(OBJCOPY) -O srec $< $@
+OBJCOPYFLAGS_barebox.elf = --strip-debug --strip-unneeded \
+ --remove-section=.comment \
+ --remove-section=.note* \
+ --remove-section=.gnu.hash
+
+barebox.elf: barebox FORCE
+ $(call if_changed,objcopy)
+
quiet_cmd_barebox_proper__ = CC $@
cmd_barebox_proper__ = $(CC) -r -o $@ -Wl,--whole-archive $(BAREBOX_OBJS)
@@ -1378,7 +1389,7 @@ CLEAN_FILES += barebox System.map include/generated/barebox_default_env.h \
.tmp_version .tmp_barebox* barebox.bin barebox.map \
.tmp_kallsyms* compile_commands.json \
.tmp_barebox.o barebox.o barebox-flash-image \
- barebox.srec barebox.efi
+ barebox.srec barebox.efi barebox.elf
CLEAN_FILES += scripts/bareboxenv-target scripts/kernel-install-target \
scripts/bareboxcrc32-target scripts/bareboximd-target \
--
2.47.3
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v2 10/21] PBL: allow to link ELF image into PBL
2026-01-06 12:53 [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (8 preceding siblings ...)
2026-01-06 12:53 ` [PATCH v2 09/21] Makefile: add barebox.elf build target Sascha Hauer
@ 2026-01-06 12:53 ` Sascha Hauer
2026-01-06 13:18 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 11/21] mmu: add MAP_CACHED_RO mapping type Sascha Hauer
` (10 subsequent siblings)
20 siblings, 1 reply; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 12:53 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
Some architectures want to link the barebox proper ELF image into the
PBL. Allow that and provide a Kconfig option to select the ELF image.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
Makefile | 16 ++++++++++------
images/Makefile | 2 +-
pbl/Kconfig | 9 +++++++++
3 files changed, 20 insertions(+), 7 deletions(-)
diff --git a/Makefile b/Makefile
index aa12b385c779512fe20d33792f4968fed2eec29a..d1e6e3e418e4875b0fdee2e156acecdbbd5e3c9a 100644
--- a/Makefile
+++ b/Makefile
@@ -831,7 +831,11 @@ export KBUILD_BINARY ?= barebox.bin
# Also any assignments in arch/$(SRCARCH)/Makefile take precedence over
# the default value.
+ifeq ($(CONFIG_PBL_IMAGE_ELF),y)
+export BAREBOX_PROPER ?= vmbarebox
+else
export BAREBOX_PROPER ?= barebox.bin
+endif
barebox-flash-images: $(KBUILD_IMAGE)
@echo $^ > $@
@@ -854,7 +858,7 @@ endif
all: $(symlink-y)
ifeq ($(CONFIG_PBL_IMAGE)-$(CONFIG_PBL_IMAGE_NO_PIGGY),y-)
-all: barebox.elf
+all: vmbarebox
endif
.SECONDEXPANSION:
@@ -1099,12 +1103,12 @@ barebox.fit: images/barebox-$(CONFIG_ARCH_LINUX_NAME).fit
barebox.srec: barebox
$(OBJCOPY) -O srec $< $@
-OBJCOPYFLAGS_barebox.elf = --strip-debug --strip-unneeded \
- --remove-section=.comment \
- --remove-section=.note* \
- --remove-section=.gnu.hash
+OBJCOPYFLAGS_vmbarebox = --strip-debug --strip-unneeded \
+ --remove-section=.comment \
+ --remove-section=.note* \
+ --remove-section=.gnu.hash
-barebox.elf: barebox FORCE
+vmbarebox: barebox FORCE
$(call if_changed,objcopy)
quiet_cmd_barebox_proper__ = CC $@
diff --git a/images/Makefile b/images/Makefile
index ebbf57b463558724c97a6b4ca65c33f80dad253b..dd1be18eeef1a4ef42d3b9dc4b9c4f5d22c10d1a 100644
--- a/images/Makefile
+++ b/images/Makefile
@@ -231,7 +231,7 @@ ifneq ($(pblx-y)$(pblx-),)
$(error pblx- has been removed. Please use pblb- instead.)
endif
-targets += $(image-y) pbl.lds barebox.x barebox.z piggy.o sha_sum.o barebox.sha.bin barebox.sum
+targets += $(image-y) pbl.lds barebox.x barebox.z barebox.elf.z piggy.o sha_sum.o barebox.sha.bin barebox.sum
targets += $(patsubst %,%.pblb,$(pblb-y))
targets += $(patsubst %,%.pbl,$(pblb-y))
targets += $(patsubst %,%.s,$(pblb-y))
diff --git a/pbl/Kconfig b/pbl/Kconfig
index cab9325d16e8625bcca10125b3281062abffedbc..63f29cd6135926c48b355b80fc7a123b90098c20 100644
--- a/pbl/Kconfig
+++ b/pbl/Kconfig
@@ -21,6 +21,15 @@ config PBL_IMAGE_NO_PIGGY
want to use the piggy mechanism to load barebox proper.
It's so far only intended for sandbox.
+config PBL_IMAGE_ELF
+ bool
+ depends on PBL_IMAGE
+ select ELF
+ help
+ If yes, link ELF image into the PBL, otherwise a raw binary
+ is linked into the PBL. This must match the loader code in the
+ PBL.
+
config PBL_MULTI_IMAGES
bool
select PBL_IMAGE
--
2.47.3
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v2 11/21] mmu: add MAP_CACHED_RO mapping type
2026-01-06 12:53 [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (9 preceding siblings ...)
2026-01-06 12:53 ` [PATCH v2 10/21] PBL: allow to link ELF image into PBL Sascha Hauer
@ 2026-01-06 12:53 ` Sascha Hauer
2026-01-06 13:14 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 12/21] mmu: introduce pbl_remap_range() Sascha Hauer
` (9 subsequent siblings)
20 siblings, 1 reply; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 12:53 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
ARM32 and ARM64 have ARCH_MAP_CACHED_RO. We'll move parts of the MMU
initialization to generic code later, so add a new mapping type to
include/mmu.h.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu-common.c | 4 ++--
arch/arm/cpu/mmu-common.h | 3 +--
arch/arm/cpu/mmu_32.c | 4 ++--
arch/arm/cpu/mmu_64.c | 2 +-
include/mmu.h | 3 ++-
5 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/arch/arm/cpu/mmu-common.c b/arch/arm/cpu/mmu-common.c
index a1431c0ff46112552d2919269cc8a7a66d7a20c1..67317f127cadb138cc2e85bb18c92ab47bc1206f 100644
--- a/arch/arm/cpu/mmu-common.c
+++ b/arch/arm/cpu/mmu-common.c
@@ -22,7 +22,7 @@ const char *map_type_tostr(maptype_t map_type)
switch (map_type) {
case ARCH_MAP_CACHED_RWX: return "RWX";
- case ARCH_MAP_CACHED_RO: return "RO";
+ case MAP_CACHED_RO: return "RO";
case MAP_CACHED: return "CACHED";
case MAP_UNCACHED: return "UNCACHED";
case MAP_CODE: return "CODE";
@@ -158,7 +158,7 @@ static void mmu_remap_memory_banks(void)
}
remap_range((void *)code_start, code_size, MAP_CODE);
- remap_range((void *)rodata_start, rodata_size, ARCH_MAP_CACHED_RO);
+ remap_range((void *)rodata_start, rodata_size, MAP_CACHED_RO);
setup_trap_pages();
}
diff --git a/arch/arm/cpu/mmu-common.h b/arch/arm/cpu/mmu-common.h
index a111e15a21b479b5ffa2ea8973e2ad189e531925..b42c421ffde8ebba84b17c6311b735f7759dc69b 100644
--- a/arch/arm/cpu/mmu-common.h
+++ b/arch/arm/cpu/mmu-common.h
@@ -12,7 +12,6 @@
#include <linux/bits.h>
#define ARCH_MAP_CACHED_RWX MAP_ARCH(2)
-#define ARCH_MAP_CACHED_RO MAP_ARCH(3)
#define ARCH_MAP_FLAG_PAGEWISE BIT(31)
@@ -32,7 +31,7 @@ static inline maptype_t arm_mmu_maybe_skip_permissions(maptype_t map_type)
switch (map_type & MAP_TYPE_MASK) {
case MAP_CODE:
case MAP_CACHED:
- case ARCH_MAP_CACHED_RO:
+ case MAP_CACHED_RO:
return ARCH_MAP_CACHED_RWX;
default:
return map_type;
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 912d14e8cf82afcfd1800e4e11503899e10ccbbc..71ead41c3d274548c9427c1ce9833de309114c4d 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -304,7 +304,7 @@ static uint32_t get_pte_flags(maptype_t map_type)
switch (map_type & MAP_TYPE_MASK) {
case ARCH_MAP_CACHED_RWX:
return PTE_FLAGS_CACHED_V7_RWX;
- case ARCH_MAP_CACHED_RO:
+ case MAP_CACHED_RO:
return PTE_FLAGS_CACHED_RO_V7;
case MAP_CACHED:
return PTE_FLAGS_CACHED_V7;
@@ -320,7 +320,7 @@ static uint32_t get_pte_flags(maptype_t map_type)
}
} else {
switch (map_type & MAP_TYPE_MASK) {
- case ARCH_MAP_CACHED_RO:
+ case MAP_CACHED_RO:
case MAP_CODE:
return PTE_FLAGS_CACHED_RO_V4;
case ARCH_MAP_CACHED_RWX:
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index 56c6a21f2b2a8d8300fd6dbfaaf36a54d264a0f3..ddf1373ec0a801baad043146187d7f4c3eac6a2a 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -159,7 +159,7 @@ static unsigned long get_pte_attrs(maptype_t map_type)
return attrs_xn() | MEM_ALLOC_WRITECOMBINE;
case MAP_CODE:
return CACHED_MEM | PTE_BLOCK_RO;
- case ARCH_MAP_CACHED_RO:
+ case MAP_CACHED_RO:
return attrs_xn() | CACHED_MEM | PTE_BLOCK_RO;
case ARCH_MAP_CACHED_RWX:
return CACHED_MEM;
diff --git a/include/mmu.h b/include/mmu.h
index f79619808829532ed05f018b982e4bc76bca72a4..9f582f25e1de14d47cfe2eff64f9cce81c4e492d 100644
--- a/include/mmu.h
+++ b/include/mmu.h
@@ -9,9 +9,10 @@
#define MAP_CACHED 1
#define MAP_FAULT 2
#define MAP_CODE 3
+#define MAP_CACHED_RO 4
#ifdef CONFIG_ARCH_HAS_DMA_WRITE_COMBINE
-#define MAP_WRITECOMBINE 4
+#define MAP_WRITECOMBINE 5
#else
#define MAP_WRITECOMBINE MAP_UNCACHED
#endif
--
2.47.3
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v2 12/21] mmu: introduce pbl_remap_range()
2026-01-06 12:53 [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (10 preceding siblings ...)
2026-01-06 12:53 ` [PATCH v2 11/21] mmu: add MAP_CACHED_RO mapping type Sascha Hauer
@ 2026-01-06 12:53 ` Sascha Hauer
2026-01-06 13:15 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 13/21] ARM: use relative jumps in exception table Sascha Hauer
` (8 subsequent siblings)
20 siblings, 1 reply; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 12:53 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
Add PBL-specific memory remapping function that always uses page-wise
mapping (ARCH_MAP_FLAG_PAGEWISE) for fine-grained permissions on
adjacent ELF segments with different protection requirements.
Wraps arch-specific __arch_remap_range() for ARMv7 (4KB pages) and
ARMv8 (page tables with BBM). Needed for ELF segment permission setup.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu_32.c | 7 +++++++
arch/arm/cpu/mmu_64.c | 8 ++++++++
include/mmu.h | 3 +++
3 files changed, 18 insertions(+)
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 71ead41c3d274548c9427c1ce9833de309114c4d..54371e3e5973c9fe98cadef63499105134e09539 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -435,6 +435,13 @@ static void early_remap_range(u32 addr, size_t size, maptype_t map_type)
__arch_remap_range((void *)addr, addr, size, map_type);
}
+void pbl_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size,
+ maptype_t map_type)
+{
+ __arch_remap_range(virt_addr, phys_addr, size,
+ map_type | ARCH_MAP_FLAG_PAGEWISE);
+}
+
static bool pte_is_cacheable(uint32_t pte, int level)
{
return (level == 2 && (pte & PTE_CACHEABLE)) ||
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index ddf1373ec0a801baad043146187d7f4c3eac6a2a..d4d6d1a849ba9b8666f4dcb4bd80247a70bdce1a 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -282,6 +282,14 @@ static void early_remap_range(uint64_t addr, size_t size, maptype_t map_type)
__arch_remap_range(addr, addr, size, map_type, false);
}
+void pbl_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size,
+ maptype_t map_type)
+{
+ __arch_remap_range((uint64_t)virt_addr, phys_addr,
+ (uint64_t)size, map_type | ARCH_MAP_FLAG_PAGEWISE,
+ true);
+}
+
int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size, maptype_t map_type)
{
map_type = arm_mmu_maybe_skip_permissions(map_type);
diff --git a/include/mmu.h b/include/mmu.h
index 9f582f25e1de14d47cfe2eff64f9cce81c4e492d..32d9a7aca3b9a61d542bf3e21e27f1ac51f43ee2 100644
--- a/include/mmu.h
+++ b/include/mmu.h
@@ -65,6 +65,9 @@ static inline bool arch_can_remap(void)
}
#endif
+void pbl_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size,
+ maptype_t map_type);
+
static inline int remap_range(void *start, size_t size, maptype_t map_type)
{
return arch_remap_range(start, virt_to_phys(start), size, map_type);
--
2.47.3
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v2 13/21] ARM: use relative jumps in exception table
2026-01-06 12:53 [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (11 preceding siblings ...)
2026-01-06 12:53 ` [PATCH v2 12/21] mmu: introduce pbl_remap_range() Sascha Hauer
@ 2026-01-06 12:53 ` Sascha Hauer
2026-01-06 13:57 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 14/21] ARM: exceptions: make in-binary exception table const Sascha Hauer
` (7 subsequent siblings)
20 siblings, 1 reply; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 12:53 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
Create position-independent exception vectors using relative branches
instead of absolute addresses. This works on ARMv7 onwards which
supports setting the address of the exception vectors.
New .text_inplace_exceptions section contains PC-relative branches,
enabling barebox proper to start with MMU already configured using
ELF segment addresses.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/exceptions_32.S | 20 ++++++++++++++++++++
arch/arm/cpu/interrupts_32.c | 7 ++-----
arch/arm/cpu/no-mmu.c | 11 +----------
arch/arm/include/asm/sections.h | 1 +
arch/arm/lib/pbl.lds.S | 6 +++---
arch/arm/lib32/barebox.lds.S | 4 ++++
6 files changed, 31 insertions(+), 18 deletions(-)
diff --git a/arch/arm/cpu/exceptions_32.S b/arch/arm/cpu/exceptions_32.S
index dc3d42663cbedd37947e27d449eb4ac8c3d8c3f1..85ee4ca3fd4e5aff8e1e02aa3d7375173a923072 100644
--- a/arch/arm/cpu/exceptions_32.S
+++ b/arch/arm/cpu/exceptions_32.S
@@ -155,6 +155,26 @@ ENTRY(arm_fixup_vectors)
ENDPROC(arm_fixup_vectors)
#endif
+.section .text_inplace_exceptions
+1: b 1b /* barebox_arm_reset_vector */
+#ifdef CONFIG_ARM_EXCEPTIONS
+ b undefined_instruction /* undefined instruction */
+ b software_interrupt /* software interrupt (SWI) */
+ b prefetch_abort /* prefetch abort */
+ b data_abort /* data abort */
+1: b 1b /* (reserved) */
+ b irq /* irq (interrupt) */
+ b fiq /* fiq (fast interrupt) */
+#else
+1: b 1b /* undefined instruction */
+1: b 1b /* software interrupt (SWI) */
+1: b 1b /* prefetch abort */
+1: b 1b /* data abort */
+1: b 1b /* (reserved) */
+1: b 1b /* irq (interrupt) */
+1: b 1b /* fiq (fast interrupt) */
+#endif
+
.section .text_exceptions
.globl extable
extable:
diff --git a/arch/arm/cpu/interrupts_32.c b/arch/arm/cpu/interrupts_32.c
index 0b88db10fe487378fe08018701bc672f63139fc1..5dc4802fafbfdbcd54707b84835f40177e10742b 100644
--- a/arch/arm/cpu/interrupts_32.c
+++ b/arch/arm/cpu/interrupts_32.c
@@ -231,10 +231,8 @@ static __maybe_unused int arm_init_vectors(void)
* First try to use the vectors where they actually are, works
* on ARMv7 and later.
*/
- if (!set_vector_table((unsigned long)__exceptions_start)) {
- arm_fixup_vectors();
+ if (!set_vector_table((unsigned long)__inplace_exceptions_start))
return 0;
- }
/*
* Next try high vectors at 0xffff0000.
@@ -264,7 +262,6 @@ void arm_pbl_init_exceptions(void)
if (cpu_architecture() < CPU_ARCH_ARMv7)
return;
- set_vbar((unsigned long)__exceptions_start);
- arm_fixup_vectors();
+ set_vbar((unsigned long)__inplace_exceptions_start);
}
#endif
diff --git a/arch/arm/cpu/no-mmu.c b/arch/arm/cpu/no-mmu.c
index c4ef5d1f9d55136d606c244309dbeeb8fd988784..68246d71156c7c84b9faff452cebb37132b83573 100644
--- a/arch/arm/cpu/no-mmu.c
+++ b/arch/arm/cpu/no-mmu.c
@@ -21,8 +21,6 @@
#include <asm/sections.h>
#include <asm/cputype.h>
-#define __exceptions_size (__exceptions_stop - __exceptions_start)
-
static bool has_vbar(void)
{
u32 mainid;
@@ -41,7 +39,6 @@ static bool has_vbar(void)
static int nommu_v7_vectors_init(void)
{
- void *vectors;
u32 cr;
if (cpu_architecture() < CPU_ARCH_ARMv7)
@@ -58,13 +55,7 @@ static int nommu_v7_vectors_init(void)
cr &= ~CR_V;
set_cr(cr);
- arm_fixup_vectors();
-
- vectors = xmemalign(PAGE_SIZE, PAGE_SIZE);
- memset(vectors, 0, PAGE_SIZE);
- memcpy(vectors, __exceptions_start, __exceptions_size);
-
- set_vbar((unsigned int)vectors);
+ set_vbar((unsigned int)__inplace_exceptions_start);
return 0;
}
diff --git a/arch/arm/include/asm/sections.h b/arch/arm/include/asm/sections.h
index 15b1a6482a5b148284ab47de2db1c2653909da09..bf4fb7b109a7a22d9a298257af23a11b9efe6861 100644
--- a/arch/arm/include/asm/sections.h
+++ b/arch/arm/include/asm/sections.h
@@ -13,6 +13,7 @@ extern char __dynsym_start[];
extern char __dynsym_end[];
extern char __exceptions_start[];
extern char __exceptions_stop[];
+extern char __inplace_exceptions_start[];
#endif
diff --git a/arch/arm/lib/pbl.lds.S b/arch/arm/lib/pbl.lds.S
index 9c51f5eb3a3d8256752a78e03fed851c84d92edb..53b21084cff2e3d916cd37485281f2f78166c37d 100644
--- a/arch/arm/lib/pbl.lds.S
+++ b/arch/arm/lib/pbl.lds.S
@@ -53,9 +53,9 @@ SECTIONS
*(.text_bare_init*)
__bare_init_end = .;
. = ALIGN(0x20);
- __exceptions_start = .;
- KEEP(*(.text_exceptions*))
- __exceptions_stop = .;
+ __inplace_exceptions_start = .;
+ KEEP(*(.text_inplace_exceptions*))
+ __inplace_exceptions_stop = .;
*(.text*)
}
diff --git a/arch/arm/lib32/barebox.lds.S b/arch/arm/lib32/barebox.lds.S
index c704dd6d70f3ab157ceb67dfb14760e03f2a5d62..17e0970ba4989e5213ed38ea5ff87bdf5bfa2740 100644
--- a/arch/arm/lib32/barebox.lds.S
+++ b/arch/arm/lib32/barebox.lds.S
@@ -26,6 +26,10 @@ SECTIONS
__exceptions_start = .;
KEEP(*(.text_exceptions*))
__exceptions_stop = .;
+ . = ALIGN(0x20);
+ __inplace_exceptions_start = .;
+ KEEP(*(.text_inplace_exceptions*))
+ __inplace_exceptions_stop = .;
*(.text*)
}
BAREBOX_BARE_INIT_SIZE
--
2.47.3
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v2 14/21] ARM: exceptions: make in-binary exception table const
2026-01-06 12:53 [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (12 preceding siblings ...)
2026-01-06 12:53 ` [PATCH v2 13/21] ARM: use relative jumps in exception table Sascha Hauer
@ 2026-01-06 12:53 ` Sascha Hauer
2026-01-06 14:00 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 15/21] ARM: linker script: create separate PT_LOAD segments for text, rodata, and data Sascha Hauer
` (6 subsequent siblings)
20 siblings, 1 reply; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 12:53 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
When making the .text section const we can no longer modify the code.
On ARMv5/v6 we finally use a copy of the exception table anyway, so
instead of modifying it in-place and copy afterwards, copy it first and
then modify it.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/exceptions_32.S | 40 ++++++--------------------------------
arch/arm/cpu/interrupts_32.c | 38 ++++++++++++++++++++++++++++++++++++
arch/arm/cpu/mmu_32.c | 3 +--
arch/arm/include/asm/barebox-arm.h | 4 ++--
4 files changed, 47 insertions(+), 38 deletions(-)
diff --git a/arch/arm/cpu/exceptions_32.S b/arch/arm/cpu/exceptions_32.S
index 85ee4ca3fd4e5aff8e1e02aa3d7375173a923072..386cf72e7ada73547d811e07b009ffc7fe1c4c0a 100644
--- a/arch/arm/cpu/exceptions_32.S
+++ b/arch/arm/cpu/exceptions_32.S
@@ -91,24 +91,28 @@ do_abort_\@:
.arm
.align 5
+.globl undefined_instruction
undefined_instruction:
get_bad_stack
bad_save_user_regs
bl do_undefined_instruction
.align 5
+.globl software_interrupt
software_interrupt:
get_bad_stack
bad_save_user_regs
bl do_software_interrupt
.align 5
+.globl prefetch_abort
prefetch_abort:
get_bad_stack
bad_save_user_regs
bl do_prefetch_abort
.align 5
+.globl data_abort
data_abort:
try_data_abort
get_bad_stack
@@ -116,45 +120,19 @@ data_abort:
bl do_data_abort
.align 5
+.globl irq
irq:
get_bad_stack
bad_save_user_regs
bl do_irq
.align 5
+.globl fiq
fiq:
get_bad_stack
bad_save_user_regs
bl do_fiq
-#ifdef CONFIG_ARM_EXCEPTIONS
-/*
- * With relocatable binary support the runtime exception vectors do not match
- * the addresses in the binary. We have to fix them up during runtime
- */
-ENTRY(arm_fixup_vectors)
- ldr r0, =undefined_instruction
- ldr r1, =_undefined_instruction
- str r0, [r1]
- ldr r0, =software_interrupt
- ldr r1, =_software_interrupt
- str r0, [r1]
- ldr r0, =prefetch_abort
- ldr r1, =_prefetch_abort
- str r0, [r1]
- ldr r0, =data_abort
- ldr r1, =_data_abort
- str r0, [r1]
- ldr r0, =irq
- ldr r1, =_irq
- str r0, [r1]
- ldr r0, =fiq
- ldr r1, =_fiq
- str r0, [r1]
- bx lr
-ENDPROC(arm_fixup_vectors)
-#endif
-
.section .text_inplace_exceptions
1: b 1b /* barebox_arm_reset_vector */
#ifdef CONFIG_ARM_EXCEPTIONS
@@ -187,17 +165,11 @@ extable:
1: b 1b /* (reserved) */
ldr pc, _irq /* irq (interrupt) */
ldr pc, _fiq /* fiq (fast interrupt) */
-.globl _undefined_instruction
_undefined_instruction: .word undefined_instruction
-.globl _software_interrupt
_software_interrupt: .word software_interrupt
-.globl _prefetch_abort
_prefetch_abort: .word prefetch_abort
-.globl _data_abort
_data_abort: .word data_abort
-.globl _irq
_irq: .word irq
-.globl _fiq
_fiq: .word fiq
#else
1: b 1b /* undefined instruction */
diff --git a/arch/arm/cpu/interrupts_32.c b/arch/arm/cpu/interrupts_32.c
index 5dc4802fafbfdbcd54707b84835f40177e10742b..385504834d8195201db8028073f76628198fdd1e 100644
--- a/arch/arm/cpu/interrupts_32.c
+++ b/arch/arm/cpu/interrupts_32.c
@@ -265,3 +265,41 @@ void arm_pbl_init_exceptions(void)
set_vbar((unsigned long)__inplace_exceptions_start);
}
#endif
+
+/* This struct must match the assembly exception table in exceptions_32.S */
+struct extable {
+ uint32_t reset;
+ uint32_t undefined_instruction;
+ uint32_t software_interrupt;
+ uint32_t prefetch_abort;
+ uint32_t data_abort;
+ uint32_t reserved;
+ uint32_t irq;
+ uint32_t fiq;
+ void *undefined_instruction_ptr;
+ void *software_interrupt_ptr;
+ void *prefetch_abort_ptr;
+ void *data_abort_ptr;
+ void *reserved_ptr;
+ void *irq_ptr;
+ void *fiq_ptr;
+};
+
+void undefined_instruction(void);
+void software_interrupt(void);
+void prefetch_abort(void);
+void data_abort(void);
+void irq(void);
+void fiq(void);
+
+void arm_fixup_vectors(void *_table)
+{
+ struct extable *table = _table;
+
+ table->undefined_instruction_ptr = undefined_instruction;
+ table->software_interrupt_ptr = software_interrupt;
+ table->prefetch_abort_ptr = prefetch_abort;
+ table->data_abort_ptr = irq;
+ table->irq_ptr = irq;
+ table->fiq_ptr = fiq;
+}
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index 54371e3e5973c9fe98cadef63499105134e09539..ff52e1d2b6fc763d6210630d181c7d4ed15cdadd 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -535,10 +535,9 @@ void create_vector_table(unsigned long adr)
get_pte_flags(MAP_CACHED), true);
}
- arm_fixup_vectors();
-
memset(vectors, 0, PAGE_SIZE);
memcpy(vectors, __exceptions_start, __exceptions_stop - __exceptions_start);
+ arm_fixup_vectors(vectors);
}
static void create_zero_page(void)
diff --git a/arch/arm/include/asm/barebox-arm.h b/arch/arm/include/asm/barebox-arm.h
index e1d89d5684d36f003ba8da3651ae86bda1d9b34c..276807dc484f429f22859e1eda4c88644235c57d 100644
--- a/arch/arm/include/asm/barebox-arm.h
+++ b/arch/arm/include/asm/barebox-arm.h
@@ -45,10 +45,10 @@ unsigned long arm_mem_membase_get(void);
unsigned long arm_mem_endmem_get(void);
#ifdef CONFIG_ARM_EXCEPTIONS
-void arm_fixup_vectors(void);
+void arm_fixup_vectors(void *table);
ulong arm_get_vector_table(void);
#else
-static inline void arm_fixup_vectors(void)
+static inline void arm_fixup_vectors(void *table)
{
}
static inline ulong arm_get_vector_table(void)
--
2.47.3
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v2 15/21] ARM: linker script: create separate PT_LOAD segments for text, rodata, and data
2026-01-06 12:53 [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (13 preceding siblings ...)
2026-01-06 12:53 ` [PATCH v2 14/21] ARM: exceptions: make in-binary exception table const Sascha Hauer
@ 2026-01-06 12:53 ` Sascha Hauer
2026-01-06 14:05 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 16/21] ARM: link ELF image into PBL Sascha Hauer
` (5 subsequent siblings)
20 siblings, 1 reply; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 12:53 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
Fix the linker scripts to generate three distinct PT_LOAD segments with
correct permissions instead of combining .rodata with .data.
Before this fix, the linker auto-generated only two PT_LOAD segments:
1. Text segment (PF_R|PF_X)
2. Data segment (PF_R|PF_W) - containing .rodata, .data, .bss, etc.
This caused .rodata to be mapped with write permissions when
pbl_mmu_setup_from_elf() set up MMU permissions based on ELF segments,
defeating the W^X protection that commit d9ccb0cf14 intended to provide.
With explicit PHDRS directives, we now generate three segments:
1. text segment (PF_R|PF_X): .text and related code sections
2. rodata segment (PF_R): .rodata and unwind tables
3. data segment (PF_R|PF_W): .data, .bss, and related sections
This ensures pbl_mmu_setup_from_elf() correctly maps .rodata as
read-only (MAP_CACHED_RO) instead of read-write (MAP_CACHED).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/include/asm/barebox.lds.h | 2 +-
arch/arm/lib32/barebox.lds.S | 34 ++++++++++++++++++++++------------
arch/arm/lib64/barebox.lds.S | 29 +++++++++++++++++++----------
arch/riscv/lib/barebox.lds.S | 1 +
4 files changed, 43 insertions(+), 23 deletions(-)
diff --git a/arch/arm/include/asm/barebox.lds.h b/arch/arm/include/asm/barebox.lds.h
index 72aabe155b5c9e8b9159c7da6c6f0fa1f7b93375..7d1811645762a2ce1f58b7d86bed188a93fdb711 100644
--- a/arch/arm/include/asm/barebox.lds.h
+++ b/arch/arm/include/asm/barebox.lds.h
@@ -16,7 +16,7 @@
#define BAREBOX_RELOCATION_TABLE \
.rel_dyn_start : { *(.__rel_dyn_start) } \
- .BAREBOX_RELOCATION_TYPE.dyn : { *(.BAREBOX_RELOCATION_TYPE*) } \
+ .BAREBOX_RELOCATION_TYPE.dyn : { *(.BAREBOX_RELOCATION_TYPE*) } \
.rel_dyn_end : { *(.__rel_dyn_end) } \
.__dynsym_start : { *(.__dynsym_start) } \
.dynsym : { *(.dynsym) } \
diff --git a/arch/arm/lib32/barebox.lds.S b/arch/arm/lib32/barebox.lds.S
index 17e0970ba4989e5213ed38ea5ff87bdf5bfa2740..e84118ee2f43b07bd82fcd9936398847d0a3b42f 100644
--- a/arch/arm/lib32/barebox.lds.S
+++ b/arch/arm/lib32/barebox.lds.S
@@ -7,14 +7,23 @@
OUTPUT_FORMAT(BAREBOX_OUTPUT_FORMAT)
OUTPUT_ARCH(BAREBOX_OUTPUT_ARCH)
ENTRY(start)
+
+PHDRS
+{
+ text PT_LOAD FLAGS(5); /* PF_R | PF_X */
+ rodata PT_LOAD FLAGS(4); /* PF_R */
+ data PT_LOAD FLAGS(6); /* PF_R | PF_W */
+ dynamic PT_DYNAMIC FLAGS(6); /* PF_R | PF_W */
+}
+
SECTIONS
{
. = 0x0;
- .image_start : { *(.__image_start) }
+ .image_start : { *(.__image_start) } :text
. = ALIGN(4);
- ._text : { *(._text) }
+ ._text : { *(._text) } :text
.text :
{
_stext = .;
@@ -31,7 +40,7 @@ SECTIONS
KEEP(*(.text_inplace_exceptions*))
__inplace_exceptions_stop = .;
*(.text*)
- }
+ } :text
BAREBOX_BARE_INIT_SIZE
. = ALIGN(4096);
@@ -39,7 +48,7 @@ SECTIONS
.rodata : {
*(.rodata*)
RO_DATA_SECTION
- }
+ } :rodata
#ifdef CONFIG_ARM_UNWIND
/*
@@ -50,20 +59,21 @@ SECTIONS
__start_unwind_idx = .;
*(.ARM.exidx*)
__stop_unwind_idx = .;
- }
+ } :rodata
.ARM.unwind_tab : {
__start_unwind_tab = .;
*(.ARM.extab*)
__stop_unwind_tab = .;
- }
+ } :rodata
#endif
. = ALIGN(4096);
__end_rodata = .;
_etext = .;
_sdata = .;
- . = ALIGN(4);
- .data : { *(.data*) }
+ .data : { *(.data*) } :data
+
+ .dynamic : { *(.dynamic) } :data :dynamic
. = .;
@@ -73,12 +83,12 @@ SECTIONS
BAREBOX_EFI_RUNTIME
- .image_end : { *(.__image_end) }
+ .image_end : { *(.__image_end) } :data
. = ALIGN(4);
- .__bss_start : { *(.__bss_start) }
- .bss : { *(.bss*) }
- .__bss_stop : { *(.__bss_stop) }
+ .__bss_start : { *(.__bss_start) } :data
+ .bss : { *(.bss*) } :data
+ .__bss_stop : { *(.__bss_stop) } :data
#ifdef CONFIG_ARM_SECURE_MONITOR
. = ALIGN(16);
diff --git a/arch/arm/lib64/barebox.lds.S b/arch/arm/lib64/barebox.lds.S
index 5ee5fbc3741e1f7644c00f9b37c0903c27704a3e..0278347e78c70318a03eddda77510e38d3e9f026 100644
--- a/arch/arm/lib64/barebox.lds.S
+++ b/arch/arm/lib64/barebox.lds.S
@@ -6,14 +6,23 @@
OUTPUT_FORMAT(BAREBOX_OUTPUT_FORMAT)
OUTPUT_ARCH(BAREBOX_OUTPUT_ARCH)
ENTRY(start)
+
+PHDRS
+{
+ text PT_LOAD FLAGS(5); /* PF_R | PF_X */
+ rodata PT_LOAD FLAGS(4); /* PF_R */
+ data PT_LOAD FLAGS(6); /* PF_R | PF_W */
+ dynamic PT_DYNAMIC FLAGS(6); /* PF_R | PF_W */
+}
+
SECTIONS
{
. = 0x0;
- .image_start : { *(.__image_start) }
+ .image_start : { *(.__image_start) } :text
. = ALIGN(4);
- ._text : { *(._text) }
+ ._text : { *(._text) } :text
.text :
{
_stext = .;
@@ -22,7 +31,7 @@ SECTIONS
*(.text_bare_init*)
__bare_init_end = .;
*(.text*)
- }
+ } :text
BAREBOX_BARE_INIT_SIZE
. = ALIGN(4096);
@@ -30,7 +39,7 @@ SECTIONS
.rodata : {
*(.rodata*)
RO_DATA_SECTION
- }
+ } :rodata
. = ALIGN(4096);
@@ -38,20 +47,20 @@ SECTIONS
_etext = .;
_sdata = .;
- .data : { *(.data*) }
+ .data : { *(.data*) } :data
- BAREBOX_RELOCATION_TABLE
+ .dynamic : { *(.dynamic) } :data :dynamic
_edata = .;
BAREBOX_EFI_RUNTIME
- .image_end : { *(.__image_end) }
+ .image_end : { *(.__image_end) } :data
. = ALIGN(4);
- .__bss_start : { *(.__bss_start) }
- .bss : { *(.bss*) }
- .__bss_stop : { *(.__bss_stop) }
+ .__bss_start : { *(.__bss_start) } :data
+ .bss : { *(.bss*) } :data
+ .__bss_stop : { *(.__bss_stop) } :data
_end = .;
_barebox_image_size = __bss_start;
}
diff --git a/arch/riscv/lib/barebox.lds.S b/arch/riscv/lib/barebox.lds.S
index 03b3a967193cfee1c67b96632cf972a553e8bec4..38376befe9a82ead2152f8c7fc581eb5bb35fab4 100644
--- a/arch/riscv/lib/barebox.lds.S
+++ b/arch/riscv/lib/barebox.lds.S
@@ -16,6 +16,7 @@
OUTPUT_ARCH(BAREBOX_OUTPUT_ARCH)
ENTRY(start)
OUTPUT_FORMAT(BAREBOX_OUTPUT_FORMAT)
+
SECTIONS
{
. = 0x0;
--
2.47.3
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v2 16/21] ARM: link ELF image into PBL
2026-01-06 12:53 [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (14 preceding siblings ...)
2026-01-06 12:53 ` [PATCH v2 15/21] ARM: linker script: create separate PT_LOAD segments for text, rodata, and data Sascha Hauer
@ 2026-01-06 12:53 ` Sascha Hauer
2026-01-06 14:06 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 17/21] ARM: PBL: setup MMU with proper permissions from ELF segments Sascha Hauer
` (4 subsequent siblings)
20 siblings, 1 reply; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 12:53 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
Instead of linking the raw binary barebox proper image into the PBL link
the ELF image into the PBL. With this barebox proper starts with a properly
linked and fully initialized C environment, so the calls to
relocate_to_adr() and setup_c() can be removed from barebox proper.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/Kconfig | 2 ++
arch/arm/cpu/start.c | 11 +++--------
arch/arm/cpu/uncompress.c | 26 +++++++++++++++++++-------
3 files changed, 24 insertions(+), 15 deletions(-)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 5123e9b1402c56db94df6a7a33ae993c61d51fbc..c53c58844a9411e3777711db2900b0e01cf55eec 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -18,6 +18,8 @@ config ARM
select HW_HAS_PCI
select ARCH_HAS_DMA_WRITE_COMBINE
select HAVE_EFI_LOADER if MMU # for payload unaligned accesses
+ select ELF
+ select PBL_IMAGE_ELF
default y
config ARCH_LINUX_NAME
diff --git a/arch/arm/cpu/start.c b/arch/arm/cpu/start.c
index f7d4507e71588ba5e241b24b952d55e2a4b0f794..f7062380cd5d265b3326f247a99b1847e16d64f0 100644
--- a/arch/arm/cpu/start.c
+++ b/arch/arm/cpu/start.c
@@ -127,8 +127,9 @@ static int barebox_memory_areas_init(void)
}
device_initcall(barebox_memory_areas_init);
-__noreturn __prereloc void barebox_non_pbl_start(unsigned long membase,
- unsigned long memsize, struct handoff_data *hd)
+__noreturn void barebox_non_pbl_start(unsigned long membase,
+ unsigned long memsize,
+ struct handoff_data *hd)
{
unsigned long endmem = membase + memsize;
unsigned long malloc_start, malloc_end;
@@ -139,12 +140,6 @@ __noreturn __prereloc void barebox_non_pbl_start(unsigned long membase,
if (IS_ENABLED(CONFIG_CPU_V7))
armv7_hyp_install();
- relocate_to_adr(barebox_base);
-
- setup_c();
-
- barrier();
-
pbl_barebox_break();
pr_debug("memory at 0x%08lx, size 0x%08lx\n", membase, memsize);
diff --git a/arch/arm/cpu/uncompress.c b/arch/arm/cpu/uncompress.c
index b9fc1d04db96e77c8fcd7fd1930798ea1d9294d7..8cc7102290986e71d2f3a2f34df1a9f946c56ced 100644
--- a/arch/arm/cpu/uncompress.c
+++ b/arch/arm/cpu/uncompress.c
@@ -20,6 +20,7 @@
#include <asm/mmu.h>
#include <asm/unaligned.h>
#include <compressed-dtb.h>
+#include <elf.h>
#include <debug_ll.h>
@@ -41,6 +42,8 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
void *pg_start, *pg_end;
unsigned long pc = get_pc();
void *handoff_data;
+ struct elf_image elf;
+ int ret;
/* piggy data is not relocated, so determine the bounds now */
pg_start = runtime_address(input_data);
@@ -85,21 +88,30 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
else if (IS_ENABLED(CONFIG_ARMV7R_MPU))
set_cr(get_cr() | CR_C);
- pr_debug("uncompressing barebox binary at 0x%p (size 0x%08x) to 0x%08lx (uncompressed size: 0x%08x)\n",
+ pr_debug("uncompressing barebox ELF at 0x%p (size 0x%08x) to 0x%08lx (uncompressed size: 0x%08x)\n",
pg_start, pg_len, barebox_base, uncompressed_len);
pbl_barebox_uncompress((void*)barebox_base, pg_start, pg_len);
+ pr_debug("relocating ELF in place\n");
+
+ ret = elf_open_binary_into(&elf, (void *)barebox_base);
+ if (ret)
+ panic("Failed to open ELF binary: %d\n", ret);
+
+ ret = elf_load_inplace(&elf);
+ if (ret)
+ panic("Failed to relocate ELF: %d\n", ret);
+
+ pr_debug("ELF entry point: 0x%llx\n", elf.entry);
+
+ barebox = (void *)(unsigned long)elf.entry;
+
handoff_data_move(handoff_data);
sync_caches_for_execution();
- if (IS_ENABLED(CONFIG_THUMB2_BAREBOX))
- barebox = (void *)(barebox_base + 1);
- else
- barebox = (void *)barebox_base;
-
- pr_debug("jumping to uncompressed image at 0x%p\n", barebox);
+ pr_debug("jumping to ELF entry point at 0x%p\n", barebox);
if (IS_ENABLED(CONFIG_CPU_V7) && boot_cpu_mode() == HYP_MODE)
armv7_switch_to_hyp();
--
2.47.3
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v2 17/21] ARM: PBL: setup MMU with proper permissions from ELF segments
2026-01-06 12:53 [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (15 preceding siblings ...)
2026-01-06 12:53 ` [PATCH v2 16/21] ARM: link ELF image into PBL Sascha Hauer
@ 2026-01-06 12:53 ` Sascha Hauer
2026-01-06 14:10 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 18/21] riscv: link ELF image into PBL Sascha Hauer
` (3 subsequent siblings)
20 siblings, 1 reply; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 12:53 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
Move complete MMU setup into PBL by leveraging ELF segment information
to apply correct memory permissions before jumping to barebox proper.
After ELF relocation, parse PT_LOAD segments and map each with
permissions derived from p_flags:
- Text segments (PF_R|PF_X): Read-only + executable (MAP_CODE)
- Data segments (PF_R|PF_W): Read-write (MAP_CACHED)
- RO data segments (PF_R): Read-only (ARCH_MAP_CACHED_RO)
This ensures barebox proper starts with full W^X protection already
in place, eliminating the need for complex remapping in barebox proper.
The mmu_init() function now only sets up trap pages for exception
handling.
The framework is portable - common ELF parsing in pbl/mmu.c uses
architecture-specific early_remap_range() exported from mmu_*.c.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu-common.c | 12 +++--
arch/arm/cpu/uncompress.c | 14 ++++++
include/pbl/mmu.h | 29 ++++++++++++
pbl/Makefile | 1 +
pbl/mmu.c | 111 ++++++++++++++++++++++++++++++++++++++++++++++
5 files changed, 160 insertions(+), 7 deletions(-)
diff --git a/arch/arm/cpu/mmu-common.c b/arch/arm/cpu/mmu-common.c
index 67317f127cadb138cc2e85bb18c92ab47bc1206f..e469db0544a3842b49e9d8ba3c8ce3e1c3f7a20c 100644
--- a/arch/arm/cpu/mmu-common.c
+++ b/arch/arm/cpu/mmu-common.c
@@ -127,10 +127,6 @@ static inline void remap_range_end_sans_text(unsigned long start, unsigned long
static void mmu_remap_memory_banks(void)
{
struct memory_bank *bank;
- unsigned long code_start = (unsigned long)&_stext;
- unsigned long code_size = (unsigned long)&__start_rodata - (unsigned long)&_stext;
- unsigned long rodata_start = (unsigned long)&__start_rodata;
- unsigned long rodata_size = (unsigned long)&__end_rodata - rodata_start;
/*
* Early mmu init will have mapped everything but the initial memory area
@@ -138,6 +134,10 @@ static void mmu_remap_memory_banks(void)
* all memory banks, so let's map all pages, excluding reserved memory areas
* and barebox text area cacheable.
*
+ * PBL has already set up the MMU with proper permissions for text and
+ * rodata based on ELF segment information, so we don't need to remap
+ * those here.
+ *
* This code will become much less complex once we switch over to using
* CONFIG_MEMORY_ATTRIBUTES for MMU as well.
*/
@@ -157,9 +157,7 @@ static void mmu_remap_memory_banks(void)
remap_range_end_sans_text(pos, bank->res->end + 1, MAP_CACHED);
}
- remap_range((void *)code_start, code_size, MAP_CODE);
- remap_range((void *)rodata_start, rodata_size, MAP_CACHED_RO);
-
+ /* Do this while interrupt vectors are still writable */
setup_trap_pages();
}
diff --git a/arch/arm/cpu/uncompress.c b/arch/arm/cpu/uncompress.c
index 8cc7102290986e71d2f3a2f34df1a9f946c56ced..619bd8d5b0b56ab2704a0fa1e4964bb603b761d9 100644
--- a/arch/arm/cpu/uncompress.c
+++ b/arch/arm/cpu/uncompress.c
@@ -21,6 +21,7 @@
#include <asm/unaligned.h>
#include <compressed-dtb.h>
#include <elf.h>
+#include <pbl/mmu.h>
#include <debug_ll.h>
@@ -105,6 +106,19 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
pr_debug("ELF entry point: 0x%llx\n", elf.entry);
+ /*
+ * Now that the ELF image is relocated, we know the exact addresses
+ * of all segments. Set up MMU with proper permissions based on
+ * ELF segment flags (PF_R/W/X).
+ */
+ if (IS_ENABLED(CONFIG_MMU)) {
+ ret = pbl_mmu_setup_from_elf(&elf, membase, memsize);
+ if (ret) {
+ pr_err("Failed to setup MMU from ELF: %d\n", ret);
+ hang();
+ }
+ }
+
barebox = (void *)(unsigned long)elf.entry;
handoff_data_move(handoff_data);
diff --git a/include/pbl/mmu.h b/include/pbl/mmu.h
new file mode 100644
index 0000000000000000000000000000000000000000..4a00d8e528ab5452981347185c9114235f213e2b
--- /dev/null
+++ b/include/pbl/mmu.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __PBL_MMU_H
+#define __PBL_MMU_H
+
+#include <linux/types.h>
+
+struct elf_image;
+
+/**
+ * pbl_mmu_setup_from_elf() - Configure MMU using ELF segment information
+ * @elf: ELF image structure from elf_open_binary_into()
+ * @membase: Base address of RAM
+ * @memsize: Size of RAM
+ *
+ * This function sets up the MMU with proper permissions based on ELF
+ * segment flags. It should be called after elf_load_inplace() has
+ * relocated the barebox image.
+ *
+ * Segment permissions are mapped as follows:
+ * PF_R | PF_X -> Read-only + executable (text)
+ * PF_R | PF_W -> Read-write (data, bss)
+ * PF_R -> Read-only (rodata)
+ *
+ * Return: 0 on success, negative error code on failure
+ */
+int pbl_mmu_setup_from_elf(struct elf_image *elf, unsigned long membase,
+ unsigned long memsize);
+
+#endif /* __PBL_MMU_H */
diff --git a/pbl/Makefile b/pbl/Makefile
index f66391be7b2898388425657f54afcd6e4c72e3db..b78124cdcd2a4690be11d5503006723252b4904f 100644
--- a/pbl/Makefile
+++ b/pbl/Makefile
@@ -9,3 +9,4 @@ pbl-$(CONFIG_HAVE_IMAGE_COMPRESSION) += decomp.o
pbl-$(CONFIG_LIBFDT) += fdt.o
pbl-$(CONFIG_PBL_CONSOLE) += console.o
obj-pbl-y += handoff-data.o
+obj-pbl-$(CONFIG_MMU) += mmu.o
diff --git a/pbl/mmu.c b/pbl/mmu.c
new file mode 100644
index 0000000000000000000000000000000000000000..853fdcba55699025ea1d2a49385747e29cb2debc
--- /dev/null
+++ b/pbl/mmu.c
@@ -0,0 +1,111 @@
+// SPDX-License-Identifier: GPL-2.0-only
+// SPDX-FileCopyrightText: 2025 Sascha Hauer <s.hauer@pengutronix.de>, Pengutronix
+
+#define pr_fmt(fmt) "pbl-mmu: " fmt
+
+#include <common.h>
+#include <elf.h>
+#include <mmu.h>
+#include <pbl/mmu.h>
+#include <asm/mmu.h>
+#include <linux/bits.h>
+#include <linux/sizes.h>
+
+/*
+ * Map ELF segment permissions (p_flags) to architecture MMU flags
+ */
+static unsigned int elf_flags_to_mmu_flags(u32 p_flags)
+{
+ bool readable = p_flags & PF_R;
+ bool writable = p_flags & PF_W;
+ bool executable = p_flags & PF_X;
+
+ if (readable && writable) {
+ /* Data, BSS: Read-write, cached, non-executable */
+ return MAP_CACHED;
+ } else if (readable && executable) {
+ /* Text: Read-only, cached, executable */
+ return MAP_CODE;
+ } else if (readable) {
+ /* Read-only data: Read-only, cached, non-executable */
+ return MAP_CACHED_RO;
+ } else {
+ /*
+ * Unusual: segment with no read permission.
+ * Map as uncached, non-executable for safety.
+ */
+ pr_warn("Segment with unusual permissions: flags=0x%x\n", p_flags);
+ return MAP_UNCACHED;
+ }
+}
+
+int pbl_mmu_setup_from_elf(struct elf_image *elf, unsigned long membase,
+ unsigned long memsize)
+{
+ void *phdr;
+ int i;
+ int phnum = elf_hdr_e_phnum(elf, elf->hdr_buf);
+ size_t phoff = elf_hdr_e_phoff(elf, elf->hdr_buf);
+ size_t phentsize = elf_size_of_phdr(elf);
+
+ pr_debug("Setting up MMU from ELF segments\n");
+ pr_debug("ELF entry point: 0x%llx\n", elf->entry);
+ pr_debug("ELF loaded at: 0x%p - 0x%p\n", elf->low_addr, elf->high_addr);
+
+ /*
+ * Iterate through all PT_LOAD segments and set up MMU permissions
+ * based on the segment's p_flags
+ */
+ for (i = 0; i < phnum; i++) {
+ phdr = elf->hdr_buf + phoff + i * phentsize;
+
+ if (elf_phdr_p_type(elf, phdr) != PT_LOAD)
+ continue;
+
+ u64 p_vaddr = elf_phdr_p_vaddr(elf, phdr);
+ u64 p_memsz = elf_phdr_p_memsz(elf, phdr);
+ u32 p_flags = elf_phdr_p_flags(elf, phdr);
+
+ /*
+ * Calculate actual address after relocation.
+ * For ET_EXEC: reloc_offset is 0, use p_vaddr directly
+ * For ET_DYN: reloc_offset adjusts virtual to actual address
+ */
+ unsigned long addr = p_vaddr + elf->reloc_offset;
+ unsigned long size = p_memsz;
+ unsigned long segment_end = addr + size;
+
+ /* Validate segment is within available memory */
+ if (segment_end < addr || /* overflow check */
+ addr < membase ||
+ segment_end > membase + memsize) {
+ pr_err("Segment %d outside memory bounds\n", i);
+ return -EINVAL;
+ }
+
+ /* Validate alignment - warn and round if needed */
+ if (!IS_ALIGNED(addr, PAGE_SIZE) || !IS_ALIGNED(size, PAGE_SIZE)) {
+ pr_debug("Segment %d not page-aligned, rounding\n", i);
+ size = ALIGN(size, PAGE_SIZE);
+ }
+
+ unsigned int mmu_flags = elf_flags_to_mmu_flags(p_flags);
+
+ pr_debug("Segment %d: addr=0x%08lx size=0x%08lx flags=0x%x [%c%c%c] -> mmu_flags=0x%x\n",
+ i, addr, size, p_flags,
+ (p_flags & PF_R) ? 'R' : '-',
+ (p_flags & PF_W) ? 'W' : '-',
+ (p_flags & PF_X) ? 'X' : '-',
+ mmu_flags);
+
+ /*
+ * Remap this segment with proper permissions.
+ * Use page-wise mapping to allow different permissions for
+ * different segments even if they're nearby.
+ */
+ pbl_remap_range((void *)addr, addr, size, mmu_flags);
+ }
+
+ pr_debug("MMU setup from ELF complete\n");
+ return 0;
+}
--
2.47.3
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v2 18/21] riscv: link ELF image into PBL
2026-01-06 12:53 [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (16 preceding siblings ...)
2026-01-06 12:53 ` [PATCH v2 17/21] ARM: PBL: setup MMU with proper permissions from ELF segments Sascha Hauer
@ 2026-01-06 12:53 ` Sascha Hauer
2026-01-06 14:11 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 19/21] riscv: linker script: create separate PT_LOAD segments for text, rodata, and data Sascha Hauer
` (2 subsequent siblings)
20 siblings, 1 reply; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 12:53 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
Instead of linking the raw binary barebox proper image into the PBL link
the ELF image into the PBL. With this barebox proper starts with a properly
linked and fully initialized C environment, so the calls to
relocate_to_adr() and setup_c() can be removed from barebox proper.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/riscv/Kconfig | 1 +
arch/riscv/boot/start.c | 6 ------
arch/riscv/boot/uncompress.c | 21 ++++++++++++++++++++-
3 files changed, 21 insertions(+), 7 deletions(-)
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 96d013d8514ac12de5c34a426262d85f8cf021b9..d9794354f4ed2e8bf7276e03b968c566002c2ec6 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -17,6 +17,7 @@ config RISCV
select HAS_KALLSYMS
select RISCV_TIMER if RISCV_SBI
select HW_HAS_PCI
+ select PBL_IMAGE_ELF
select HAVE_ARCH_BOARD_GENERIC_DT
select HAVE_ARCH_BOOTM_OFTREE
diff --git a/arch/riscv/boot/start.c b/arch/riscv/boot/start.c
index 5091340c8a374fc360ab732ba01ec8516e82a83d..002ab3eccb4292d8d88a95f8b93163c970cb1d64 100644
--- a/arch/riscv/boot/start.c
+++ b/arch/riscv/boot/start.c
@@ -123,12 +123,6 @@ void barebox_non_pbl_start(unsigned long membase, unsigned long memsize,
unsigned long barebox_size = barebox_image_size + MAX_BSS_SIZE;
unsigned long barebox_base = riscv_mem_barebox_image(membase, endmem, barebox_size);
- relocate_to_current_adr();
-
- setup_c();
-
- barrier();
-
irq_init_vector(riscv_mode());
pr_debug("memory at 0x%08lx, size 0x%08lx\n", membase, memsize);
diff --git a/arch/riscv/boot/uncompress.c b/arch/riscv/boot/uncompress.c
index 84142acf9c66fe1fcceb6ae63d15ac078ccddee7..fba04cf4fb6ed450d80b83a8a595346a3186f1e7 100644
--- a/arch/riscv/boot/uncompress.c
+++ b/arch/riscv/boot/uncompress.c
@@ -32,6 +32,8 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
unsigned long barebox_base;
void *pg_start, *pg_end;
unsigned long pc = get_pc();
+ struct elf_image elf;
+ int ret;
irq_init_vector(riscv_mode());
@@ -68,7 +70,24 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
sync_caches_for_execution();
- barebox = (void *)barebox_base;
+ ret = elf_open_binary_into(&elf, (void *)barebox_base);
+ if (ret) {
+ pr_err("Failed to open ELF binary: %d\n", ret);
+ hang();
+ }
+
+ ret = elf_load_inplace(&elf);
+ if (ret) {
+ pr_err("Failed to relocate ELF: %d\n", ret);
+ hang();
+ }
+
+ /*
+ * TODO: Add pbl_mmu_setup_from_elf() call when RISC-V PBL
+ * MMU support is implemented, similar to ARM
+ */
+
+ barebox = (void *)elf.entry;
pr_debug("jumping to uncompressed image at 0x%p. dtb=0x%p\n", barebox, fdt);
--
2.47.3
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v2 19/21] riscv: linker script: create separate PT_LOAD segments for text, rodata, and data
2026-01-06 12:53 [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (17 preceding siblings ...)
2026-01-06 12:53 ` [PATCH v2 18/21] riscv: link ELF image into PBL Sascha Hauer
@ 2026-01-06 12:53 ` Sascha Hauer
2026-01-06 14:12 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 20/21] riscv: Allwinner D1: Drop M-Mode Sascha Hauer
2026-01-06 12:53 ` [PATCH v2 21/21] riscv: add ELF segment-based memory protection with MMU Sascha Hauer
20 siblings, 1 reply; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 12:53 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
Fix the linker script to generate three distinct PT_LOAD segments with
correct permissions instead of combining .rodata with .data.
Before this fix, the linker auto-generated only two PT_LOAD segments:
1. Text segment (PF_R|PF_X)
2. Data segment (PF_R|PF_W) - containing .rodata, .data, .bss, etc.
This caused .rodata to be mapped with write permissions when
riscv_mmu_setup_from_elf() or riscv_pmp_setup_from_elf() set up memory
permissions based on ELF segments, defeating the W^X protection.
With explicit PHDRS directives, we now generate three segments:
1. text segment (PF_R|PF_X): .text and related code sections
2. rodata segment (PF_R): .rodata and related read-only sections
3. data segment (PF_R|PF_W): .data, .bss, and related sections
This ensures riscv_mmu_setup_from_elf() and riscv_pmp_setup_from_elf()
correctly map .rodata as read-only instead of read-write.
Also update the prelink script to handle binaries without a PT_DYNAMIC
segment, as the new PHDRS layout may result in this case.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/riscv/lib/barebox.lds.S | 37 ++++++++++++++++++++++++-------------
scripts/prelink-riscv.inc | 9 +++++++--
2 files changed, 31 insertions(+), 15 deletions(-)
diff --git a/arch/riscv/lib/barebox.lds.S b/arch/riscv/lib/barebox.lds.S
index 38376befe9a82ead2152f8c7fc581eb5bb35fab4..1565a6fedef1ade7687740240bc36f407ca880fc 100644
--- a/arch/riscv/lib/barebox.lds.S
+++ b/arch/riscv/lib/barebox.lds.S
@@ -17,14 +17,22 @@ OUTPUT_ARCH(BAREBOX_OUTPUT_ARCH)
ENTRY(start)
OUTPUT_FORMAT(BAREBOX_OUTPUT_FORMAT)
+PHDRS
+{
+ text PT_LOAD FLAGS(5); /* PF_R | PF_X */
+ rodata PT_LOAD FLAGS(4); /* PF_R */
+ data PT_LOAD FLAGS(6); /* PF_R | PF_W */
+ dynamic PT_DYNAMIC FLAGS(6); /* PF_R | PF_W */
+}
+
SECTIONS
{
. = 0x0;
- .image_start : { *(.__image_start) }
+ .image_start : { *(.__image_start) } :text
. = ALIGN(4);
- ._text : { *(._text) }
+ ._text : { *(._text) } :text
.text :
{
_stext = .;
@@ -36,44 +44,47 @@ SECTIONS
KEEP(*(.text_exceptions*))
__exceptions_stop = .;
*(.text*)
- }
+ } :text
BAREBOX_BARE_INIT_SIZE
- . = ALIGN(4);
+ . = ALIGN(4096);
__start_rodata = .;
.rodata : {
*(.rodata*)
RO_DATA_SECTION
- }
+ } :rodata
__end_rodata = .;
_etext = .;
_sdata = .;
- . = ALIGN(4);
- .data : { *(.data*) }
+ . = ALIGN(4096);
+
+ .data : { *(.data*) } :data
/DISCARD/ : { *(.rela.plt*) }
.rela.dyn : {
__rel_dyn_start = .;
*(.rel*)
__rel_dyn_end = .;
- }
+ } :data
.dynsym : {
__dynsym_start = .;
*(.dynsym)
__dynsym_end = .;
- }
+ } :data
+
+ .dynamic : { *(.dynamic) } :data :dynamic
_edata = .;
- .image_end : { *(.__image_end) }
+ .image_end : { *(.__image_end) } :data
. = ALIGN(4);
- .__bss_start : { *(.__bss_start) }
- .bss : { *(.bss*) }
- .__bss_stop : { *(.__bss_stop) }
+ .__bss_start : { *(.__bss_start) } :data
+ .bss : { *(.bss*) } :data
+ .__bss_stop : { *(.__bss_stop) } :data
_end = .;
_barebox_image_size = __bss_start;
}
diff --git a/scripts/prelink-riscv.inc b/scripts/prelink-riscv.inc
index f2b5467f5b3c19be285153d3ad7cdb210a24a94c..8a54a9737fe73827ad8cab01a61fbecc68a1140a 100644
--- a/scripts/prelink-riscv.inc
+++ b/scripts/prelink-riscv.inc
@@ -61,8 +61,13 @@ static void prelink_bonn(void *data)
}
}
- if (dyns == NULL)
- die("No dynamic section found");
+ if (dyns == NULL) {
+ /* No PT_DYNAMIC segment found - binary may not need prelinking.
+ * This can happen with statically-linked relocatable binaries
+ * that handle relocations differently. Exit successfully.
+ */
+ return;
+ }
Elf_Rela *rela_dyn = NULL;
size_t rela_count = 0;
--
2.47.3
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v2 20/21] riscv: Allwinner D1: Drop M-Mode
2026-01-06 12:53 [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (18 preceding siblings ...)
2026-01-06 12:53 ` [PATCH v2 19/21] riscv: linker script: create separate PT_LOAD segments for text, rodata, and data Sascha Hauer
@ 2026-01-06 12:53 ` Sascha Hauer
2026-01-06 13:11 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 21/21] riscv: add ELF segment-based memory protection with MMU Sascha Hauer
20 siblings, 1 reply; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 12:53 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
The Allwinner D1 selects both RISCV_M_MODE and RISCV_S_MODE. The board
code uses barebox_riscv_machine_entry() and not barebox_riscv_machine_entry()
which indicates RISCV_M_MODE was only selected by accident. Remove it.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/riscv/Kconfig.socs | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/riscv/Kconfig.socs b/arch/riscv/Kconfig.socs
index 4a3b56b5fff48c86901ed0346be490a6847ac14e..0d9984dd2888e6cab81939e3ee97ef83851362a0 100644
--- a/arch/riscv/Kconfig.socs
+++ b/arch/riscv/Kconfig.socs
@@ -123,7 +123,6 @@ if SOC_ALLWINNER_SUN20I
config BOARD_ALLWINNER_D1
bool "Allwinner D1 Nezha"
select RISCV_S_MODE
- select RISCV_M_MODE
def_bool y
endif
--
2.47.3
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v2 21/21] riscv: add ELF segment-based memory protection with MMU
2026-01-06 12:53 [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (19 preceding siblings ...)
2026-01-06 12:53 ` [PATCH v2 20/21] riscv: Allwinner D1: Drop M-Mode Sascha Hauer
@ 2026-01-06 12:53 ` Sascha Hauer
2026-01-06 14:20 ` Ahmad Fatoum
20 siblings, 1 reply; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 12:53 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
Enable hardware-enforced W^X (Write XOR Execute) memory protection through
ELF segment-based permissions using the RISC-V MMU.
This implementation provides memory protection for RISC-V S-mode using
Sv39 (RV64) or Sv32 (RV32) page tables.
Linker Script Changes:
- Add PHDRS directives to pbl.lds.S and barebox.lds.S
- Create three separate PT_LOAD segments with proper permissions:
* text segment (FLAGS(5) = PF_R|PF_X): code sections
* rodata segment (FLAGS(4) = PF_R): read-only data
* data segment (FLAGS(6) = PF_R|PF_W): data and BSS
- Add 4K alignment between segments for page-granular protection
S-mode MMU Implementation (mmu.c):
- Implement page table walking for Sv39/Sv32
- pbl_remap_range(): remap segments with ELF-derived permissions
- mmu_early_enable(): create identity mapping and enable SATP CSR
- Map ELF flags to PTE bits:
* MAP_CODE → PTE_R | PTE_X (read + execute)
* MAP_CACHED_RO → PTE_R (read only)
* MAP_CACHED → PTE_R | PTE_W (read + write)
Integration:
- Update uncompress.c to call mmu_early_enable() before decompression
(enables caching for faster decompression)
- Call pbl_mmu_setup_from_elf() after ELF relocation to apply final
segment-based permissions
- Uses portable pbl/mmu.c infrastructure to parse PT_LOAD segments
Configuration:
- Add CONFIG_MMU option (default y for RISCV_S_MODE)
- Update asm/mmu.h with ARCH_HAS_REMAP and function declarations
Security Benefits:
- Text sections are read-only and executable (cannot be modified)
- Read-only data sections are read-only and non-executable
- Data sections are read-write and non-executable (cannot be executed)
- Hardware-enforced W^X prevents code injection attacks
This matches the ARM implementation philosophy and provides genuine
security improvements on RISC-V S-mode platforms.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/riscv/Kconfig | 17 ++
arch/riscv/boot/uncompress.c | 26 ++-
arch/riscv/cpu/Makefile | 1 +
arch/riscv/cpu/mmu.c | 386 +++++++++++++++++++++++++++++++++++++++++++
arch/riscv/cpu/mmu.h | 144 ++++++++++++++++
arch/riscv/include/asm/asm.h | 3 +-
arch/riscv/include/asm/mmu.h | 44 +++++
include/mmu.h | 8 +-
8 files changed, 622 insertions(+), 7 deletions(-)
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index d9794354f4ed2e8bf7276e03b968c566002c2ec6..99562c7df8927e11d4de448ad486e49dd5a0d0fd 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -129,4 +129,21 @@ config RISCV_MULTI_MODE
config RISCV_SBI
def_bool RISCV_S_MODE
+config MMU
+ bool "MMU-based memory protection"
+ default y if RISCV_S_MODE
+ help
+ Enable MMU (Memory Management Unit) support for RISC-V S-mode.
+ This provides hardware-enforced W^X (Write XOR Execute) memory
+ protection using page tables (Sv39 for RV64, Sv32 for RV32).
+
+ The PBL sets up page table entries based on ELF segment permissions,
+ ensuring that:
+ - Text sections are read-only and executable
+ - Read-only data sections are read-only and non-executable
+ - Data sections are read-write and non-executable
+
+ Say Y if running in S-mode (supervisor mode) with virtual memory.
+ Say N if running in M-mode or if you don't need memory protection.
+
endmenu
diff --git a/arch/riscv/boot/uncompress.c b/arch/riscv/boot/uncompress.c
index fba04cf4fb6ed450d80b83a8a595346a3186f1e7..10861724acc0f44fe431a0384ce15124d9198e4c 100644
--- a/arch/riscv/boot/uncompress.c
+++ b/arch/riscv/boot/uncompress.c
@@ -10,11 +10,14 @@
#include <init.h>
#include <linux/sizes.h>
#include <pbl.h>
+#include <pbl/mmu.h>
#include <asm/barebox-riscv.h>
#include <asm-generic/memory_layout.h>
#include <asm/sections.h>
#include <asm/unaligned.h>
+#include <asm/mmu.h>
#include <asm/irq.h>
+#include <elf.h>
#include <debug_ll.h>
@@ -63,6 +66,15 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
free_mem_ptr = riscv_mem_early_malloc(membase, endmem);
free_mem_end_ptr = riscv_mem_early_malloc_end(membase, endmem);
+#ifdef CONFIG_MMU
+ /*
+ * Enable MMU early to enable caching for faster decompression.
+ * This creates an initial identity mapping that will be refined
+ * later based on ELF segments.
+ */
+ mmu_early_enable(membase, memsize, barebox_base);
+#endif
+
pr_debug("uncompressing barebox binary at 0x%p (size 0x%08x) to 0x%08lx (uncompressed size: 0x%08x)\n",
pg_start, pg_len, barebox_base, uncompressed_len);
@@ -82,10 +94,20 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
hang();
}
+ pr_debug("ELF entry point: 0x%llx\n", elf.entry);
+
/*
- * TODO: Add pbl_mmu_setup_from_elf() call when RISC-V PBL
- * MMU support is implemented, similar to ARM
+ * Now that the ELF image is relocated, we know the exact addresses
+ * of all segments. Set up MMU with proper permissions based on
+ * ELF segment flags (PF_R/W/X).
*/
+#ifdef CONFIG_MMU
+ ret = pbl_mmu_setup_from_elf(&elf, membase, memsize);
+ if (ret) {
+ pr_err("Failed to setup memory protection from ELF: %d\n", ret);
+ hang();
+ }
+#endif
barebox = (void *)elf.entry;
diff --git a/arch/riscv/cpu/Makefile b/arch/riscv/cpu/Makefile
index d79bafc6f142a0060d2a86078f0fb969b298ba98..6bf31b574cd6242df6393fbdc8accc08dceb822a 100644
--- a/arch/riscv/cpu/Makefile
+++ b/arch/riscv/cpu/Makefile
@@ -7,3 +7,4 @@ obj-pbl-$(CONFIG_RISCV_M_MODE) += mtrap.o
obj-pbl-$(CONFIG_RISCV_S_MODE) += strap.o
obj-pbl-y += interrupts.o
endif
+obj-pbl-$(CONFIG_MMU) += mmu.o
diff --git a/arch/riscv/cpu/mmu.c b/arch/riscv/cpu/mmu.c
new file mode 100644
index 0000000000000000000000000000000000000000..6cf4586f364c98dd69105dfa1c558b560755b7d4
--- /dev/null
+++ b/arch/riscv/cpu/mmu.c
@@ -0,0 +1,386 @@
+// SPDX-License-Identifier: GPL-2.0-only
+// SPDX-FileCopyrightText: 2026 Sascha Hauer <s.hauer@pengutronix.de>, Pengutronix
+
+#define pr_fmt(fmt) "mmu: " fmt
+
+#include <common.h>
+#include <init.h>
+#include <mmu.h>
+#include <errno.h>
+#include <linux/sizes.h>
+#include <linux/bitops.h>
+#include <asm/sections.h>
+
+#include "mmu.h"
+
+#ifdef __PBL__
+
+/*
+ * Page table storage for early MMU setup in PBL.
+ * Static allocation before BSS is available.
+ */
+static char early_pt_storage[RISCV_EARLY_PAGETABLE_SIZE] __aligned(RISCV_PGSIZE);
+static unsigned int early_pt_idx;
+
+/*
+ * Allocate a page table from the early PBL storage
+ */
+static pte_t *alloc_pte(void)
+{
+ pte_t *pt;
+
+ if ((early_pt_idx + 1) * RISCV_PGSIZE >= RISCV_EARLY_PAGETABLE_SIZE) {
+ pr_err("Out of early page table memory (need more than %d KB)\n",
+ RISCV_EARLY_PAGETABLE_SIZE / 1024);
+ hang();
+ }
+
+ pt = (pte_t *)(early_pt_storage + early_pt_idx * RISCV_PGSIZE);
+ early_pt_idx++;
+
+ /* Clear the page table */
+ memset(pt, 0, RISCV_PGSIZE);
+
+ return pt;
+}
+
+/*
+ * split_pte - Split a megapage/gigapage PTE into a page table
+ * @pte: Pointer to the PTE to split
+ * @level: Current page table level (0-2 for Sv39)
+ *
+ * This function takes a leaf PTE (megapage/gigapage) and converts it into
+ * a page table pointer with 512 entries, each covering 1/512th of the
+ * original range with identical permissions.
+ *
+ * Example: A 2MB megapage at Level 1 becomes a Level 2 page table with
+ * 512 × 4KB pages, all with the same R/W/X attributes.
+ */
+static void split_pte(pte_t *pte, int level)
+{
+ pte_t old_pte = *pte;
+ pte_t *new_table;
+ pte_t phys_base;
+ pte_t attrs;
+ unsigned long granularity;
+ int i;
+
+ /* If already a table pointer (no RWX bits), nothing to do */
+ if (!(*pte & (PTE_R | PTE_W | PTE_X)))
+ return;
+
+ /* Allocate new page table (512 entries × 8 bytes = 4KB) */
+ new_table = alloc_pte();
+
+ /* Extract physical base address from old PTE */
+ phys_base = (old_pte >> PTE_PPN_SHIFT) << RISCV_PGSHIFT;
+
+ /* Extract permission attributes to replicate */
+ attrs = old_pte & (PTE_R | PTE_W | PTE_X | PTE_A | PTE_D | PTE_U | PTE_G);
+
+ /*
+ * Calculate granularity of child level.
+ * Level 0 (1GB) → Level 1 (2MB): granularity = 2MB = 1 << 21
+ * Level 1 (2MB) → Level 2 (4KB): granularity = 4KB = 1 << 12
+ *
+ * Formula: granularity = 1 << (12 + 9 * (Levels - 2 - level))
+ * For Sv39 (3 levels):
+ * level=0: 1 << (12 + 9*1) = 2MB
+ * level=1: 1 << (12 + 9*0) = 4KB
+ */
+ granularity = 1UL << (RISCV_PGSHIFT + RISCV_PGLEVEL_BITS *
+ (RISCV_PGTABLE_LEVELS - 2 - level));
+
+ /* Populate new table: replicate old mapping across 512 entries */
+ for (i = 0; i < RISCV_PTE_ENTRIES; i++) {
+ unsigned long new_phys = phys_base + (i * granularity);
+ pte_t new_pte = ((new_phys >> RISCV_PGSHIFT) << PTE_PPN_SHIFT) |
+ attrs | PTE_V;
+ new_table[i] = new_pte;
+ }
+
+ /*
+ * Replace old leaf PTE with table pointer.
+ * No RWX bits = pointer to next level.
+ */
+ *pte = (((unsigned long)new_table >> RISCV_PGSHIFT) << PTE_PPN_SHIFT) | PTE_V;
+
+ pr_debug("Split level %d PTE at phys=0x%llx (granularity=%lu KB)\n",
+ level, (unsigned long long)phys_base, granularity / 1024);
+}
+
+/*
+ * Get the root page table base
+ */
+static pte_t *get_ttb(void)
+{
+ return (pte_t *)early_pt_storage;
+}
+
+/*
+ * Convert maptype flags to PTE permission bits
+ */
+static unsigned long flags_to_pte(maptype_t flags)
+{
+ unsigned long pte = PTE_V; /* Valid bit always set */
+
+ /*
+ * Map barebox memory types to RISC-V PTE flags:
+ * - ARCH_MAP_CACHED_RWX: read + write + execute (early boot, full RAM access)
+ * - MAP_CODE: read + execute (text sections)
+ * - MAP_CACHED_RO: read only (rodata sections)
+ * - MAP_CACHED: read + write (data/bss sections)
+ * - MAP_UNCACHED: read + write, uncached (device memory)
+ */
+ switch (flags & MAP_TYPE_MASK) {
+ case ARCH_MAP_CACHED_RWX:
+ /* Full access for early boot: R + W + X */
+ pte |= PTE_R | PTE_W | PTE_X;
+ break;
+ case MAP_CACHED_RO:
+ /* Read-only data: R, no W, no X */
+ pte |= PTE_R;
+ break;
+ case MAP_CODE:
+ /* Code: R + X, no W */
+ pte |= PTE_R | PTE_X;
+ break;
+ case MAP_CACHED:
+ case MAP_UNCACHED:
+ default:
+ /* Data or uncached: R + W, no X */
+ pte |= PTE_R | PTE_W;
+ break;
+ }
+
+ /* Set accessed and dirty bits to avoid hardware updates */
+ pte |= PTE_A | PTE_D;
+
+ return pte;
+}
+
+/*
+ * Walk page tables and get/create PTE for given address at specified level
+ */
+static pte_t *walk_pgtable(unsigned long addr, int target_level)
+{
+ pte_t *table = get_ttb();
+ int level;
+
+ for (level = 0; level < target_level; level++) {
+ unsigned int index = VPN(addr, RISCV_PGTABLE_LEVELS - 1 - level);
+ pte_t *pte = &table[index];
+
+ if (!(*pte & PTE_V)) {
+ /* Entry not valid - allocate new page table */
+ pte_t *new_table = alloc_pte();
+ pte_t new_pte = ((unsigned long)new_table >> RISCV_PGSHIFT) << PTE_PPN_SHIFT;
+ new_pte |= PTE_V;
+ *pte = new_pte;
+ table = new_table;
+ } else if (*pte & (PTE_R | PTE_W | PTE_X)) {
+ /* This is a leaf PTE - split it before descending */
+ split_pte(pte, level);
+ /* After split, PTE is now a table pointer - follow it */
+ table = (pte_t *)(((*pte >> PTE_PPN_SHIFT) << RISCV_PGSHIFT));
+ } else {
+ /* Valid non-leaf PTE - follow to next level */
+ table = (pte_t *)(((*pte >> PTE_PPN_SHIFT) << RISCV_PGSHIFT));
+ }
+ }
+
+ return table;
+}
+
+/*
+ * Create a page table entry mapping virt -> phys with given permissions
+ */
+static void create_pte(unsigned long virt, phys_addr_t phys, maptype_t flags)
+{
+ pte_t *table;
+ unsigned int index;
+ pte_t pte;
+
+ /* Walk to leaf level page table */
+ table = walk_pgtable(virt, RISCV_PGTABLE_LEVELS - 1);
+
+ /* Get index for this address at leaf level */
+ index = VPN(virt, 0);
+
+ /* Build PTE: PPN + flags */
+ pte = (phys >> RISCV_PGSHIFT) << PTE_PPN_SHIFT;
+ pte |= flags_to_pte(flags);
+
+ /* Write PTE */
+ table[index] = pte;
+}
+
+/*
+ * create_megapage - Create a 2MB megapage mapping
+ * @virt: Virtual address (should be 2MB-aligned)
+ * @phys: Physical address (should be 2MB-aligned)
+ * @flags: Mapping flags (MAP_CACHED, etc.)
+ *
+ * Creates a leaf PTE at Level 1 covering 2MB. This is identical to a 4KB
+ * PTE except it's placed at Level 1 instead of Level 2, saving page tables.
+ */
+static void create_megapage(unsigned long virt, phys_addr_t phys, maptype_t flags)
+{
+ pte_t *table;
+ unsigned int index;
+ pte_t pte;
+
+ /* Walk to Level 1 (one level above 4KB leaf) */
+ table = walk_pgtable(virt, RISCV_PGTABLE_LEVELS - 2);
+
+ /* Get VPN[1] index for this address at Level 1 */
+ index = VPN(virt, 1);
+
+ /* Build leaf PTE at Level 1: PPN + RWX flags make it a megapage */
+ pte = (phys >> RISCV_PGSHIFT) << PTE_PPN_SHIFT;
+ pte |= flags_to_pte(flags);
+
+ /* Write megapage PTE */
+ table[index] = pte;
+}
+
+/*
+ * pbl_remap_range - Remap a virtual address range with specified permissions
+ *
+ * This is called by the portable pbl/mmu.c code after ELF relocation to set up
+ * proper memory protection based on ELF segment flags.
+ */
+void pbl_remap_range(void *virt, phys_addr_t phys, size_t size, maptype_t flags)
+{
+ unsigned long addr = (unsigned long)virt;
+ unsigned long end = addr + size;
+
+ pr_debug("Remapping 0x%08lx-0x%08lx -> 0x%08llx (flags=0x%x)\n",
+ addr, end, (unsigned long long)phys, flags);
+
+ /* Align to page boundaries */
+ addr &= ~(RISCV_PGSIZE - 1);
+ end = ALIGN(end, RISCV_PGSIZE);
+
+ /* Create page table entries for each page in the range */
+ while (addr < end) {
+ create_pte(addr, phys, flags);
+ addr += RISCV_PGSIZE;
+ phys += RISCV_PGSIZE;
+ }
+
+ /* Flush TLB for the remapped range */
+ sfence_vma();
+}
+
+/*
+ * mmu_early_enable - Set up initial MMU with identity mapping
+ *
+ * Called before barebox decompression to enable caching for faster decompression.
+ * Creates a simple identity map of all RAM with RWX permissions.
+ */
+void mmu_early_enable(unsigned long membase, unsigned long memsize,
+ unsigned long barebox_base)
+{
+ unsigned long addr;
+ unsigned long end = membase + memsize;
+ unsigned long satp;
+
+ pr_debug("Enabling MMU: mem=0x%08lx-0x%08lx barebox=0x%08lx\n",
+ membase, end, barebox_base);
+
+ /* Reset page table allocator */
+ early_pt_idx = 0;
+
+ /* Allocate root page table */
+ (void)alloc_pte();
+
+ pr_debug("Creating flat identity mapping...\n");
+
+ /*
+ * Create a flat identity mapping of the lower address space as uncached.
+ * This ensures I/O devices (UART, etc.) are accessible after MMU is enabled.
+ * RV64: Map lower 4GB using 2MB megapages (2048 entries).
+ * RV32: Map entire 4GB using 4MB superpages (1024 entries in root table).
+ */
+ addr = 0;
+ do {
+ create_megapage(addr, addr, MAP_UNCACHED);
+ addr += RISCV_L1_SIZE;
+ } while (lower_32_bits(addr) != 0); /* Wraps around to 0 after 0xFFFFFFFF */
+
+ /*
+ * Remap RAM as cached with RWX permissions using superpages.
+ * This overwrites the uncached mappings for RAM regions, providing
+ * better performance. Later, pbl_mmu_setup_from_elf() will split
+ * superpages as needed to set fine-grained permissions based on ELF segments.
+ */
+ pr_debug("Remapping RAM 0x%08lx-0x%08lx as cached RWX...\n", membase, end);
+ for (addr = membase; addr < end; addr += RISCV_L1_SIZE)
+ create_megapage(addr, addr, ARCH_MAP_CACHED_RWX);
+
+ pr_debug("Page table setup complete, used %lu KB\n",
+ (early_pt_idx * RISCV_PGSIZE) / 1024);
+
+ /*
+ * Enable MMU by setting SATP CSR:
+ * - MODE field: Sv39 (RV64) or Sv32 (RV32)
+ * - ASID: 0 (no address space ID)
+ * - PPN: physical address of root page table
+ */
+ satp = SATP_MODE | (((unsigned long)get_ttb() >> RISCV_PGSHIFT) & SATP_PPN_MASK);
+
+ pr_debug("Enabling MMU: SATP=0x%08lx\n", satp);
+
+ /* Synchronize before enabling MMU */
+ sfence_vma();
+
+ /* Enable MMU */
+ csr_write(satp, satp);
+
+ /* Synchronize after enabling MMU */
+ sfence_vma();
+
+ pr_debug("MMU enabled with %lu %spages for RAM\n",
+ (memsize / RISCV_L1_SIZE),
+ IS_ENABLED(CONFIG_64BIT) ? "2MB mega" : "4MB super");
+}
+
+#else /* !__PBL__ */
+
+/*
+ * arch_remap_range - Remap a virtual address range (barebox proper)
+ *
+ * This is the non-PBL version used in barebox proper after full relocation.
+ * Currently provides basic remapping support. For full MMU management in
+ * barebox proper, this would need to be extended with:
+ * - Dynamic page table allocation
+ * - Cache flushing for non-cached mappings
+ * - TLB management
+ * - Support for MAP_FAULT (guard pages)
+ */
+int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size,
+ maptype_t map_type)
+{
+ /*
+ * For now, only allow identity mappings that match the default
+ * cached mapping. This is sufficient for most barebox proper use cases
+ * where the PBL has already set up the basic MMU configuration.
+ *
+ * TODO: Implement full remapping support for:
+ * - Non-identity mappings
+ * - Uncached device memory (MAP_UNCACHED)
+ * - Guard pages (MAP_FAULT)
+ */
+ if (phys_addr == virt_to_phys(virt_addr) &&
+ maptype_is_compatible(map_type, MAP_ARCH_DEFAULT))
+ return 0;
+
+ pr_warn("arch_remap_range: non-identity or non-default mapping not yet supported\n");
+ pr_warn(" virt=0x%p phys=0x%pad size=0x%zx type=0x%x\n",
+ virt_addr, &phys_addr, size, map_type);
+
+ return -ENOSYS;
+}
+
+#endif /* __PBL__ */
diff --git a/arch/riscv/cpu/mmu.h b/arch/riscv/cpu/mmu.h
new file mode 100644
index 0000000000000000000000000000000000000000..dda1e30ad97cd7c5cf99867735c5376c22edf938
--- /dev/null
+++ b/arch/riscv/cpu/mmu.h
@@ -0,0 +1,144 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* SPDX-FileCopyrightText: 2026 Sascha Hauer <s.hauer@pengutronix.de>, Pengutronix */
+
+#ifndef __RISCV_CPU_MMU_H
+#define __RISCV_CPU_MMU_H
+
+#include <linux/types.h>
+
+/*
+ * RISC-V MMU constants for Sv39 (RV64) and Sv32 (RV32) page tables
+ */
+
+/* Page table configuration */
+#define RISCV_PGSHIFT 12
+#define RISCV_PGSIZE (1UL << RISCV_PGSHIFT) /* 4KB */
+
+#ifdef CONFIG_64BIT
+/* Sv39: 9-bit VPN fields, 512 entries per table */
+#define RISCV_PGLEVEL_BITS 9
+#define RISCV_PGTABLE_ENTRIES 512
+#else
+/* Sv32: 10-bit VPN fields, 1024 entries per table */
+#define RISCV_PGLEVEL_BITS 10
+#define RISCV_PGTABLE_ENTRIES 1024
+#endif
+
+/* Page table entry (PTE) bit definitions */
+#define PTE_V (1UL << 0) /* Valid */
+#define PTE_R (1UL << 1) /* Read */
+#define PTE_W (1UL << 2) /* Write */
+#define PTE_X (1UL << 3) /* Execute */
+#define PTE_U (1UL << 4) /* User accessible */
+#define PTE_G (1UL << 5) /* Global mapping */
+#define PTE_A (1UL << 6) /* Accessed */
+#define PTE_D (1UL << 7) /* Dirty */
+#define PTE_RSW_MASK (3UL << 8) /* Reserved for software */
+
+/* PTE physical page number (PPN) field position */
+#define PTE_PPN_SHIFT 10
+
+#ifdef CONFIG_64BIT
+/*
+ * Sv39: 39-bit virtual addressing, 3-level page tables
+ * Virtual address format: [38:30] VPN[2], [29:21] VPN[1], [20:12] VPN[0], [11:0] offset
+ */
+#define SATP_MODE_SV39 (8UL << 60)
+#define SATP_MODE SATP_MODE_SV39
+#define RISCV_PGTABLE_LEVELS 3
+#define VA_BITS 39
+#else
+/*
+ * Sv32: 32-bit virtual addressing, 2-level page tables
+ * Virtual address format: [31:22] VPN[1], [21:12] VPN[0], [11:0] offset
+ */
+#define SATP_MODE_SV32 (1UL << 31)
+#define SATP_MODE SATP_MODE_SV32
+#define RISCV_PGTABLE_LEVELS 2
+#define VA_BITS 32
+#endif
+
+/* SATP register fields */
+#ifdef CONFIG_64BIT
+#define SATP_PPN_MASK ((1ULL << 44) - 1) /* Physical page number (Sv39) */
+#else
+#define SATP_PPN_MASK ((1UL << 22) - 1) /* Physical page number (Sv32) */
+#endif
+#define SATP_ASID_SHIFT 44
+#define SATP_ASID_MASK (0xFFFFUL << SATP_ASID_SHIFT)
+
+/* Extract VPN (Virtual Page Number) from virtual address */
+#define VPN_MASK ((1UL << RISCV_PGLEVEL_BITS) - 1)
+#define VPN(addr, level) (((addr) >> (RISCV_PGSHIFT + (level) * RISCV_PGLEVEL_BITS)) & VPN_MASK)
+
+/* RISC-V page sizes by level */
+#ifdef CONFIG_64BIT
+/* Sv39: 3-level page tables */
+#define RISCV_L2_SHIFT 30 /* 1GB gigapages (Level 0 in Sv39) */
+#define RISCV_L1_SHIFT 21 /* 2MB megapages (Level 1 in Sv39) */
+#define RISCV_L0_SHIFT 12 /* 4KB pages (Level 2 in Sv39) */
+#else
+/* Sv32: 2-level page tables */
+#define RISCV_L1_SHIFT 22 /* 4MB superpages (Level 0 in Sv32) */
+#define RISCV_L0_SHIFT 12 /* 4KB pages (Level 1 in Sv32) */
+#endif
+
+#ifdef CONFIG_64BIT
+#define RISCV_L2_SIZE (1UL << RISCV_L2_SHIFT) /* 1GB (RV64 only) */
+#endif
+#define RISCV_L1_SIZE (1UL << RISCV_L1_SHIFT) /* 2MB (RV64) or 4MB (RV32) */
+#define RISCV_L0_SIZE (1UL << RISCV_L0_SHIFT) /* 4KB */
+
+/* Number of entries per page table (use RISCV_PGTABLE_ENTRIES instead) */
+#define RISCV_PTE_ENTRIES RISCV_PGTABLE_ENTRIES
+
+/* PTE type - 64-bit on RV64, 32-bit on RV32 */
+#ifdef CONFIG_64BIT
+typedef uint64_t pte_t;
+#else
+typedef uint32_t pte_t;
+#endif
+
+/* Early page table allocation size (PBL) */
+#ifdef CONFIG_64BIT
+/* Sv39: 3 levels, allocate space for root + worst case intermediate tables */
+#define RISCV_EARLY_PAGETABLE_SIZE (64 * 1024) /* 64KB */
+#else
+/* Sv32: 2 levels, smaller allocation */
+#define RISCV_EARLY_PAGETABLE_SIZE (32 * 1024) /* 32KB */
+#endif
+
+#ifndef __ASSEMBLY__
+
+/* CSR access */
+#define csr_read(csr) \
+({ \
+ unsigned long __v; \
+ __asm__ __volatile__ ("csrr %0, " #csr \
+ : "=r" (__v) : \
+ : "memory"); \
+ __v; \
+})
+
+#define csr_write(csr, val) \
+({ \
+ unsigned long __v = (unsigned long)(val); \
+ __asm__ __volatile__ ("csrw " #csr ", %0" \
+ : : "rK" (__v) \
+ : "memory"); \
+})
+
+/* SFENCE.VMA - Synchronize updates to page tables */
+static inline void sfence_vma(void)
+{
+ __asm__ __volatile__ ("sfence.vma" : : : "memory");
+}
+
+static inline void sfence_vma_addr(unsigned long addr)
+{
+ __asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory");
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __RISCV_CPU_MMU_H */
diff --git a/arch/riscv/include/asm/asm.h b/arch/riscv/include/asm/asm.h
index 9c992a88d858fe6105f16978849f3a564d42b85f..23f60b615ea91e680c57b7b65b868260a761de5e 100644
--- a/arch/riscv/include/asm/asm.h
+++ b/arch/riscv/include/asm/asm.h
@@ -9,7 +9,8 @@
#ifdef __ASSEMBLY__
#define __ASM_STR(x) x
#else
-#define __ASM_STR(x) #x
+#define __ASM_STR_HELPER(x) #x
+#define __ASM_STR(x) __ASM_STR_HELPER(x)
#endif
#if __riscv_xlen == 64
diff --git a/arch/riscv/include/asm/mmu.h b/arch/riscv/include/asm/mmu.h
index 1c2646ebb3393f120ad7208109372fef8bc32e81..63878b4bb4f13287ffe86f86425a167c0beb852b 100644
--- a/arch/riscv/include/asm/mmu.h
+++ b/arch/riscv/include/asm/mmu.h
@@ -3,6 +3,50 @@
#ifndef __ASM_MMU_H
#define __ASM_MMU_H
+#include <linux/types.h>
+
+/*
+ * RISC-V supports memory protection through two mechanisms:
+ * - S-mode: Virtual memory with page tables (MMU)
+ * - M-mode: Physical Memory Protection (PMP) regions
+ */
+
+#if defined(CONFIG_MMU) || defined(CONFIG_RISCV_PMP)
+#define ARCH_HAS_REMAP
+#define MAP_ARCH_DEFAULT MAP_CACHED
+
+/* Architecture-specific memory type flags */
+#define ARCH_MAP_CACHED_RWX MAP_ARCH(2) /* Cached, RWX (early boot) */
+#define ARCH_MAP_FLAG_PAGEWISE (1 << 16) /* Force page-wise mapping */
+
+#ifdef __PBL__
+/*
+ * PBL remap function - used by pbl/mmu.c to apply ELF segment permissions.
+ * Implementation is in arch/riscv/cpu/mmu.c (S-mode) or pmp.c (M-mode).
+ */
+void pbl_remap_range(void *virt, phys_addr_t phys, size_t size, maptype_t flags);
+
+/*
+ * Early MMU/PMP setup - called before decompression for performance.
+ * S-mode: Sets up basic page tables and enables MMU via SATP CSR.
+ * M-mode: Configures initial PMP regions.
+ */
+void mmu_early_enable(unsigned long membase, unsigned long memsize,
+ unsigned long barebox_base);
+#endif /* __PBL__ */
+
+/*
+ * Remap a virtual address range with specified memory type (barebox proper).
+ * Used by the generic remap infrastructure after barebox is fully relocated.
+ * Implementation is in arch/riscv/cpu/mmu.c (S-mode) or pmp.c (M-mode).
+ */
+int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size,
+ maptype_t map_type);
+
+#else
#define MAP_ARCH_DEFAULT MAP_UNCACHED
+#endif
+
+#include <mmu.h>
#endif /* __ASM_MMU_H */
diff --git a/include/mmu.h b/include/mmu.h
index 32d9a7aca3b9a61d542bf3e21e27f1ac51f43ee2..ecd44d4ac756009cd44d7dedbda5a13f1ca3f93d 100644
--- a/include/mmu.h
+++ b/include/mmu.h
@@ -20,6 +20,9 @@
#define MAP_TYPE_MASK 0xFFFF
#define MAP_ARCH(x) ((u16)~(x))
+#include <asm/mmu.h>
+#include <asm/io.h>
+
/*
* Depending on the architecture the default mapping can be
* cached or uncached. Without ARCH_HAS_REMAP being set this
@@ -27,9 +30,6 @@
*/
#define MAP_DEFAULT MAP_ARCH_DEFAULT
-#include <asm/mmu.h>
-#include <asm/io.h>
-
static inline bool maptype_is_compatible(maptype_t active, maptype_t check)
{
active &= MAP_TYPE_MASK;
@@ -47,7 +47,7 @@ static inline bool maptype_is_compatible(maptype_t active, maptype_t check)
static inline int arch_remap_range(void *virt_addr, phys_addr_t phys_addr,
size_t size, maptype_t map_type)
{
- if (maptype_is_compatible(map_type, MAP_ARCH_DEFAULT) &&
+ if (maptype_is_compatible(map_type, MAP_DEFAULT) &&
phys_addr == virt_to_phys(virt_addr))
return 0;
--
2.47.3
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 01/21] elf: only accept images matching the native ELF_CLASS
2026-01-06 12:53 ` [PATCH v2 01/21] elf: only accept images matching the native ELF_CLASS Sascha Hauer
@ 2026-01-06 12:58 ` Ahmad Fatoum
0 siblings, 0 replies; 46+ messages in thread
From: Ahmad Fatoum @ 2026-01-06 12:58 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5
On 1/6/26 1:53 PM, Sascha Hauer wrote:
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
> ---
> common/elf.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/common/elf.c b/common/elf.c
> index 18c541bf827e6077e64c15f62cb4abedc68cf278..a0e67a9353a12779ec841c53db7f6dba47070d8d 100644
> --- a/common/elf.c
> +++ b/common/elf.c
> @@ -213,6 +213,9 @@ static int elf_check_image(struct elf_image *elf, void *buf)
> return -ENOEXEC;
> }
>
> + if (elf->class != ELF_CLASS)
> + return -EINVAL;
> +
> if (!elf_hdr_e_phnum(elf, buf)) {
> pr_err("No phdr found.\n");
> return -ENOEXEC;
>
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 04/21] ARM: implement elf_apply_relocations() for ELF relocation support
2026-01-06 12:53 ` [PATCH v2 04/21] ARM: implement elf_apply_relocations() for ELF " Sascha Hauer
@ 2026-01-06 13:07 ` Ahmad Fatoum
2026-01-06 14:25 ` Ahmad Fatoum
0 siblings, 1 reply; 46+ messages in thread
From: Ahmad Fatoum @ 2026-01-06 13:07 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5
Hello,
On 1/6/26 1:53 PM, Sascha Hauer wrote:
> Implement architecture-specific ELF relocation handlers for ARM32 and ARM64.
> The implementation reuses the existing relocate_image().
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
> Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
But a bit more verbose than needed:
> --- a/arch/arm/lib32/reloc.c
> +++ b/arch/arm/lib32/reloc.c
> @@ -6,6 +6,7 @@
> #include <barebox.h>
> #include <elf.h>
> #include <debug_ll.h>
> +#include <linux/printk.h>
Unused?
> +/*
> + * Apply ARM32 ELF relocations
> + */
> +int elf_apply_relocations(struct elf_image *elf, const void *dyn_seg)
> +{
> + Elf32_Rel *rel;
> + void *rel_ptr;
> + u64 relsz;
> + phys_addr_t base = (phys_addr_t)elf->reloc_offset;
> + int ret;
> +
> + ret = elf_parse_dynamic_section_rel(elf, dyn_seg, &rel_ptr, &relsz);
> + if (ret)
> + return ret;
> +
> + rel = (Elf32_Rel *)rel_ptr;
Nitpick: rel can be dropped and rel_ptr used instead.
> +
> + relocate_image(base, rel, (void *)rel + relsz, NULL, NULL);
> +
> + return 0;
> +}
> diff --git a/arch/arm/lib64/reloc.c b/arch/arm/lib64/reloc.c
> index 2288f9e2e336887c5edfbf6b080f487394754113..50bd0b88fae0a59a6a86a84c1df0743ac158e06c 100644
> --- a/arch/arm/lib64/reloc.c
> +++ b/arch/arm/lib64/reloc.c
> @@ -7,8 +7,7 @@
> #include <elf.h>
> #include <debug_ll.h>
> #include <asm/reloc.h>
> -
> -#define R_AARCH64_RELATIVE 1027
> +#include <linux/printk.h>
Unused?
> +/*
> + * Apply ARM64 ELF relocations
> + */
> +int elf_apply_relocations(struct elf_image *elf, const void *dyn_seg)
> +{
> + Elf64_Rela *rela;
> + void *rel_ptr;
> + u64 relasz;
> + phys_addr_t base = (phys_addr_t)elf->reloc_offset;
> + int ret;
> +
> + ret = elf_parse_dynamic_section_rela(elf, dyn_seg, &rel_ptr, &relasz);
> + if (ret)
> + return ret;
> +
> + rela = (Elf64_Rela *)rel_ptr;
Same thing.
> +
> + relocate_image(base, rela, (void *)rela + relasz, NULL, NULL);
> +
> + return 0;
> +}
Cheers,
Ahmad
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 05/21] riscv: define generic relocate_image
2026-01-06 12:53 ` [PATCH v2 05/21] riscv: define generic relocate_image Sascha Hauer
@ 2026-01-06 13:10 ` Ahmad Fatoum
2026-01-06 13:11 ` Sascha Hauer
0 siblings, 1 reply; 46+ messages in thread
From: Ahmad Fatoum @ 2026-01-06 13:10 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5
Hi,
On 1/6/26 1:53 PM, Sascha Hauer wrote:
> For use by the ELF loader in PBL to relocate barebox proper, export a
> new relocate_image capable of relocating barebox and implement
> relocate_to_current_adr() in terms of it.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
> - dstart = runtime_address(__rel_dyn_start);
> - dend = runtime_address(__rel_dyn_end);
> - dynsym = runtime_address(__dynsym_start);
Interestingly dynend wasn't used here and no memset, unlike for ARM and
also it needs dynsym even with RELA...
Cheers,
Ahmad
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 06/21] riscv: implement elf_apply_relocations() for ELF relocation support
2026-01-06 12:53 ` [PATCH v2 06/21] riscv: implement elf_apply_relocations() for ELF relocation support Sascha Hauer
@ 2026-01-06 13:11 ` Ahmad Fatoum
0 siblings, 0 replies; 46+ messages in thread
From: Ahmad Fatoum @ 2026-01-06 13:11 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5
On 1/6/26 1:53 PM, Sascha Hauer wrote:
> Add architecture-specific ELF relocation support for RISC-V,
> enabling dynamic relocation of position-independent ELF binaries.
> The implemetation reuses the existing relocate_image().
>
> 🤖 Generated with [Claude Code](https://claude.com/claude-code)
>
> Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
With below point addressed:
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
> ---
> arch/riscv/lib/reloc.c | 55 ++++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 55 insertions(+)
>
> diff --git a/arch/riscv/lib/reloc.c b/arch/riscv/lib/reloc.c
> index 18b13a7013cff4032c12b999470f265dbda13c51..9a5004cf5b762b39bb98f7f0ab112c204185b33a 100644
> --- a/arch/riscv/lib/reloc.c
> +++ b/arch/riscv/lib/reloc.c
> @@ -75,3 +75,58 @@ void relocate_to_current_adr(void)
>
> sync_caches_for_execution();
> }
> +
> +#if __riscv_xlen == 64
> +
> +/*
> + * Apply RISC-V 64-bit ELF relocations
> + */
> +int elf_apply_relocations(struct elf_image *elf, const void *dyn_seg)
> +{
> + Elf64_Rela *rela;
If you drop this and use only rel_ptr, this can be made a single
function for both platforms.
Cheers,
Ahmad
> + void *rel_ptr;
> + u64 relasz;
> + phys_addr_t base = (phys_addr_t)elf->reloc_offset;
> + int ret;
> +
> + ret = elf_parse_dynamic_section_rela(elf, dyn_seg, &rel_ptr, &relasz);
> + if (ret)
> + return ret;
> +
> + rela = (Elf64_Rela *)rel_ptr;
> +
> + relocate_image(base, rela, (void *)rela + relasz, NULL, NULL);
> +
> + return 0;
> +}
> +
> +#else /* 32-bit RISC-V */
> +
> +/*
> + * Apply RISC-V 32-bit ELF relocations
> + */
> +int elf_apply_relocations(struct elf_image *elf, const void *dyn_seg)
> +{
> + Elf32_Rela *rela;
> + void *rel_ptr;
> + u64 relasz;
> + phys_addr_t base = (phys_addr_t)elf->reloc_offset;
> + int ret;
> +
> + if (elf->class != ELFCLASS32) {
> + pr_err("Wrong ELF class for RISC-V 32 relocation\n");
> + return -EINVAL;
> + }
> +
> + ret = elf_parse_dynamic_section_rela(elf, dyn_seg, &rel_ptr, &relasz);
> + if (ret)
> + return ret;
> +
> + rela = (Elf32_Rela *)rel_ptr;
> +
> + relocate_image(base, rela, (void *)rela + relasz, NULL, NULL);
> +
> + return 0;
> +}
> +
> +#endif
>
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 05/21] riscv: define generic relocate_image
2026-01-06 13:10 ` Ahmad Fatoum
@ 2026-01-06 13:11 ` Sascha Hauer
0 siblings, 0 replies; 46+ messages in thread
From: Sascha Hauer @ 2026-01-06 13:11 UTC (permalink / raw)
To: Ahmad Fatoum; +Cc: BAREBOX, Claude Sonnet 4.5
On Tue, Jan 06, 2026 at 02:10:02PM +0100, Ahmad Fatoum wrote:
> Hi,
>
> On 1/6/26 1:53 PM, Sascha Hauer wrote:
> > For use by the ELF loader in PBL to relocate barebox proper, export a
> > new relocate_image capable of relocating barebox and implement
> > relocate_to_current_adr() in terms of it.
> >
> > Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
>
> Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
>
> > - dstart = runtime_address(__rel_dyn_start);
> > - dend = runtime_address(__rel_dyn_end);
> > - dynsym = runtime_address(__dynsym_start);
>
> Interestingly dynend wasn't used here and no memset, unlike for ARM and
> also it needs dynsym even with RELA...
Yes, as said, the memset is only needed for multiple relocations of the
same image. Likely nobody trapped into this on RiscV.
Sascha
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 20/21] riscv: Allwinner D1: Drop M-Mode
2026-01-06 12:53 ` [PATCH v2 20/21] riscv: Allwinner D1: Drop M-Mode Sascha Hauer
@ 2026-01-06 13:11 ` Ahmad Fatoum
0 siblings, 0 replies; 46+ messages in thread
From: Ahmad Fatoum @ 2026-01-06 13:11 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5, Marco Felsch
On 1/6/26 1:53 PM, Sascha Hauer wrote:
> The Allwinner D1 selects both RISCV_M_MODE and RISCV_S_MODE. The board
> code uses barebox_riscv_machine_entry() and not barebox_riscv_machine_entry()
> which indicates RISCV_M_MODE was only selected by accident. Remove it.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
> ---
> arch/riscv/Kconfig.socs | 1 -
> 1 file changed, 1 deletion(-)
>
> diff --git a/arch/riscv/Kconfig.socs b/arch/riscv/Kconfig.socs
> index 4a3b56b5fff48c86901ed0346be490a6847ac14e..0d9984dd2888e6cab81939e3ee97ef83851362a0 100644
> --- a/arch/riscv/Kconfig.socs
> +++ b/arch/riscv/Kconfig.socs
> @@ -123,7 +123,6 @@ if SOC_ALLWINNER_SUN20I
> config BOARD_ALLWINNER_D1
> bool "Allwinner D1 Nezha"
> select RISCV_S_MODE
> - select RISCV_M_MODE
> def_bool y
>
> endif
>
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 09/21] Makefile: add barebox.elf build target
2026-01-06 12:53 ` [PATCH v2 09/21] Makefile: add barebox.elf build target Sascha Hauer
@ 2026-01-06 13:13 ` Ahmad Fatoum
0 siblings, 0 replies; 46+ messages in thread
From: Ahmad Fatoum @ 2026-01-06 13:13 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5
Hi,
On 1/6/26 1:53 PM, Sascha Hauer wrote:
> Add a build target to create barebox.elf, which provides an ELF format
> version of barebox that can be used for debugging or alternative boot
> scenarios.
I don't find my feedback re: different name for barebox.elf (e.g.
vmbarebox) and use of --strip-all/--strip-section-headers addressed here.
Cheers,
Ahmad
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
> Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
> ---
> Makefile | 13 ++++++++++++-
> 1 file changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/Makefile b/Makefile
> index 3b31cecc22c431a063b8d2d3c387da487b698e74..aa12b385c779512fe20d33792f4968fed2eec29a 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -853,6 +853,9 @@ all: barebox-flash-images
> endif
>
> all: $(symlink-y)
> +ifeq ($(CONFIG_PBL_IMAGE)-$(CONFIG_PBL_IMAGE_NO_PIGGY),y-)
> +all: barebox.elf
> +endif
>
> .SECONDEXPANSION:
> $(symlink-y): $$(or $$(SYMLINK_DEP_$$(@F)),$$(SYMLINK_TARGET_$$(@F))) FORCE
> @@ -1096,6 +1099,14 @@ barebox.fit: images/barebox-$(CONFIG_ARCH_LINUX_NAME).fit
> barebox.srec: barebox
> $(OBJCOPY) -O srec $< $@
>
> +OBJCOPYFLAGS_barebox.elf = --strip-debug --strip-unneeded \
> + --remove-section=.comment \
> + --remove-section=.note* \
> + --remove-section=.gnu.hash
> +
> +barebox.elf: barebox FORCE
> + $(call if_changed,objcopy)
> +
> quiet_cmd_barebox_proper__ = CC $@
> cmd_barebox_proper__ = $(CC) -r -o $@ -Wl,--whole-archive $(BAREBOX_OBJS)
>
> @@ -1378,7 +1389,7 @@ CLEAN_FILES += barebox System.map include/generated/barebox_default_env.h \
> .tmp_version .tmp_barebox* barebox.bin barebox.map \
> .tmp_kallsyms* compile_commands.json \
> .tmp_barebox.o barebox.o barebox-flash-image \
> - barebox.srec barebox.efi
> + barebox.srec barebox.efi barebox.elf
>
> CLEAN_FILES += scripts/bareboxenv-target scripts/kernel-install-target \
> scripts/bareboxcrc32-target scripts/bareboximd-target \
>
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 11/21] mmu: add MAP_CACHED_RO mapping type
2026-01-06 12:53 ` [PATCH v2 11/21] mmu: add MAP_CACHED_RO mapping type Sascha Hauer
@ 2026-01-06 13:14 ` Ahmad Fatoum
0 siblings, 0 replies; 46+ messages in thread
From: Ahmad Fatoum @ 2026-01-06 13:14 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5
On 1/6/26 1:53 PM, Sascha Hauer wrote:
> ARM32 and ARM64 have ARCH_MAP_CACHED_RO. We'll move parts of the MMU
> initialization to generic code later, so add a new mapping type to
> include/mmu.h.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
> ---
> arch/arm/cpu/mmu-common.c | 4 ++--
> arch/arm/cpu/mmu-common.h | 3 +--
> arch/arm/cpu/mmu_32.c | 4 ++--
> arch/arm/cpu/mmu_64.c | 2 +-
> include/mmu.h | 3 ++-
> 5 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm/cpu/mmu-common.c b/arch/arm/cpu/mmu-common.c
> index a1431c0ff46112552d2919269cc8a7a66d7a20c1..67317f127cadb138cc2e85bb18c92ab47bc1206f 100644
> --- a/arch/arm/cpu/mmu-common.c
> +++ b/arch/arm/cpu/mmu-common.c
> @@ -22,7 +22,7 @@ const char *map_type_tostr(maptype_t map_type)
>
> switch (map_type) {
> case ARCH_MAP_CACHED_RWX: return "RWX";
> - case ARCH_MAP_CACHED_RO: return "RO";
> + case MAP_CACHED_RO: return "RO";
> case MAP_CACHED: return "CACHED";
> case MAP_UNCACHED: return "UNCACHED";
> case MAP_CODE: return "CODE";
> @@ -158,7 +158,7 @@ static void mmu_remap_memory_banks(void)
> }
>
> remap_range((void *)code_start, code_size, MAP_CODE);
> - remap_range((void *)rodata_start, rodata_size, ARCH_MAP_CACHED_RO);
> + remap_range((void *)rodata_start, rodata_size, MAP_CACHED_RO);
>
> setup_trap_pages();
> }
> diff --git a/arch/arm/cpu/mmu-common.h b/arch/arm/cpu/mmu-common.h
> index a111e15a21b479b5ffa2ea8973e2ad189e531925..b42c421ffde8ebba84b17c6311b735f7759dc69b 100644
> --- a/arch/arm/cpu/mmu-common.h
> +++ b/arch/arm/cpu/mmu-common.h
> @@ -12,7 +12,6 @@
> #include <linux/bits.h>
>
> #define ARCH_MAP_CACHED_RWX MAP_ARCH(2)
> -#define ARCH_MAP_CACHED_RO MAP_ARCH(3)
>
> #define ARCH_MAP_FLAG_PAGEWISE BIT(31)
>
> @@ -32,7 +31,7 @@ static inline maptype_t arm_mmu_maybe_skip_permissions(maptype_t map_type)
> switch (map_type & MAP_TYPE_MASK) {
> case MAP_CODE:
> case MAP_CACHED:
> - case ARCH_MAP_CACHED_RO:
> + case MAP_CACHED_RO:
> return ARCH_MAP_CACHED_RWX;
> default:
> return map_type;
> diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
> index 912d14e8cf82afcfd1800e4e11503899e10ccbbc..71ead41c3d274548c9427c1ce9833de309114c4d 100644
> --- a/arch/arm/cpu/mmu_32.c
> +++ b/arch/arm/cpu/mmu_32.c
> @@ -304,7 +304,7 @@ static uint32_t get_pte_flags(maptype_t map_type)
> switch (map_type & MAP_TYPE_MASK) {
> case ARCH_MAP_CACHED_RWX:
> return PTE_FLAGS_CACHED_V7_RWX;
> - case ARCH_MAP_CACHED_RO:
> + case MAP_CACHED_RO:
> return PTE_FLAGS_CACHED_RO_V7;
> case MAP_CACHED:
> return PTE_FLAGS_CACHED_V7;
> @@ -320,7 +320,7 @@ static uint32_t get_pte_flags(maptype_t map_type)
> }
> } else {
> switch (map_type & MAP_TYPE_MASK) {
> - case ARCH_MAP_CACHED_RO:
> + case MAP_CACHED_RO:
> case MAP_CODE:
> return PTE_FLAGS_CACHED_RO_V4;
> case ARCH_MAP_CACHED_RWX:
> diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
> index 56c6a21f2b2a8d8300fd6dbfaaf36a54d264a0f3..ddf1373ec0a801baad043146187d7f4c3eac6a2a 100644
> --- a/arch/arm/cpu/mmu_64.c
> +++ b/arch/arm/cpu/mmu_64.c
> @@ -159,7 +159,7 @@ static unsigned long get_pte_attrs(maptype_t map_type)
> return attrs_xn() | MEM_ALLOC_WRITECOMBINE;
> case MAP_CODE:
> return CACHED_MEM | PTE_BLOCK_RO;
> - case ARCH_MAP_CACHED_RO:
> + case MAP_CACHED_RO:
> return attrs_xn() | CACHED_MEM | PTE_BLOCK_RO;
> case ARCH_MAP_CACHED_RWX:
> return CACHED_MEM;
> diff --git a/include/mmu.h b/include/mmu.h
> index f79619808829532ed05f018b982e4bc76bca72a4..9f582f25e1de14d47cfe2eff64f9cce81c4e492d 100644
> --- a/include/mmu.h
> +++ b/include/mmu.h
> @@ -9,9 +9,10 @@
> #define MAP_CACHED 1
> #define MAP_FAULT 2
> #define MAP_CODE 3
> +#define MAP_CACHED_RO 4
>
> #ifdef CONFIG_ARCH_HAS_DMA_WRITE_COMBINE
> -#define MAP_WRITECOMBINE 4
> +#define MAP_WRITECOMBINE 5
> #else
> #define MAP_WRITECOMBINE MAP_UNCACHED
> #endif
>
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 12/21] mmu: introduce pbl_remap_range()
2026-01-06 12:53 ` [PATCH v2 12/21] mmu: introduce pbl_remap_range() Sascha Hauer
@ 2026-01-06 13:15 ` Ahmad Fatoum
0 siblings, 0 replies; 46+ messages in thread
From: Ahmad Fatoum @ 2026-01-06 13:15 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5
Hi,
On 1/6/26 1:53 PM, Sascha Hauer wrote:
> Add PBL-specific memory remapping function that always uses page-wise
> mapping (ARCH_MAP_FLAG_PAGEWISE) for fine-grained permissions on
> adjacent ELF segments with different protection requirements.
>
> Wraps arch-specific __arch_remap_range() for ARMv7 (4KB pages) and
> ARMv8 (page tables with BBM). Needed for ELF segment permission setup.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
The verdict was that we shouldn't need PAGEWISE here and in that case,
we could just use normal remap_range function, right?
Thanks,
Ahmad
> ---
> arch/arm/cpu/mmu_32.c | 7 +++++++
> arch/arm/cpu/mmu_64.c | 8 ++++++++
> include/mmu.h | 3 +++
> 3 files changed, 18 insertions(+)
>
> diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
> index 71ead41c3d274548c9427c1ce9833de309114c4d..54371e3e5973c9fe98cadef63499105134e09539 100644
> --- a/arch/arm/cpu/mmu_32.c
> +++ b/arch/arm/cpu/mmu_32.c
> @@ -435,6 +435,13 @@ static void early_remap_range(u32 addr, size_t size, maptype_t map_type)
> __arch_remap_range((void *)addr, addr, size, map_type);
> }
>
> +void pbl_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size,
> + maptype_t map_type)
> +{
> + __arch_remap_range(virt_addr, phys_addr, size,
> + map_type | ARCH_MAP_FLAG_PAGEWISE);
> +}
> +
> static bool pte_is_cacheable(uint32_t pte, int level)
> {
> return (level == 2 && (pte & PTE_CACHEABLE)) ||
> diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
> index ddf1373ec0a801baad043146187d7f4c3eac6a2a..d4d6d1a849ba9b8666f4dcb4bd80247a70bdce1a 100644
> --- a/arch/arm/cpu/mmu_64.c
> +++ b/arch/arm/cpu/mmu_64.c
> @@ -282,6 +282,14 @@ static void early_remap_range(uint64_t addr, size_t size, maptype_t map_type)
> __arch_remap_range(addr, addr, size, map_type, false);
> }
>
> +void pbl_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size,
> + maptype_t map_type)
> +{
> + __arch_remap_range((uint64_t)virt_addr, phys_addr,
> + (uint64_t)size, map_type | ARCH_MAP_FLAG_PAGEWISE,
> + true);
> +}
> +
> int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size, maptype_t map_type)
> {
> map_type = arm_mmu_maybe_skip_permissions(map_type);
> diff --git a/include/mmu.h b/include/mmu.h
> index 9f582f25e1de14d47cfe2eff64f9cce81c4e492d..32d9a7aca3b9a61d542bf3e21e27f1ac51f43ee2 100644
> --- a/include/mmu.h
> +++ b/include/mmu.h
> @@ -65,6 +65,9 @@ static inline bool arch_can_remap(void)
> }
> #endif
>
> +void pbl_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size,
> + maptype_t map_type);
> +
> static inline int remap_range(void *start, size_t size, maptype_t map_type)
> {
> return arch_remap_range(start, virt_to_phys(start), size, map_type);
>
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 10/21] PBL: allow to link ELF image into PBL
2026-01-06 12:53 ` [PATCH v2 10/21] PBL: allow to link ELF image into PBL Sascha Hauer
@ 2026-01-06 13:18 ` Ahmad Fatoum
0 siblings, 0 replies; 46+ messages in thread
From: Ahmad Fatoum @ 2026-01-06 13:18 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5
Hi,
On 1/6/26 1:53 PM, Sascha Hauer wrote:
> Some architectures want to link the barebox proper ELF image into the
> PBL. Allow that and provide a Kconfig option to select the ELF image.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
> ---
> Makefile | 16 ++++++++++------
> images/Makefile | 2 +-
> pbl/Kconfig | 9 +++++++++
> 3 files changed, 20 insertions(+), 7 deletions(-)
>
> diff --git a/Makefile b/Makefile
> index aa12b385c779512fe20d33792f4968fed2eec29a..d1e6e3e418e4875b0fdee2e156acecdbbd5e3c9a 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -831,7 +831,11 @@ export KBUILD_BINARY ?= barebox.bin
> # Also any assignments in arch/$(SRCARCH)/Makefile take precedence over
> # the default value.
>
> +ifeq ($(CONFIG_PBL_IMAGE_ELF),y)
> +export BAREBOX_PROPER ?= vmbarebox
> +else
> export BAREBOX_PROPER ?= barebox.bin
> +endif
Should be squashed into previous commit.
>
> barebox-flash-images: $(KBUILD_IMAGE)
> @echo $^ > $@
> @@ -854,7 +858,7 @@ endif
>
> all: $(symlink-y)
> ifeq ($(CONFIG_PBL_IMAGE)-$(CONFIG_PBL_IMAGE_NO_PIGGY),y-)
> -all: barebox.elf
> +all: vmbarebox
Shouldn't be needed, there's already
ifdef CONFIG_PBL_IMAGE
all: $(BAREBOX_PROPER) images
> endif
>
> .SECONDEXPANSION:
> @@ -1099,12 +1103,12 @@ barebox.fit: images/barebox-$(CONFIG_ARCH_LINUX_NAME).fit
> barebox.srec: barebox
> $(OBJCOPY) -O srec $< $@
>
> -OBJCOPYFLAGS_barebox.elf = --strip-debug --strip-unneeded \
> - --remove-section=.comment \
> - --remove-section=.note* \
> - --remove-section=.gnu.hash
> +OBJCOPYFLAGS_vmbarebox = --strip-debug --strip-unneeded \
> + --remove-section=.comment \
> + --remove-section=.note* \
> + --remove-section=.gnu.hash
>
> -barebox.elf: barebox FORCE
> +vmbarebox: barebox FORCE
> $(call if_changed,objcopy)
Ok. I can add a patch later with --strip-section-headers.
> quiet_cmd_barebox_proper__ = CC $@
> diff --git a/images/Makefile b/images/Makefile
> index ebbf57b463558724c97a6b4ca65c33f80dad253b..dd1be18eeef1a4ef42d3b9dc4b9c4f5d22c10d1a 100644
> --- a/images/Makefile
> +++ b/images/Makefile
> @@ -231,7 +231,7 @@ ifneq ($(pblx-y)$(pblx-),)
> $(error pblx- has been removed. Please use pblb- instead.)
> endif
>
> -targets += $(image-y) pbl.lds barebox.x barebox.z piggy.o sha_sum.o barebox.sha.bin barebox.sum
> +targets += $(image-y) pbl.lds barebox.x barebox.z barebox.elf.z piggy.o sha_sum.o barebox.sha.bin barebox.sum
barebox.elf.z not needed.
Cheers,
Ahmad
> targets += $(patsubst %,%.pblb,$(pblb-y))
> targets += $(patsubst %,%.pbl,$(pblb-y))
> targets += $(patsubst %,%.s,$(pblb-y))
> diff --git a/pbl/Kconfig b/pbl/Kconfig
> index cab9325d16e8625bcca10125b3281062abffedbc..63f29cd6135926c48b355b80fc7a123b90098c20 100644
> --- a/pbl/Kconfig
> +++ b/pbl/Kconfig
> @@ -21,6 +21,15 @@ config PBL_IMAGE_NO_PIGGY
> want to use the piggy mechanism to load barebox proper.
> It's so far only intended for sandbox.
>
> +config PBL_IMAGE_ELF
> + bool
> + depends on PBL_IMAGE
> + select ELF
> + help
> + If yes, link ELF image into the PBL, otherwise a raw binary
> + is linked into the PBL. This must match the loader code in the
> + PBL.
> +
> config PBL_MULTI_IMAGES
> bool
> select PBL_IMAGE
>
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 02/21] elf: build for PBL as well
2026-01-06 12:53 ` [PATCH v2 02/21] elf: build for PBL as well Sascha Hauer
@ 2026-01-06 13:26 ` Ahmad Fatoum
0 siblings, 0 replies; 46+ messages in thread
From: Ahmad Fatoum @ 2026-01-06 13:26 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5
On 1/6/26 1:53 PM, Sascha Hauer wrote:
> We'll link barebox proper as an ELF image into the PBL later, so compile
> ELF support for PBL as well.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
As we have a separate CONFIG_ARM_BOOTM_ELF, we should end up discarding
ELF support code in barebox proper when it's disabled, so:
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
> ---
> common/Makefile | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/common/Makefile b/common/Makefile
> index 36dee5f7a98a7b2bbf67263ee5942520e6ce53e0..b1170ea4a801de2e7d0279edaddbf6f370a053c1 100644
> --- a/common/Makefile
> +++ b/common/Makefile
> @@ -14,7 +14,7 @@ obj-y += misc.o
> obj-pbl-y += memsize.o
> obj-y += resource.o
> obj-pbl-y += bootsource.o
> -obj-$(CONFIG_ELF) += elf.o
> +obj-pbl-$(CONFIG_ELF) += elf.o
> obj-y += restart.o
> obj-y += poweroff.o
> obj-y += slice.o
>
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 03/21] elf: add dynamic relocation support
2026-01-06 12:53 ` [PATCH v2 03/21] elf: add dynamic relocation support Sascha Hauer
@ 2026-01-06 13:51 ` Ahmad Fatoum
0 siblings, 0 replies; 46+ messages in thread
From: Ahmad Fatoum @ 2026-01-06 13:51 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5
Hi,
On 1/6/26 1:53 PM, Sascha Hauer wrote:
> Add support for applying dynamic relocations to ELF binaries. This allows
> loading ET_DYN (position-independent) binaries and ET_EXEC binaries at
> custom load addresses.
>
> Key changes:
> - Add elf_image.reloc_offset to track offset between vaddr and load address
> - Implement elf_compute_load_offset() to calculate relocation offset
> - Add elf_set_load_address() API to specify custom load address
> - Implement elf_find_dynamic_segment() to locate PT_DYNAMIC
> - Add elf_relocate() to apply relocations
> - Provide weak default elf_apply_relocations() stub for unsupported architectures
> - Add ELF dynamic section accessors
>
> The relocation offset type is unsigned long to properly handle pointer
> arithmetic and avoid casting issues.
>
> Architecture-specific implementations should override the weak
> elf_apply_relocations() function to handle their relocation types.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
> Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Wrong way round ^
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
But some nitpicks and a question below.
> +static void *elf_get_dest(struct elf_image *elf, void *phdr)
How about elf_phdr_relocated_paddr as a more descriptive name?
> + if (elf->load_address) {
> + elf->base_load_addr = elf->load_address;
> + } else if (elf->type == ET_EXEC) {
> + elf->base_load_addr = NULL;
> + } else {
> + elf->base_load_addr = (void *)(phys_addr_t)min_paddr;
> + }
Curly braces can be removed.
> + if (elf->type == ET_EXEC && !elf->load_address) {
> + elf->reloc_offset = 0;
> + } else {
> + elf->reloc_offset = ((unsigned long)elf->base_load_addr - min_vaddr);
> + }
Ditto.
> +void elf_set_load_address(struct elf_image *elf, void *addr)
> +{
> + elf->load_address = addr;
> +}
> +
> +static void *elf_find_dynamic_segment(struct elf_image *elf)
> +{
> + void *buf = elf->hdr_buf;
> + void *phdr = buf + elf_hdr_e_phoff(elf, buf);
> + int i;
> +
> + for (i = 0; i < elf_hdr_e_phnum(elf, buf); i++) {
> + if (elf_phdr_p_type(elf, phdr) == PT_DYNAMIC) {
> + u64 offset = elf_phdr_p_offset(elf, phdr);
> +
> + /* If loaded from file, PT_DYNAMIC might not be in hdr_buf */
> + if (elf->filename)
> + return elf_get_dest(elf, phdr);
> + else
> + /* Binary in memory, use offset */
> + return elf->hdr_buf + offset;
Does elf_get_dest(elf, phdr) not compute the same address that we would
get here?
> + /* Check that we found exactly one relocation type */
> + if (rel && rela) {
> + pr_err("ELF has both REL and RELA relocations\n");
> + return -EINVAL;
> + }
Future work could be checking these things at compile time and skipping
the checks in PBL.
> diff --git a/lib/Makefile b/lib/Makefile
> index 6d259dd94e163336d7fdab38c7b74b301aabc5c5..da2c8ffe1dbf512a901295c89494e0837f31a0d9 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -24,6 +24,7 @@ obj-y += readkey.o
> obj-y += kfifo.o
> obj-y += libbb.o
> obj-y += libgen.o
> +obj-y += elf_reloc.o
Why is this not obj-pbl-y?
Cheers,
Ahmad
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 07/21] elf: implement elf_load_inplace()
2026-01-06 12:53 ` [PATCH v2 07/21] elf: implement elf_load_inplace() Sascha Hauer
@ 2026-01-06 13:53 ` Ahmad Fatoum
0 siblings, 0 replies; 46+ messages in thread
From: Ahmad Fatoum @ 2026-01-06 13:53 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5
Hi,
On 1/6/26 1:53 PM, Sascha Hauer wrote:
> For ET_DYN (position-independent) binaries, the relocation offset is
> calculated relative to the first executable PT_LOAD segment (.text section),
> taking into account the difference between the segment's virtual address
> and its file offset.
Same review feedback as for v1: I believe it's the lowest load address
with no special significance to the text section.
Apart from that:
Acked-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Thanks,
Ahmad
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 08/21] elf: create elf_open_binary_into()
2026-01-06 12:53 ` [PATCH v2 08/21] elf: create elf_open_binary_into() Sascha Hauer
@ 2026-01-06 13:55 ` Ahmad Fatoum
0 siblings, 0 replies; 46+ messages in thread
From: Ahmad Fatoum @ 2026-01-06 13:55 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5
Hi,
On 1/6/26 1:53 PM, Sascha Hauer wrote:
> elf_open_binary() returns a dynamically allocated struct elf_image *. We
> do not have malloc in the PBL, so for better PBL support create
> elf_open_binary_into() which takes a struct elf_image * as argument.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
> Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Thanks,
Ahmad
> ---
> common/elf.c | 26 +++++++++++++++++++-------
> include/elf.h | 1 +
> 2 files changed, 20 insertions(+), 7 deletions(-)
>
> diff --git a/common/elf.c b/common/elf.c
> index b3cc9b59644aa6e6e029fa44988dbdc0787c42f6..d51c99192f327abc0eb4ca68109bc88c6a451d94 100644
> --- a/common/elf.c
> +++ b/common/elf.c
> @@ -318,6 +318,23 @@ static void elf_init_struct(struct elf_image *elf)
> elf->filename = NULL;
> }
>
> +int elf_open_binary_into(struct elf_image *elf, void *buf)
> +{
> + int ret;
> +
> + memset(elf, 0, sizeof(*elf));
> + elf_init_struct(elf);
> +
> + elf->hdr_buf = buf;
> + ret = elf_check_image(elf, buf);
> + if (ret)
> + return ret;
> +
> + elf->entry = elf_hdr_e_entry(elf, elf->hdr_buf);
> +
> + return 0;
> +}
> +
> struct elf_image *elf_open_binary(void *buf)
> {
> int ret;
> @@ -327,17 +344,12 @@ struct elf_image *elf_open_binary(void *buf)
> if (!elf)
> return ERR_PTR(-ENOMEM);
>
> - elf_init_struct(elf);
> -
> - elf->hdr_buf = buf;
> - ret = elf_check_image(elf, buf);
> + ret = elf_open_binary_into(elf, buf);
> if (ret) {
> free(elf);
> - return ERR_PTR(-EINVAL);
> + return ERR_PTR(ret);
> }
>
> - elf->entry = elf_hdr_e_entry(elf, elf->hdr_buf);
> -
> return elf;
> }
>
> diff --git a/include/elf.h b/include/elf.h
> index 84b98553cff69257f8c2fbfb2e7504a4400e0f53..64574cb4836cd67f54538ad1621d7fc5fbf58f92 100644
> --- a/include/elf.h
> +++ b/include/elf.h
> @@ -410,6 +410,7 @@ static inline size_t elf_get_mem_size(struct elf_image *elf)
> return elf->high_addr - elf->low_addr;
> }
>
> +int elf_open_binary_into(struct elf_image *elf, void *buf);
> struct elf_image *elf_open_binary(void *buf);
> struct elf_image *elf_open(const char *filename);
> void elf_close(struct elf_image *elf);
>
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 13/21] ARM: use relative jumps in exception table
2026-01-06 12:53 ` [PATCH v2 13/21] ARM: use relative jumps in exception table Sascha Hauer
@ 2026-01-06 13:57 ` Ahmad Fatoum
0 siblings, 0 replies; 46+ messages in thread
From: Ahmad Fatoum @ 2026-01-06 13:57 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5
On 1/6/26 1:53 PM, Sascha Hauer wrote:
> Create position-independent exception vectors using relative branches
> instead of absolute addresses. This works on ARMv7 onwards which
> supports setting the address of the exception vectors.
>
> New .text_inplace_exceptions section contains PC-relative branches,
> enabling barebox proper to start with MMU already configured using
> ELF segment addresses.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
> ---
> arch/arm/cpu/exceptions_32.S | 20 ++++++++++++++++++++
> arch/arm/cpu/interrupts_32.c | 7 ++-----
> arch/arm/cpu/no-mmu.c | 11 +----------
> arch/arm/include/asm/sections.h | 1 +
> arch/arm/lib/pbl.lds.S | 6 +++---
> arch/arm/lib32/barebox.lds.S | 4 ++++
> 6 files changed, 31 insertions(+), 18 deletions(-)
>
> diff --git a/arch/arm/cpu/exceptions_32.S b/arch/arm/cpu/exceptions_32.S
> index dc3d42663cbedd37947e27d449eb4ac8c3d8c3f1..85ee4ca3fd4e5aff8e1e02aa3d7375173a923072 100644
> --- a/arch/arm/cpu/exceptions_32.S
> +++ b/arch/arm/cpu/exceptions_32.S
> @@ -155,6 +155,26 @@ ENTRY(arm_fixup_vectors)
> ENDPROC(arm_fixup_vectors)
> #endif
>
> +.section .text_inplace_exceptions
> +1: b 1b /* barebox_arm_reset_vector */
> +#ifdef CONFIG_ARM_EXCEPTIONS
> + b undefined_instruction /* undefined instruction */
> + b software_interrupt /* software interrupt (SWI) */
> + b prefetch_abort /* prefetch abort */
> + b data_abort /* data abort */
> +1: b 1b /* (reserved) */
> + b irq /* irq (interrupt) */
> + b fiq /* fiq (fast interrupt) */
> +#else
> +1: b 1b /* undefined instruction */
> +1: b 1b /* software interrupt (SWI) */
> +1: b 1b /* prefetch abort */
> +1: b 1b /* data abort */
> +1: b 1b /* (reserved) */
> +1: b 1b /* irq (interrupt) */
> +1: b 1b /* fiq (fast interrupt) */
> +#endif
> +
> .section .text_exceptions
> .globl extable
> extable:
> diff --git a/arch/arm/cpu/interrupts_32.c b/arch/arm/cpu/interrupts_32.c
> index 0b88db10fe487378fe08018701bc672f63139fc1..5dc4802fafbfdbcd54707b84835f40177e10742b 100644
> --- a/arch/arm/cpu/interrupts_32.c
> +++ b/arch/arm/cpu/interrupts_32.c
> @@ -231,10 +231,8 @@ static __maybe_unused int arm_init_vectors(void)
> * First try to use the vectors where they actually are, works
> * on ARMv7 and later.
> */
> - if (!set_vector_table((unsigned long)__exceptions_start)) {
> - arm_fixup_vectors();
> + if (!set_vector_table((unsigned long)__inplace_exceptions_start))
> return 0;
> - }
>
> /*
> * Next try high vectors at 0xffff0000.
> @@ -264,7 +262,6 @@ void arm_pbl_init_exceptions(void)
> if (cpu_architecture() < CPU_ARCH_ARMv7)
> return;
>
> - set_vbar((unsigned long)__exceptions_start);
> - arm_fixup_vectors();
> + set_vbar((unsigned long)__inplace_exceptions_start);
> }
> #endif
> diff --git a/arch/arm/cpu/no-mmu.c b/arch/arm/cpu/no-mmu.c
> index c4ef5d1f9d55136d606c244309dbeeb8fd988784..68246d71156c7c84b9faff452cebb37132b83573 100644
> --- a/arch/arm/cpu/no-mmu.c
> +++ b/arch/arm/cpu/no-mmu.c
> @@ -21,8 +21,6 @@
> #include <asm/sections.h>
> #include <asm/cputype.h>
>
> -#define __exceptions_size (__exceptions_stop - __exceptions_start)
> -
> static bool has_vbar(void)
> {
> u32 mainid;
> @@ -41,7 +39,6 @@ static bool has_vbar(void)
>
> static int nommu_v7_vectors_init(void)
> {
> - void *vectors;
> u32 cr;
>
> if (cpu_architecture() < CPU_ARCH_ARMv7)
> @@ -58,13 +55,7 @@ static int nommu_v7_vectors_init(void)
> cr &= ~CR_V;
> set_cr(cr);
>
> - arm_fixup_vectors();
> -
> - vectors = xmemalign(PAGE_SIZE, PAGE_SIZE);
> - memset(vectors, 0, PAGE_SIZE);
> - memcpy(vectors, __exceptions_start, __exceptions_size);
> -
> - set_vbar((unsigned int)vectors);
> + set_vbar((unsigned int)__inplace_exceptions_start);
>
> return 0;
> }
> diff --git a/arch/arm/include/asm/sections.h b/arch/arm/include/asm/sections.h
> index 15b1a6482a5b148284ab47de2db1c2653909da09..bf4fb7b109a7a22d9a298257af23a11b9efe6861 100644
> --- a/arch/arm/include/asm/sections.h
> +++ b/arch/arm/include/asm/sections.h
> @@ -13,6 +13,7 @@ extern char __dynsym_start[];
> extern char __dynsym_end[];
> extern char __exceptions_start[];
> extern char __exceptions_stop[];
> +extern char __inplace_exceptions_start[];
>
> #endif
>
> diff --git a/arch/arm/lib/pbl.lds.S b/arch/arm/lib/pbl.lds.S
> index 9c51f5eb3a3d8256752a78e03fed851c84d92edb..53b21084cff2e3d916cd37485281f2f78166c37d 100644
> --- a/arch/arm/lib/pbl.lds.S
> +++ b/arch/arm/lib/pbl.lds.S
> @@ -53,9 +53,9 @@ SECTIONS
> *(.text_bare_init*)
> __bare_init_end = .;
> . = ALIGN(0x20);
> - __exceptions_start = .;
> - KEEP(*(.text_exceptions*))
> - __exceptions_stop = .;
> + __inplace_exceptions_start = .;
> + KEEP(*(.text_inplace_exceptions*))
> + __inplace_exceptions_stop = .;
> *(.text*)
> }
>
> diff --git a/arch/arm/lib32/barebox.lds.S b/arch/arm/lib32/barebox.lds.S
> index c704dd6d70f3ab157ceb67dfb14760e03f2a5d62..17e0970ba4989e5213ed38ea5ff87bdf5bfa2740 100644
> --- a/arch/arm/lib32/barebox.lds.S
> +++ b/arch/arm/lib32/barebox.lds.S
> @@ -26,6 +26,10 @@ SECTIONS
> __exceptions_start = .;
> KEEP(*(.text_exceptions*))
> __exceptions_stop = .;
> + . = ALIGN(0x20);
> + __inplace_exceptions_start = .;
> + KEEP(*(.text_inplace_exceptions*))
> + __inplace_exceptions_stop = .;
> *(.text*)
> }
> BAREBOX_BARE_INIT_SIZE
>
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 14/21] ARM: exceptions: make in-binary exception table const
2026-01-06 12:53 ` [PATCH v2 14/21] ARM: exceptions: make in-binary exception table const Sascha Hauer
@ 2026-01-06 14:00 ` Ahmad Fatoum
0 siblings, 0 replies; 46+ messages in thread
From: Ahmad Fatoum @ 2026-01-06 14:00 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5
On 1/6/26 1:53 PM, Sascha Hauer wrote:
> When making the .text section const we can no longer modify the code.
> On ARMv5/v6 we finally use a copy of the exception table anyway, so
> instead of modifying it in-place and copy afterwards, copy it first and
> then modify it.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
> +/* This struct must match the assembly exception table in exceptions_32.S */
One could add offset definitions to arch/arm/lib/asm-offsets.c and reuse
it this way, but I am fine duplicating it.
> #ifdef CONFIG_ARM_EXCEPTIONS
> -void arm_fixup_vectors(void);
> +void arm_fixup_vectors(void *table);
> ulong arm_get_vector_table(void);
> #else
> -static inline void arm_fixup_vectors(void)
> +static inline void arm_fixup_vectors(void *table)
> {
> }
I am still wondering why the assemble code couldn't emit relocations,
but let's get this over with for now.
Cheers,
Ahmad
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 15/21] ARM: linker script: create separate PT_LOAD segments for text, rodata, and data
2026-01-06 12:53 ` [PATCH v2 15/21] ARM: linker script: create separate PT_LOAD segments for text, rodata, and data Sascha Hauer
@ 2026-01-06 14:05 ` Ahmad Fatoum
0 siblings, 0 replies; 46+ messages in thread
From: Ahmad Fatoum @ 2026-01-06 14:05 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5
On 1/6/26 1:53 PM, Sascha Hauer wrote:
> Fix the linker scripts to generate three distinct PT_LOAD segments with
> correct permissions instead of combining .rodata with .data.
>
> Before this fix, the linker auto-generated only two PT_LOAD segments:
> 1. Text segment (PF_R|PF_X)
> 2. Data segment (PF_R|PF_W) - containing .rodata, .data, .bss, etc.
>
> This caused .rodata to be mapped with write permissions when
> pbl_mmu_setup_from_elf() set up MMU permissions based on ELF segments,
> defeating the W^X protection that commit d9ccb0cf14 intended to provide.
>
> With explicit PHDRS directives, we now generate three segments:
> 1. text segment (PF_R|PF_X): .text and related code sections
> 2. rodata segment (PF_R): .rodata and unwind tables
> 3. data segment (PF_R|PF_W): .data, .bss, and related sections
>
> This ensures pbl_mmu_setup_from_elf() correctly maps .rodata as
> read-only (MAP_CACHED_RO) instead of read-write (MAP_CACHED).
>
> 🤖 Generated with [Claude Code](https://claude.com/claude-code)
>
> Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
> ---
> arch/arm/include/asm/barebox.lds.h | 2 +-
> arch/arm/lib32/barebox.lds.S | 34 ++++++++++++++++++++++------------
> arch/arm/lib64/barebox.lds.S | 29 +++++++++++++++++++----------
> arch/riscv/lib/barebox.lds.S | 1 +
> 4 files changed, 43 insertions(+), 23 deletions(-)
>
> diff --git a/arch/arm/include/asm/barebox.lds.h b/arch/arm/include/asm/barebox.lds.h
> index 72aabe155b5c9e8b9159c7da6c6f0fa1f7b93375..7d1811645762a2ce1f58b7d86bed188a93fdb711 100644
> --- a/arch/arm/include/asm/barebox.lds.h
> +++ b/arch/arm/include/asm/barebox.lds.h
> @@ -16,7 +16,7 @@
>
> #define BAREBOX_RELOCATION_TABLE \
> .rel_dyn_start : { *(.__rel_dyn_start) } \
> - .BAREBOX_RELOCATION_TYPE.dyn : { *(.BAREBOX_RELOCATION_TYPE*) } \
> + .BAREBOX_RELOCATION_TYPE.dyn : { *(.BAREBOX_RELOCATION_TYPE*) } \
Stray change?
> .rel_dyn_end : { *(.__rel_dyn_end) } \
> .__dynsym_start : { *(.__dynsym_start) } \
> .dynsym : { *(.dynsym) } \
> diff --git a/arch/arm/lib32/barebox.lds.S b/arch/arm/lib32/barebox.lds.S
> index 17e0970ba4989e5213ed38ea5ff87bdf5bfa2740..e84118ee2f43b07bd82fcd9936398847d0a3b42f 100644
> --- a/arch/arm/lib32/barebox.lds.S
> +++ b/arch/arm/lib32/barebox.lds.S
> @@ -7,14 +7,23 @@
> OUTPUT_FORMAT(BAREBOX_OUTPUT_FORMAT)
> OUTPUT_ARCH(BAREBOX_OUTPUT_ARCH)
> ENTRY(start)
> +
> +PHDRS
> +{
> + text PT_LOAD FLAGS(5); /* PF_R | PF_X */
> + rodata PT_LOAD FLAGS(4); /* PF_R */
> + data PT_LOAD FLAGS(6); /* PF_R | PF_W */
> + dynamic PT_DYNAMIC FLAGS(6); /* PF_R | PF_W */
> +}
Why not map dynamic R/O as well, so it's merged with rodata?
> diff --git a/arch/riscv/lib/barebox.lds.S b/arch/riscv/lib/barebox.lds.S
> index 03b3a967193cfee1c67b96632cf972a553e8bec4..38376befe9a82ead2152f8c7fc581eb5bb35fab4 100644
> --- a/arch/riscv/lib/barebox.lds.S
> +++ b/arch/riscv/lib/barebox.lds.S
> @@ -16,6 +16,7 @@
> OUTPUT_ARCH(BAREBOX_OUTPUT_ARCH)
> ENTRY(start)
> OUTPUT_FORMAT(BAREBOX_OUTPUT_FORMAT)
> +
stray change.
Cheers,
Ahmad
> SECTIONS
> {
> . = 0x0;
>
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 16/21] ARM: link ELF image into PBL
2026-01-06 12:53 ` [PATCH v2 16/21] ARM: link ELF image into PBL Sascha Hauer
@ 2026-01-06 14:06 ` Ahmad Fatoum
0 siblings, 0 replies; 46+ messages in thread
From: Ahmad Fatoum @ 2026-01-06 14:06 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5
Hi,
On 1/6/26 1:53 PM, Sascha Hauer wrote:
> Instead of linking the raw binary barebox proper image into the PBL link
> the ELF image into the PBL. With this barebox proper starts with a properly
> linked and fully initialized C environment, so the calls to
> relocate_to_adr() and setup_c() can be removed from barebox proper.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
with below issues addressed
> ---
> arch/arm/Kconfig | 2 ++
> arch/arm/cpu/start.c | 11 +++--------
> arch/arm/cpu/uncompress.c | 26 +++++++++++++++++++-------
> 3 files changed, 24 insertions(+), 15 deletions(-)
>
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index 5123e9b1402c56db94df6a7a33ae993c61d51fbc..c53c58844a9411e3777711db2900b0e01cf55eec 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -18,6 +18,8 @@ config ARM
> select HW_HAS_PCI
> select ARCH_HAS_DMA_WRITE_COMBINE
> select HAVE_EFI_LOADER if MMU # for payload unaligned accesses
> + select ELF
can be dropped.
> + select PBL_IMAGE_ELF
> default y
>
> config ARCH_LINUX_NAME
> diff --git a/arch/arm/cpu/start.c b/arch/arm/cpu/start.c
> index f7d4507e71588ba5e241b24b952d55e2a4b0f794..f7062380cd5d265b3326f247a99b1847e16d64f0 100644
> --- a/arch/arm/cpu/start.c
> +++ b/arch/arm/cpu/start.c
> @@ -127,8 +127,9 @@ static int barebox_memory_areas_init(void)
> }
> device_initcall(barebox_memory_areas_init);
>
> -__noreturn __prereloc void barebox_non_pbl_start(unsigned long membase,
> - unsigned long memsize, struct handoff_data *hd)
> +__noreturn void barebox_non_pbl_start(unsigned long membase,
> + unsigned long memsize,
> + struct handoff_data *hd)
The NAKED start below could also go away.
Cheers,
Ahmad
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 17/21] ARM: PBL: setup MMU with proper permissions from ELF segments
2026-01-06 12:53 ` [PATCH v2 17/21] ARM: PBL: setup MMU with proper permissions from ELF segments Sascha Hauer
@ 2026-01-06 14:10 ` Ahmad Fatoum
0 siblings, 0 replies; 46+ messages in thread
From: Ahmad Fatoum @ 2026-01-06 14:10 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5
Hi,
On 1/6/26 1:53 PM, Sascha Hauer wrote:
> Move complete MMU setup into PBL by leveraging ELF segment information
> to apply correct memory permissions before jumping to barebox proper.
>
> After ELF relocation, parse PT_LOAD segments and map each with
> permissions derived from p_flags:
> - Text segments (PF_R|PF_X): Read-only + executable (MAP_CODE)
> - Data segments (PF_R|PF_W): Read-write (MAP_CACHED)
> - RO data segments (PF_R): Read-only (ARCH_MAP_CACHED_RO)
>
> This ensures barebox proper starts with full W^X protection already
> in place, eliminating the need for complex remapping in barebox proper.
> The mmu_init() function now only sets up trap pages for exception
> handling.
>
> The framework is portable - common ELF parsing in pbl/mmu.c uses
> architecture-specific early_remap_range() exported from mmu_*.c.
>
> 🤖 Generated with [Claude Code](https://claude.com/claude-code)
The commit message is now outdated.
> @@ -138,6 +134,10 @@ static void mmu_remap_memory_banks(void)
> * all memory banks, so let's map all pages, excluding reserved memory areas
> * and barebox text area cacheable.
> *
> + * PBL has already set up the MMU with proper permissions for text and
> + * rodata based on ELF segment information, so we don't need to remap
> + * those here.
> + *
Yes, but the loop only skips over the text area. It needs to be adapted
to skip over the whole barebox image:
- unsigned long text_start = (unsigned long)&_stext;
- unsigned long text_end = (unsigned long)&_etext;
+ unsigned long image_start = (unsigned long)&__image_start;
+ unsigned long image_end = (unsigned long)PTR_ALIGN(&_end,
PAGE_SIZE);
> * This code will become much less complex once we switch over to using
> * CONFIG_MEMORY_ATTRIBUTES for MMU as well.
> */
> @@ -157,9 +157,7 @@ static void mmu_remap_memory_banks(void)
> remap_range_end_sans_text(pos, bank->res->end + 1, MAP_CACHED);
> }
>
> - remap_range((void *)code_start, code_size, MAP_CODE);
> - remap_range((void *)rodata_start, rodata_size, MAP_CACHED_RO);
> -
> + /* Do this while interrupt vectors are still writable */
> setup_trap_pages();
> }
>
> diff --git a/arch/arm/cpu/uncompress.c b/arch/arm/cpu/uncompress.c
> index 8cc7102290986e71d2f3a2f34df1a9f946c56ced..619bd8d5b0b56ab2704a0fa1e4964bb603b761d9 100644
> --- a/arch/arm/cpu/uncompress.c
> +++ b/arch/arm/cpu/uncompress.c
> @@ -21,6 +21,7 @@
> #include <asm/unaligned.h>
> #include <compressed-dtb.h>
> #include <elf.h>
> +#include <pbl/mmu.h>
>
> #include <debug_ll.h>
>
> @@ -105,6 +106,19 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
>
> pr_debug("ELF entry point: 0x%llx\n", elf.entry);
>
> + /*
> + * Now that the ELF image is relocated, we know the exact addresses
> + * of all segments. Set up MMU with proper permissions based on
> + * ELF segment flags (PF_R/W/X).
> + */
> + if (IS_ENABLED(CONFIG_MMU)) {
> + ret = pbl_mmu_setup_from_elf(&elf, membase, memsize);
> + if (ret) {
> + pr_err("Failed to setup MMU from ELF: %d\n", ret);
> + hang();
> + }
> + }
> +
> barebox = (void *)(unsigned long)elf.entry;
>
> handoff_data_move(handoff_data);
> diff --git a/include/pbl/mmu.h b/include/pbl/mmu.h
> new file mode 100644
> index 0000000000000000000000000000000000000000..4a00d8e528ab5452981347185c9114235f213e2b
> --- /dev/null
> +++ b/include/pbl/mmu.h
> @@ -0,0 +1,29 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +#ifndef __PBL_MMU_H
> +#define __PBL_MMU_H
> +
> +#include <linux/types.h>
> +
> +struct elf_image;
> +
> +/**
> + * pbl_mmu_setup_from_elf() - Configure MMU using ELF segment information
> + * @elf: ELF image structure from elf_open_binary_into()
> + * @membase: Base address of RAM
> + * @memsize: Size of RAM
> + *
> + * This function sets up the MMU with proper permissions based on ELF
> + * segment flags. It should be called after elf_load_inplace() has
> + * relocated the barebox image.
> + *
> + * Segment permissions are mapped as follows:
> + * PF_R | PF_X -> Read-only + executable (text)
> + * PF_R | PF_W -> Read-write (data, bss)
> + * PF_R -> Read-only (rodata)
> + *
> + * Return: 0 on success, negative error code on failure
> + */
> +int pbl_mmu_setup_from_elf(struct elf_image *elf, unsigned long membase,
> + unsigned long memsize);
> +
> +#endif /* __PBL_MMU_H */
> diff --git a/pbl/Makefile b/pbl/Makefile
> index f66391be7b2898388425657f54afcd6e4c72e3db..b78124cdcd2a4690be11d5503006723252b4904f 100644
> --- a/pbl/Makefile
> +++ b/pbl/Makefile
> @@ -9,3 +9,4 @@ pbl-$(CONFIG_HAVE_IMAGE_COMPRESSION) += decomp.o
> pbl-$(CONFIG_LIBFDT) += fdt.o
> pbl-$(CONFIG_PBL_CONSOLE) += console.o
> obj-pbl-y += handoff-data.o
> +obj-pbl-$(CONFIG_MMU) += mmu.o
> diff --git a/pbl/mmu.c b/pbl/mmu.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..853fdcba55699025ea1d2a49385747e29cb2debc
> --- /dev/null
> +++ b/pbl/mmu.c
> @@ -0,0 +1,111 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +// SPDX-FileCopyrightText: 2025 Sascha Hauer <s.hauer@pengutronix.de>, Pengutronix
> +
> +#define pr_fmt(fmt) "pbl-mmu: " fmt
> +
> +#include <common.h>
> +#include <elf.h>
> +#include <mmu.h>
> +#include <pbl/mmu.h>
> +#include <asm/mmu.h>
> +#include <linux/bits.h>
> +#include <linux/sizes.h>
> +
> +/*
> + * Map ELF segment permissions (p_flags) to architecture MMU flags
> + */
> +static unsigned int elf_flags_to_mmu_flags(u32 p_flags)
> +{
> + bool readable = p_flags & PF_R;
> + bool writable = p_flags & PF_W;
> + bool executable = p_flags & PF_X;
> +
> + if (readable && writable) {
> + /* Data, BSS: Read-write, cached, non-executable */
> + return MAP_CACHED;
> + } else if (readable && executable) {
> + /* Text: Read-only, cached, executable */
> + return MAP_CODE;
> + } else if (readable) {
> + /* Read-only data: Read-only, cached, non-executable */
> + return MAP_CACHED_RO;
> + } else {
> + /*
> + * Unusual: segment with no read permission.
> + * Map as uncached, non-executable for safety.
> + */
> + pr_warn("Segment with unusual permissions: flags=0x%x\n", p_flags);
> + return MAP_UNCACHED;
> + }
> +}
> +
> +int pbl_mmu_setup_from_elf(struct elf_image *elf, unsigned long membase,
> + unsigned long memsize)
> +{
> + void *phdr;
> + int i;
> + int phnum = elf_hdr_e_phnum(elf, elf->hdr_buf);
> + size_t phoff = elf_hdr_e_phoff(elf, elf->hdr_buf);
> + size_t phentsize = elf_size_of_phdr(elf);
> +
> + pr_debug("Setting up MMU from ELF segments\n");
> + pr_debug("ELF entry point: 0x%llx\n", elf->entry);
> + pr_debug("ELF loaded at: 0x%p - 0x%p\n", elf->low_addr, elf->high_addr);
> +
> + /*
> + * Iterate through all PT_LOAD segments and set up MMU permissions
> + * based on the segment's p_flags
> + */
> + for (i = 0; i < phnum; i++) {
> + phdr = elf->hdr_buf + phoff + i * phentsize;
> +
> + if (elf_phdr_p_type(elf, phdr) != PT_LOAD)
> + continue;
> +
> + u64 p_vaddr = elf_phdr_p_vaddr(elf, phdr);
> + u64 p_memsz = elf_phdr_p_memsz(elf, phdr);
> + u32 p_flags = elf_phdr_p_flags(elf, phdr);
> +
> + /*
> + * Calculate actual address after relocation.
> + * For ET_EXEC: reloc_offset is 0, use p_vaddr directly
> + * For ET_DYN: reloc_offset adjusts virtual to actual address
> + */
> + unsigned long addr = p_vaddr + elf->reloc_offset;
> + unsigned long size = p_memsz;
> + unsigned long segment_end = addr + size;
> +
> + /* Validate segment is within available memory */
> + if (segment_end < addr || /* overflow check */
> + addr < membase ||
> + segment_end > membase + memsize) {
> + pr_err("Segment %d outside memory bounds\n", i);
> + return -EINVAL;
> + }
> +
> + /* Validate alignment - warn and round if needed */
> + if (!IS_ALIGNED(addr, PAGE_SIZE) || !IS_ALIGNED(size, PAGE_SIZE)) {
> + pr_debug("Segment %d not page-aligned, rounding\n", i);
> + size = ALIGN(size, PAGE_SIZE);
> + }
> +
> + unsigned int mmu_flags = elf_flags_to_mmu_flags(p_flags);
> +
> + pr_debug("Segment %d: addr=0x%08lx size=0x%08lx flags=0x%x [%c%c%c] -> mmu_flags=0x%x\n",
> + i, addr, size, p_flags,
> + (p_flags & PF_R) ? 'R' : '-',
> + (p_flags & PF_W) ? 'W' : '-',
> + (p_flags & PF_X) ? 'X' : '-',
> + mmu_flags);
> +
> + /*
> + * Remap this segment with proper permissions.
> + * Use page-wise mapping to allow different permissions for
> + * different segments even if they're nearby.
> + */
> + pbl_remap_range((void *)addr, addr, size, mmu_flags);
> + }
> +
> + pr_debug("MMU setup from ELF complete\n");
> + return 0;
> +}
>
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 18/21] riscv: link ELF image into PBL
2026-01-06 12:53 ` [PATCH v2 18/21] riscv: link ELF image into PBL Sascha Hauer
@ 2026-01-06 14:11 ` Ahmad Fatoum
0 siblings, 0 replies; 46+ messages in thread
From: Ahmad Fatoum @ 2026-01-06 14:11 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5
On 1/6/26 1:53 PM, Sascha Hauer wrote:
> Instead of linking the raw binary barebox proper image into the PBL link
> the ELF image into the PBL. With this barebox proper starts with a properly
> linked and fully initialized C environment, so the calls to
> relocate_to_adr() and setup_c() can be removed from barebox proper.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
> ---
> arch/riscv/Kconfig | 1 +
> arch/riscv/boot/start.c | 6 ------
> arch/riscv/boot/uncompress.c | 21 ++++++++++++++++++++-
> 3 files changed, 21 insertions(+), 7 deletions(-)
>
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index 96d013d8514ac12de5c34a426262d85f8cf021b9..d9794354f4ed2e8bf7276e03b968c566002c2ec6 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -17,6 +17,7 @@ config RISCV
> select HAS_KALLSYMS
> select RISCV_TIMER if RISCV_SBI
> select HW_HAS_PCI
> + select PBL_IMAGE_ELF
> select HAVE_ARCH_BOARD_GENERIC_DT
> select HAVE_ARCH_BOOTM_OFTREE
>
> diff --git a/arch/riscv/boot/start.c b/arch/riscv/boot/start.c
> index 5091340c8a374fc360ab732ba01ec8516e82a83d..002ab3eccb4292d8d88a95f8b93163c970cb1d64 100644
> --- a/arch/riscv/boot/start.c
> +++ b/arch/riscv/boot/start.c
> @@ -123,12 +123,6 @@ void barebox_non_pbl_start(unsigned long membase, unsigned long memsize,
> unsigned long barebox_size = barebox_image_size + MAX_BSS_SIZE;
> unsigned long barebox_base = riscv_mem_barebox_image(membase, endmem, barebox_size);
>
> - relocate_to_current_adr();
> -
> - setup_c();
> -
> - barrier();
> -
> irq_init_vector(riscv_mode());
>
> pr_debug("memory at 0x%08lx, size 0x%08lx\n", membase, memsize);
> diff --git a/arch/riscv/boot/uncompress.c b/arch/riscv/boot/uncompress.c
> index 84142acf9c66fe1fcceb6ae63d15ac078ccddee7..fba04cf4fb6ed450d80b83a8a595346a3186f1e7 100644
> --- a/arch/riscv/boot/uncompress.c
> +++ b/arch/riscv/boot/uncompress.c
> @@ -32,6 +32,8 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
> unsigned long barebox_base;
> void *pg_start, *pg_end;
> unsigned long pc = get_pc();
> + struct elf_image elf;
> + int ret;
>
> irq_init_vector(riscv_mode());
>
> @@ -68,7 +70,24 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
>
> sync_caches_for_execution();
>
> - barebox = (void *)barebox_base;
> + ret = elf_open_binary_into(&elf, (void *)barebox_base);
> + if (ret) {
> + pr_err("Failed to open ELF binary: %d\n", ret);
> + hang();
> + }
> +
> + ret = elf_load_inplace(&elf);
> + if (ret) {
> + pr_err("Failed to relocate ELF: %d\n", ret);
> + hang();
> + }
> +
> + /*
> + * TODO: Add pbl_mmu_setup_from_elf() call when RISC-V PBL
> + * MMU support is implemented, similar to ARM
> + */
> +
> + barebox = (void *)elf.entry;
>
> pr_debug("jumping to uncompressed image at 0x%p. dtb=0x%p\n", barebox, fdt);
>
>
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 19/21] riscv: linker script: create separate PT_LOAD segments for text, rodata, and data
2026-01-06 12:53 ` [PATCH v2 19/21] riscv: linker script: create separate PT_LOAD segments for text, rodata, and data Sascha Hauer
@ 2026-01-06 14:12 ` Ahmad Fatoum
2026-01-08 15:25 ` Sascha Hauer
0 siblings, 1 reply; 46+ messages in thread
From: Ahmad Fatoum @ 2026-01-06 14:12 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5
Hi,
On 1/6/26 1:53 PM, Sascha Hauer wrote:
> Fix the linker script to generate three distinct PT_LOAD segments with
> correct permissions instead of combining .rodata with .data.
Does this need to be moved before the previous patch to avoid
intermittent breakage?
> Before this fix, the linker auto-generated only two PT_LOAD segments:
> 1. Text segment (PF_R|PF_X)
> 2. Data segment (PF_R|PF_W) - containing .rodata, .data, .bss, etc.
>
> This caused .rodata to be mapped with write permissions when
> riscv_mmu_setup_from_elf() or riscv_pmp_setup_from_elf() set up memory
> permissions based on ELF segments, defeating the W^X protection.
>
> With explicit PHDRS directives, we now generate three segments:
> 1. text segment (PF_R|PF_X): .text and related code sections
> 2. rodata segment (PF_R): .rodata and related read-only sections
> 3. data segment (PF_R|PF_W): .data, .bss, and related sections
>
> This ensures riscv_mmu_setup_from_elf() and riscv_pmp_setup_from_elf()
> correctly map .rodata as read-only instead of read-write.
>
> Also update the prelink script to handle binaries without a PT_DYNAMIC
> segment, as the new PHDRS layout may result in this case.
Did you observe this happening?
Cheers,
Ahmad
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 21/21] riscv: add ELF segment-based memory protection with MMU
2026-01-06 12:53 ` [PATCH v2 21/21] riscv: add ELF segment-based memory protection with MMU Sascha Hauer
@ 2026-01-06 14:20 ` Ahmad Fatoum
0 siblings, 0 replies; 46+ messages in thread
From: Ahmad Fatoum @ 2026-01-06 14:20 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5
Hi,
On 1/6/26 1:53 PM, Sascha Hauer wrote:
> Enable hardware-enforced W^X (Write XOR Execute) memory protection through
> ELF segment-based permissions using the RISC-V MMU.
>
> This implementation provides memory protection for RISC-V S-mode using
> Sv39 (RV64) or Sv32 (RV32) page tables.
>
> Linker Script Changes:
> - Add PHDRS directives to pbl.lds.S and barebox.lds.S
> - Create three separate PT_LOAD segments with proper permissions:
> * text segment (FLAGS(5) = PF_R|PF_X): code sections
> * rodata segment (FLAGS(4) = PF_R): read-only data
> * data segment (FLAGS(6) = PF_R|PF_W): data and BSS
> - Add 4K alignment between segments for page-granular protection
That's a separate patch from this.
> S-mode MMU Implementation (mmu.c):
> - Implement page table walking for Sv39/Sv32
> - pbl_remap_range(): remap segments with ELF-derived permissions
> - mmu_early_enable(): create identity mapping and enable SATP CSR
> - Map ELF flags to PTE bits:
> * MAP_CODE → PTE_R | PTE_X (read + execute)
> * MAP_CACHED_RO → PTE_R (read only)
> * MAP_CACHED → PTE_R | PTE_W (read + write)
>
> Integration:
> - Update uncompress.c to call mmu_early_enable() before decompression
> (enables caching for faster decompression)
> - Call pbl_mmu_setup_from_elf() after ELF relocation to apply final
> segment-based permissions
> - Uses portable pbl/mmu.c infrastructure to parse PT_LOAD segments
>
> Configuration:
> - Add CONFIG_MMU option (default y for RISCV_S_MODE)
> - Update asm/mmu.h with ARCH_HAS_REMAP and function declarations
>
> Security Benefits:
> - Text sections are read-only and executable (cannot be modified)
> - Read-only data sections are read-only and non-executable
> - Data sections are read-write and non-executable (cannot be executed)
> - Hardware-enforced W^X prevents code injection attacks
>
> This matches the ARM implementation philosophy and provides genuine
> security improvements on RISC-V S-mode platforms.
Mhm, I think this commit message needs to be toned down a bit..
> +#ifdef CONFIG_MMU
As with feedback on V1, why no IS_ENABLED() or stub?
> +#ifdef CONFIG_MMU
Likewise.
> +#ifdef __PBL__
See comments on v1.
> + case MAP_CACHED:
> + case MAP_UNCACHED:
Add a TODO here or some other comment that uncached memory is not support.
> +/* CSR access */
> +#define csr_read(csr) \
We already define these. Refer to feedback on v1.
> +#if defined(CONFIG_MMU) || defined(CONFIG_RISCV_PMP)
Not defined symbol (CONFIG_RISCV_PMP)
> index 32d9a7aca3b9a61d542bf3e21e27f1ac51f43ee2..ecd44d4ac756009cd44d7dedbda5a13f1ca3f93d 100644
> --- a/include/mmu.h
> +++ b/include/mmu.h
> @@ -20,6 +20,9 @@
> #define MAP_TYPE_MASK 0xFFFF
> #define MAP_ARCH(x) ((u16)~(x))
>
> +#include <asm/mmu.h>
> +#include <asm/io.h>
> +
> /*
> * Depending on the architecture the default mapping can be
> * cached or uncached. Without ARCH_HAS_REMAP being set this
> @@ -27,9 +30,6 @@
> */
> #define MAP_DEFAULT MAP_ARCH_DEFAULT
>
> -#include <asm/mmu.h>
> -#include <asm/io.h>
Why is this moved around?
> -
> static inline bool maptype_is_compatible(maptype_t active, maptype_t check)
> {
> active &= MAP_TYPE_MASK;
> @@ -47,7 +47,7 @@ static inline bool maptype_is_compatible(maptype_t active, maptype_t check)
> static inline int arch_remap_range(void *virt_addr, phys_addr_t phys_addr,
> size_t size, maptype_t map_type)
> {
> - if (maptype_is_compatible(map_type, MAP_ARCH_DEFAULT) &&
> + if (maptype_is_compatible(map_type, MAP_DEFAULT) &&
Unneeded change?
> phys_addr == virt_to_phys(virt_addr))
> return 0;
>
Cheers,
Ahmad
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 04/21] ARM: implement elf_apply_relocations() for ELF relocation support
2026-01-06 13:07 ` Ahmad Fatoum
@ 2026-01-06 14:25 ` Ahmad Fatoum
0 siblings, 0 replies; 46+ messages in thread
From: Ahmad Fatoum @ 2026-01-06 14:25 UTC (permalink / raw)
To: Sascha Hauer, BAREBOX; +Cc: Claude Sonnet 4.5
Hi,
On 1/6/26 2:07 PM, Ahmad Fatoum wrote:
>> +/*
>> + * Apply ARM32 ELF relocations
>> + */
>> +int elf_apply_relocations(struct elf_image *elf, const void *dyn_seg)
>> +{
>> + Elf32_Rel *rel;
>> + void *rel_ptr;
>> + u64 relsz;
>> + phys_addr_t base = (phys_addr_t)elf->reloc_offset;
>> + int ret;
>> +
>> + ret = elf_parse_dynamic_section_rel(elf, dyn_seg, &rel_ptr, &relsz);
>> + if (ret)
>> + return ret;
>> +
>> + rel = (Elf32_Rel *)rel_ptr;
>
> Nitpick: rel can be dropped and rel_ptr used instead.
>
>> +
>> + relocate_image(base, rel, (void *)rel + relsz, NULL, NULL);
By passing NULL here, we won't be able to handle R_ARM_ABS32
relocations. A compile-time check that verifies we don't have such
relocations in the first place would be apt.
Cheers,
Ahmad
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v2 19/21] riscv: linker script: create separate PT_LOAD segments for text, rodata, and data
2026-01-06 14:12 ` Ahmad Fatoum
@ 2026-01-08 15:25 ` Sascha Hauer
0 siblings, 0 replies; 46+ messages in thread
From: Sascha Hauer @ 2026-01-08 15:25 UTC (permalink / raw)
To: Ahmad Fatoum; +Cc: BAREBOX, Claude Sonnet 4.5
On Tue, Jan 06, 2026 at 03:12:59PM +0100, Ahmad Fatoum wrote:
> Hi,
>
> On 1/6/26 1:53 PM, Sascha Hauer wrote:
> > Fix the linker script to generate three distinct PT_LOAD segments with
> > correct permissions instead of combining .rodata with .data.
>
> Does this need to be moved before the previous patch to avoid
> intermittent breakage?
>
> > Before this fix, the linker auto-generated only two PT_LOAD segments:
> > 1. Text segment (PF_R|PF_X)
> > 2. Data segment (PF_R|PF_W) - containing .rodata, .data, .bss, etc.
> >
> > This caused .rodata to be mapped with write permissions when
> > riscv_mmu_setup_from_elf() or riscv_pmp_setup_from_elf() set up memory
> > permissions based on ELF segments, defeating the W^X protection.
> >
> > With explicit PHDRS directives, we now generate three segments:
> > 1. text segment (PF_R|PF_X): .text and related code sections
> > 2. rodata segment (PF_R): .rodata and related read-only sections
> > 3. data segment (PF_R|PF_W): .data, .bss, and related sections
> >
> > This ensures riscv_mmu_setup_from_elf() and riscv_pmp_setup_from_elf()
> > correctly map .rodata as read-only instead of read-write.
> >
> > Also update the prelink script to handle binaries without a PT_DYNAMIC
> > segment, as the new PHDRS layout may result in this case.
>
> Did you observe this happening?
I did during development, but not any longer. Maybe this is no longer
necessary due to other changes. I'll remove that for now.
Sascha
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
^ permalink raw reply [flat|nested] 46+ messages in thread
end of thread, other threads:[~2026-01-08 15:26 UTC | newest]
Thread overview: 46+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-01-06 12:53 [PATCH v2 00/21] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
2026-01-06 12:53 ` [PATCH v2 01/21] elf: only accept images matching the native ELF_CLASS Sascha Hauer
2026-01-06 12:58 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 02/21] elf: build for PBL as well Sascha Hauer
2026-01-06 13:26 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 03/21] elf: add dynamic relocation support Sascha Hauer
2026-01-06 13:51 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 04/21] ARM: implement elf_apply_relocations() for ELF " Sascha Hauer
2026-01-06 13:07 ` Ahmad Fatoum
2026-01-06 14:25 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 05/21] riscv: define generic relocate_image Sascha Hauer
2026-01-06 13:10 ` Ahmad Fatoum
2026-01-06 13:11 ` Sascha Hauer
2026-01-06 12:53 ` [PATCH v2 06/21] riscv: implement elf_apply_relocations() for ELF relocation support Sascha Hauer
2026-01-06 13:11 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 07/21] elf: implement elf_load_inplace() Sascha Hauer
2026-01-06 13:53 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 08/21] elf: create elf_open_binary_into() Sascha Hauer
2026-01-06 13:55 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 09/21] Makefile: add barebox.elf build target Sascha Hauer
2026-01-06 13:13 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 10/21] PBL: allow to link ELF image into PBL Sascha Hauer
2026-01-06 13:18 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 11/21] mmu: add MAP_CACHED_RO mapping type Sascha Hauer
2026-01-06 13:14 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 12/21] mmu: introduce pbl_remap_range() Sascha Hauer
2026-01-06 13:15 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 13/21] ARM: use relative jumps in exception table Sascha Hauer
2026-01-06 13:57 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 14/21] ARM: exceptions: make in-binary exception table const Sascha Hauer
2026-01-06 14:00 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 15/21] ARM: linker script: create separate PT_LOAD segments for text, rodata, and data Sascha Hauer
2026-01-06 14:05 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 16/21] ARM: link ELF image into PBL Sascha Hauer
2026-01-06 14:06 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 17/21] ARM: PBL: setup MMU with proper permissions from ELF segments Sascha Hauer
2026-01-06 14:10 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 18/21] riscv: link ELF image into PBL Sascha Hauer
2026-01-06 14:11 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 19/21] riscv: linker script: create separate PT_LOAD segments for text, rodata, and data Sascha Hauer
2026-01-06 14:12 ` Ahmad Fatoum
2026-01-08 15:25 ` Sascha Hauer
2026-01-06 12:53 ` [PATCH v2 20/21] riscv: Allwinner D1: Drop M-Mode Sascha Hauer
2026-01-06 13:11 ` Ahmad Fatoum
2026-01-06 12:53 ` [PATCH v2 21/21] riscv: add ELF segment-based memory protection with MMU Sascha Hauer
2026-01-06 14:20 ` Ahmad Fatoum
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox