* [PATCH v5 01/22] Makefile.compiler: add objcopy-option
From: Sascha Hauer @ 2026-01-16 8:12 UTC
To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum
Similar to the other *-option macros, this one tests whether the given
objcopy flags are supported.
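For illustration, a hedged usage sketch (not part of this patch; the
vmbarebox objcopy rule appears later in this series) that falls back to
--strip-all where --strip-section-headers is unsupported:

  OBJCOPYFLAGS_vmbarebox = $(call objcopy-option,--strip-section-headers,--strip-all)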
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
scripts/Makefile.compiler | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/scripts/Makefile.compiler b/scripts/Makefile.compiler
index 1d34239b3b..f2fdddd07b 100644
--- a/scripts/Makefile.compiler
+++ b/scripts/Makefile.compiler
@@ -69,6 +69,11 @@ cc-ifversion = $(shell [ $(call cc-version, $(CC)) $(1) $(2) ] && echo $(3))
ld-option = $(call try-run,\
$(CC) -x c /dev/null -c -o "$$TMPO" ; $(LD) $(1) "$$TMPO" -o "$$TMP",$(1),$(2))
+# objcopy-option
+# Usage: OBJCOPYFLAGS += $(call objcopy-option,--strip-section-headers,--strip-all)
+objcopy-option = $(call try-run,\
+ $(CC) -x c /dev/null -c -o "$$TMPO"; $(OBJCOPY) $(1) "$$TMPO" "$$TMP",$(1),$(2))
+
# Prefix -I with $(srctree) if it is not an absolute path.
# skip if -I has no parameter
addtree = $(if $(patsubst -I%,%,$(1)), \
--
2.47.3
* [PATCH v5 02/22] elf: only accept images matching the native ELF_CLASS
From: Sascha Hauer @ 2026-01-16 8:12 UTC
To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
common/elf.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/common/elf.c b/common/elf.c
index 34d790bad3..f772018deb 100644
--- a/common/elf.c
+++ b/common/elf.c
@@ -208,6 +208,9 @@ static int elf_check_image(struct elf_image *elf, void *buf)
return -ENOEXEC;
}
+ if (elf->class != ELF_CLASS)
+ return -EINVAL;
+
if (!elf_hdr_e_phnum(elf, buf)) {
pr_err("No phdr found.\n");
return -ENOEXEC;
--
2.47.3
* [PATCH v5 03/22] elf: build for PBL as well
From: Sascha Hauer @ 2026-01-16 8:12 UTC
To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum
We'll later link barebox proper as an ELF image into the PBL, so compile
ELF support for the PBL as well.
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
common/Makefile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/common/Makefile b/common/Makefile
index 751f3ba202..c937987cf9 100644
--- a/common/Makefile
+++ b/common/Makefile
@@ -14,7 +14,7 @@ obj-y += misc.o
obj-pbl-y += memsize.o
obj-y += resource.o
obj-pbl-y += bootsource.o
-obj-$(CONFIG_ELF) += elf.o
+obj-pbl-$(CONFIG_ELF) += elf.o
obj-y += restart.o
obj-y += poweroff.o
obj-y += slice.o
--
2.47.3
* [PATCH v5 04/22] elf: add elf segment iterator
From: Sascha Hauer @ 2026-01-16 8:12 UTC
To: BAREBOX; +Cc: Claude Sonnet 4.5
We currently have only one place that iterates over ELF segments, but
more are to come, so add an iterator for it.
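As an illustration (a sketch, not part of the diff below), a caller
walks the program headers like this:

  void *phdr;

  elf_for_each_segment(phdr, elf, buf) {
          if (elf_phdr_p_type(elf, phdr) != PT_LOAD)
                  continue;
          /* handle one loadable segment */
  }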
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
common/elf.c | 8 +++-----
include/elf.h | 6 ++++++
2 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/common/elf.c b/common/elf.c
index f772018deb..af73396b35 100644
--- a/common/elf.c
+++ b/common/elf.c
@@ -161,19 +161,17 @@ static int load_elf_to_memory(struct elf_image *elf)
static int load_elf_image_segments(struct elf_image *elf)
{
void *buf = elf->hdr_buf;
- void *phdr = (void *) (buf + elf_hdr_e_phoff(elf, buf));
- int i, ret;
+ void *phdr;
+ int ret;
/* File as already been loaded */
if (!list_empty(&elf->list))
return -EINVAL;
- for (i = 0; i < elf_hdr_e_phnum(elf, buf) ; ++i) {
+ elf_for_each_segment(phdr, elf, buf) {
ret = request_elf_segment(elf, phdr);
if (ret)
goto elf_release_regions;
-
- phdr += elf_size_of_phdr(elf);
}
/*
diff --git a/include/elf.h b/include/elf.h
index 994db642b0..dc1aa8d5d1 100644
--- a/include/elf.h
+++ b/include/elf.h
@@ -439,4 +439,10 @@ static inline unsigned long elf_size_of_phdr(struct elf_image *elf)
return sizeof(Elf64_Phdr);
}
+#define elf_for_each_segment(phdr, elf, buf) \
+ for (phdr = (void *)buf + elf_hdr_e_phoff(elf, buf); \
+ phdr < (void *)buf + elf_hdr_e_phoff(elf, buf) + \
+ elf_hdr_e_phnum(elf, buf) * elf_size_of_phdr(elf); \
+ phdr = (void *)phdr + elf_size_of_phdr(elf))
+
#endif /* _LINUX_ELF_H */
--
2.47.3
* [PATCH v5 05/22] elf: add dynamic relocation support
From: Sascha Hauer @ 2026-01-16 8:12 UTC
To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum
Add support for applying dynamic relocations to ELF binaries. This allows
loading ET_DYN (position-independent) binaries and ET_EXEC binaries at
custom load addresses.
Key changes:
- Add elf_image.reloc_offset to track offset between vaddr and load address
- Implement elf_compute_load_offset() to calculate relocation offset
- Add elf_set_load_address() API to specify custom load address
- Implement elf_find_dynamic_segment() to locate PT_DYNAMIC
- Add elf_relocate() to apply relocations
- Provide weak default elf_apply_relocations() stub for unsupported architectures
- Add ELF dynamic section accessors
The relocation offset type is unsigned long to properly handle pointer
arithmetic and avoid casting issues.
Architecture-specific implementations should override the weak
elf_apply_relocations() function to handle their relocation types.
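Taken together, a minimal caller-side sketch of the new flow (the file
name, the load address and the final jump are purely illustrative):

  struct elf_image *elf = elf_open("/mnt/image.elf");

  if (!IS_ERR(elf)) {
          /* optional: override the load address before loading */
          elf_set_load_address(elf, (void *)0x48000000);

          if (!elf_load(elf)) {
                  /* segments loaded, relocations applied;
                   * execution can continue at elf->entry
                   */
          }
          elf_close(elf);
  }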
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
---
common/elf.c | 282 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++--
include/elf.h | 64 +++++++++++++
2 files changed, 341 insertions(+), 5 deletions(-)
diff --git a/common/elf.c b/common/elf.c
index af73396b35..9f96943988 100644
--- a/common/elf.c
+++ b/common/elf.c
@@ -20,6 +20,18 @@ struct elf_segment {
void *phdr;
};
+static void *elf_phdr_relocated_paddr(struct elf_image *elf, void *phdr)
+{
+ void *dst;
+
+ if (elf->reloc_offset)
+ dst = (void *)(unsigned long)(elf->reloc_offset + elf_phdr_p_vaddr(elf, phdr));
+ else
+ dst = (void *)(unsigned long)elf_phdr_p_paddr(elf, phdr);
+
+ return dst;
+}
+
static int elf_request_region(struct elf_image *elf, resource_size_t start,
resource_size_t size, void *phdr)
{
@@ -60,9 +72,59 @@ static void elf_release_regions(struct elf_image *elf)
}
}
+static int elf_compute_load_offset(struct elf_image *elf)
+{
+ void *buf = elf->hdr_buf;
+ void *phdr = buf + elf_hdr_e_phoff(elf, buf);
+ u64 min_vaddr = (u64)-1;
+ u64 min_paddr = (u64)-1;
+
+ /* Find lowest p_vaddr and p_paddr in PT_LOAD segments */
+ elf_for_each_segment(phdr, elf, buf) {
+ if (elf_phdr_p_type(elf, phdr) == PT_LOAD) {
+ u64 vaddr = elf_phdr_p_vaddr(elf, phdr);
+ u64 paddr = elf_phdr_p_paddr(elf, phdr);
+
+ if (vaddr < min_vaddr)
+ min_vaddr = vaddr;
+ if (paddr < min_paddr)
+ min_paddr = paddr;
+ }
+ }
+
+ /*
+ * Determine base load address:
+ * 1. If user specified load_address, use it
+ * 2. Otherwise for ET_EXEC, use NULL (segments use p_paddr directly)
+ * 3. For ET_DYN, use lowest p_paddr
+ */
+ if (elf->load_address)
+ elf->base_load_addr = elf->load_address;
+ else if (elf->type == ET_EXEC)
+ elf->base_load_addr = NULL;
+ else
+ elf->base_load_addr = (void *)(phys_addr_t)min_paddr;
+
+ /*
+ * Calculate relocation offset:
+ * - For ET_EXEC with no custom load address: no offset needed
+ * - Otherwise: offset = base_load_addr - lowest_vaddr
+ */
+ if (elf->type == ET_EXEC && !elf->load_address)
+ elf->reloc_offset = 0;
+ else
+ elf->reloc_offset = ((unsigned long)elf->base_load_addr - min_vaddr);
+
+ pr_debug("ELF load: type=%s, base=%p, offset=%08lx\n",
+ elf->type == ET_EXEC ? "ET_EXEC" : "ET_DYN",
+ elf->base_load_addr, elf->reloc_offset);
+
+ return 0;
+}
+
static int request_elf_segment(struct elf_image *elf, void *phdr)
{
- void *dst = (void *) (phys_addr_t) elf_phdr_p_paddr(elf, phdr);
+ void *dst;
int ret;
u64 p_memsz = elf_phdr_p_memsz(elf, phdr);
@@ -73,6 +135,15 @@ static int request_elf_segment(struct elf_image *elf, void *phdr)
if (!p_memsz)
return 0;
+ /*
+ * Calculate destination address:
+ * - If reloc_offset is set (custom load address or ET_DYN):
+ * dst = reloc_offset + p_vaddr
+ * - Otherwise (ET_EXEC, no custom address):
+ * dst = p_paddr (original behavior)
+ */
+ dst = elf_phdr_relocated_paddr(elf, phdr);
+
if (dst < elf->low_addr)
elf->low_addr = dst;
if (dst + p_memsz > elf->high_addr)
@@ -124,7 +195,8 @@ static int load_elf_to_memory(struct elf_image *elf)
p_offset = elf_phdr_p_offset(elf, r->phdr);
p_filesz = elf_phdr_p_filesz(elf, r->phdr);
p_memsz = elf_phdr_p_memsz(elf, r->phdr);
- dst = (void *) (phys_addr_t) elf_phdr_p_paddr(elf, r->phdr);
+
+ dst = elf_phdr_relocated_paddr(elf, r->phdr);
pr_debug("Loading phdr offset 0x%llx to 0x%p (%llu bytes)\n",
p_offset, dst, p_filesz);
@@ -168,6 +240,11 @@ static int load_elf_image_segments(struct elf_image *elf)
if (!list_empty(&elf->list))
return -EINVAL;
+ /* Calculate load offset for ET_DYN */
+ ret = elf_compute_load_offset(elf);
+ if (ret)
+ return ret;
+
elf_for_each_segment(phdr, elf, buf) {
ret = request_elf_segment(elf, phdr);
if (ret)
@@ -194,6 +271,8 @@ static int load_elf_image_segments(struct elf_image *elf)
static int elf_check_image(struct elf_image *elf, void *buf)
{
+ u16 e_type;
+
if (memcmp(buf, ELFMAG, SELFMAG)) {
pr_err("ELF magic not found.\n");
return -EINVAL;
@@ -201,14 +280,17 @@ static int elf_check_image(struct elf_image *elf, void *buf)
elf->class = ((char *) buf)[EI_CLASS];
- if (elf_hdr_e_type(elf, buf) != ET_EXEC) {
- pr_err("Non EXEC ELF image.\n");
+ e_type = elf_hdr_e_type(elf, buf);
+ if (e_type != ET_EXEC && e_type != ET_DYN) {
+ pr_err("Unsupported ELF type: %u (only ET_EXEC and ET_DYN supported)\n", e_type);
return -ENOEXEC;
}
if (elf->class != ELF_CLASS)
return -EINVAL;
+ elf->type = e_type;
+
if (!elf_hdr_e_phnum(elf, buf)) {
pr_err("No phdr found.\n");
return -ENOEXEC;
@@ -334,9 +416,199 @@ struct elf_image *elf_open(const char *filename)
return elf_check_init(filename);
}
+void elf_set_load_address(struct elf_image *elf, void *addr)
+{
+ elf->load_address = addr;
+}
+
+static void *elf_find_dynamic_segment(struct elf_image *elf)
+{
+ void *buf = elf->hdr_buf;
+ void *phdr = buf + elf_hdr_e_phoff(elf, buf);
+
+ elf_for_each_segment(phdr, elf, buf) {
+ if (elf_phdr_p_type(elf, phdr) == PT_DYNAMIC)
+ return elf_phdr_relocated_paddr(elf, phdr);
+ }
+
+ return NULL; /* No PT_DYNAMIC segment */
+}
+
+/**
+ * elf_parse_dynamic_section - Parse the dynamic section and extract relocation info
+ * @elf: ELF image structure
+ * @dyn_seg: Pointer to the PT_DYNAMIC segment
+ * @rel_out: Output pointer to the relocation table (either REL or RELA)
+ * @relsz_out: Output size of the relocation table in bytes
+ * @is_rela: flag indicating RELA (true) vs REL (false) format is expected
+ *
+ * This is a generic function that works for both 32-bit and 64-bit ELF files,
+ * and handles both REL and RELA relocation formats.
+ *
+ * Returns: 0 on success, -EINVAL on error
+ */
+static int elf_parse_dynamic_section(struct elf_image *elf, const void *dyn_seg,
+ void **rel_out, u64 *relsz_out, void **symtab,
+ bool is_rela)
+{
+ const void *dyn = dyn_seg;
+ void *rel = NULL, *rela = NULL;
+ u64 relsz = 0, relasz = 0;
+ u64 relent = 0, relaent = 0;
+ phys_addr_t base = (phys_addr_t)elf->reloc_offset;
+ size_t expected_rel_size, expected_rela_size;
+
+ /* Calculate expected sizes based on ELF class */
+ if (ELF_CLASS == ELFCLASS32) {
+ expected_rel_size = sizeof(Elf32_Rel);
+ expected_rela_size = sizeof(Elf32_Rela);
+ } else {
+ expected_rel_size = sizeof(Elf64_Rel);
+ expected_rela_size = sizeof(Elf64_Rela);
+ }
+
+ /* Iterate through dynamic entries until DT_NULL */
+ while (elf_dyn_d_tag(elf, dyn) != DT_NULL) {
+ unsigned long tag = elf_dyn_d_tag(elf, dyn);
+
+ switch (tag) {
+ case DT_REL:
+ /* REL table address - needs to be adjusted by load offset */
+ rel = (void *)(unsigned long)(base + elf_dyn_d_ptr(elf, dyn));
+ break;
+ case DT_RELSZ:
+ relsz = elf_dyn_d_val(elf, dyn);
+ break;
+ case DT_RELENT:
+ relent = elf_dyn_d_val(elf, dyn);
+ break;
+ case DT_RELA:
+ /* RELA table address - needs to be adjusted by load offset */
+ rela = (void *)(unsigned long)(base + elf_dyn_d_ptr(elf, dyn));
+ break;
+ case DT_RELASZ:
+ relasz = elf_dyn_d_val(elf, dyn);
+ break;
+ case DT_RELAENT:
+ relaent = elf_dyn_d_val(elf, dyn);
+ break;
+ case DT_SYMTAB:
+ *symtab = (void *)(unsigned long)(base + elf_dyn_d_val(elf, dyn));
+ break;
+ default:
+ break;
+ }
+
+ dyn += elf_size_of_dyn(elf);
+ }
+
+ /* Check that we found exactly one relocation type */
+ if (rel && rela) {
+ pr_err("ELF has both REL and RELA relocations\n");
+ return -EINVAL;
+ }
+
+ if (rel && !is_rela) {
+ /* REL relocations */
+ if (!relsz || relent != expected_rel_size) {
+ pr_debug("No REL relocations or invalid relocation info\n");
+ return -EINVAL;
+ }
+ *rel_out = rel;
+ *relsz_out = relsz;
+
+ return 0;
+ } else if (rela && is_rela) {
+ /* RELA relocations */
+ if (!relasz || relaent != expected_rela_size) {
+ pr_debug("No RELA relocations or invalid relocation info\n");
+ return -EINVAL;
+ }
+ *rel_out = rela;
+ *relsz_out = relasz;
+
+ return 0;
+ }
+
+ pr_debug("No relocations found in dynamic section\n");
+
+ return -EINVAL;
+}
+
+int elf_parse_dynamic_section_rel(struct elf_image *elf, const void *dyn_seg,
+ void **rel_out, u64 *relsz_out, void **symtab)
+{
+ return elf_parse_dynamic_section(elf, dyn_seg, rel_out, relsz_out, symtab,
+ false);
+}
+
+int elf_parse_dynamic_section_rela(struct elf_image *elf, const void *dyn_seg,
+ void **rel_out, u64 *relsz_out, void **symtab)
+{
+ return elf_parse_dynamic_section(elf, dyn_seg, rel_out, relsz_out, symtab,
+ true);
+}
+
+/*
+ * Weak default implementation for architectures that don't support
+ * ELF relocations yet. Can be overridden by arch-specific implementation.
+ */
+int __weak elf_apply_relocations(struct elf_image *elf, const void *dyn_seg)
+{
+ pr_warn("ELF relocations not supported for this architecture\n");
+ return -ENOSYS;
+}
+
+static int elf_relocate(struct elf_image *elf)
+{
+ void *dyn_seg;
+
+ /*
+ * Relocations needed if:
+ * - ET_DYN (position-independent), OR
+ * - ET_EXEC with custom load address
+ */
+ if (elf->type == ET_EXEC && !elf->load_address)
+ return 0;
+
+ /* Find PT_DYNAMIC segment */
+ dyn_seg = elf_find_dynamic_segment(elf);
+ if (!dyn_seg) {
+ /*
+ * No PT_DYNAMIC segment found.
+ * For ET_DYN this is unusual but legal.
+ * For ET_EXEC with custom load address, this means no relocations
+ * can be applied - warn the user.
+ */
+ if (elf->type == ET_EXEC && elf->load_address) {
+ pr_warn("ET_EXEC loaded at custom address but no PT_DYNAMIC segment - "
+ "relocations cannot be applied\n");
+ } else {
+ pr_debug("No PT_DYNAMIC segment found\n");
+ }
+ return 0;
+ }
+
+ /* Call architecture-specific relocation handler */
+ return elf_apply_relocations(elf, dyn_seg);
+}
+
int elf_load(struct elf_image *elf)
{
- return load_elf_image_segments(elf);
+ int ret;
+
+ ret = load_elf_image_segments(elf);
+ if (ret)
+ return ret;
+
+ /* Apply relocations if needed */
+ ret = elf_relocate(elf);
+ if (ret) {
+ pr_err("Relocation failed: %d\n", ret);
+ return ret;
+ }
+
+ return 0;
}
void elf_close(struct elf_image *elf)
diff --git a/include/elf.h b/include/elf.h
index dc1aa8d5d1..6f77b53597 100644
--- a/include/elf.h
+++ b/include/elf.h
@@ -394,11 +394,15 @@ extern Elf64_Dyn _DYNAMIC [];
struct elf_image {
struct list_head list;
u8 class;
+ u16 type; /* ET_EXEC or ET_DYN */
u64 entry;
void *low_addr;
void *high_addr;
void *hdr_buf;
const char *filename;
+ void *load_address; /* User-specified load address (NULL = use p_paddr) */
+ void *base_load_addr; /* Calculated base address for ET_DYN */
+ unsigned long reloc_offset; /* Offset between p_vaddr and actual load address */
};
static inline size_t elf_get_mem_size(struct elf_image *elf)
@@ -411,6 +415,31 @@ struct elf_image *elf_open(const char *filename);
void elf_close(struct elf_image *elf);
int elf_load(struct elf_image *elf);
+/*
+ * Set the load address for the ELF file.
+ * Must be called before elf_load().
+ * If not set, ET_EXEC uses p_paddr, ET_DYN uses lowest p_paddr.
+ */
+void elf_set_load_address(struct elf_image *elf, void *addr);
+
+/*
+ * Architecture-specific relocation handler.
+ * Returns 0 on success, -ENOSYS if architecture doesn't support relocations,
+ * other negative error codes on failure.
+ */
+int elf_apply_relocations(struct elf_image *elf, const void *dyn_seg);
+
+/*
+ * Parse the dynamic section and extract relocation information.
+ * This is a generic function that works for both 32-bit and 64-bit ELF files,
+ * and handles both REL and RELA relocation formats.
+ * Returns 0 on success, -EINVAL on error.
+ */
+int elf_parse_dynamic_section_rel(struct elf_image *elf, const void *dyn_seg,
+ void **rel_out, u64 *relsz_out, void **symtab);
+int elf_parse_dynamic_section_rela(struct elf_image *elf, const void *dyn_seg,
+ void **rel_out, u64 *relsz_out, void **symtab);
+
#define ELF_GET_FIELD(__s, __field, __type) \
static inline __type elf_##__s##_##__field(struct elf_image *elf, void *arg) { \
if (elf->class == ELFCLASS32) \
@@ -426,10 +455,12 @@ ELF_GET_FIELD(hdr, e_phentsize, u16)
ELF_GET_FIELD(hdr, e_type, u16)
ELF_GET_FIELD(hdr, e_machine, u16)
ELF_GET_FIELD(phdr, p_paddr, u64)
+ELF_GET_FIELD(phdr, p_vaddr, u64)
ELF_GET_FIELD(phdr, p_filesz, u64)
ELF_GET_FIELD(phdr, p_memsz, u64)
ELF_GET_FIELD(phdr, p_type, u32)
ELF_GET_FIELD(phdr, p_offset, u64)
+ELF_GET_FIELD(phdr, p_flags, u32)
static inline unsigned long elf_size_of_phdr(struct elf_image *elf)
{
@@ -445,4 +476,37 @@ static inline unsigned long elf_size_of_phdr(struct elf_image *elf)
elf_hdr_e_phnum(elf, buf) * elf_size_of_phdr(elf); \
phdr = (void *)phdr + elf_size_of_phdr(elf))
+/* Dynamic section accessors */
+static inline s64 elf_dyn_d_tag(struct elf_image *elf, const void *arg)
+{
+ if (elf->class == ELFCLASS32)
+ return (s64)((Elf32_Dyn *)arg)->d_tag;
+ else
+ return (s64)((Elf64_Dyn *)arg)->d_tag;
+}
+
+static inline u64 elf_dyn_d_val(struct elf_image *elf, const void *arg)
+{
+ if (elf->class == ELFCLASS32)
+ return (u64)((Elf32_Dyn *)arg)->d_un.d_val;
+ else
+ return (u64)((Elf64_Dyn *)arg)->d_un.d_val;
+}
+
+static inline u64 elf_dyn_d_ptr(struct elf_image *elf, const void *arg)
+{
+ if (elf->class == ELFCLASS32)
+ return (u64)((Elf32_Dyn *)arg)->d_un.d_ptr;
+ else
+ return (u64)((Elf64_Dyn *)arg)->d_un.d_ptr;
+}
+
+static inline unsigned long elf_size_of_dyn(struct elf_image *elf)
+{
+ if (elf->class == ELFCLASS32)
+ return sizeof(Elf32_Dyn);
+ else
+ return sizeof(Elf64_Dyn);
+}
+
#endif /* _LINUX_ELF_H */
--
2.47.3
* [PATCH v5 06/22] ARM: implement elf_apply_relocations() for ELF relocation support
From: Sascha Hauer @ 2026-01-16 8:12 UTC
To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum
Implement architecture-specific ELF relocation handlers for ARM32 and ARM64.
The implementation reuses the existing relocate_image().
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/include/asm/elf.h | 7 +++++++
arch/arm/lib32/reloc.c | 19 +++++++++++++++++++
arch/arm/lib64/reloc.c | 21 +++++++++++++++++++--
3 files changed, 45 insertions(+), 2 deletions(-)
diff --git a/arch/arm/include/asm/elf.h b/arch/arm/include/asm/elf.h
index 0b4704a4a5..630c85f2b4 100644
--- a/arch/arm/include/asm/elf.h
+++ b/arch/arm/include/asm/elf.h
@@ -36,6 +36,13 @@ typedef struct user_fp elf_fpregset_t;
#define R_ARM_THM_CALL 10
#define R_ARM_THM_JUMP24 30
+/* Additional relocation types for dynamic linking */
+#define R_ARM_RELATIVE 23
+
+#define R_AARCH64_NONE 0
+#define R_AARCH64_ABS64 257
+#define R_AARCH64_RELATIVE 1027
+
/*
* These are used to set parameters in the core dumps.
*/
diff --git a/arch/arm/lib32/reloc.c b/arch/arm/lib32/reloc.c
index 378ba95b2f..edd3d7eb48 100644
--- a/arch/arm/lib32/reloc.c
+++ b/arch/arm/lib32/reloc.c
@@ -54,3 +54,22 @@ void __prereloc relocate_image(unsigned long offset,
if (dynend)
__memset(dynsym, 0, (unsigned long)dynend - (unsigned long)dynsym);
}
+
+/*
+ * Apply ARM32 ELF relocations
+ */
+int elf_apply_relocations(struct elf_image *elf, const void *dyn_seg)
+{
+ void *rel_ptr = NULL, *symtab = NULL;
+ u64 relsz;
+ phys_addr_t base = (phys_addr_t)elf->reloc_offset;
+ int ret;
+
+ ret = elf_parse_dynamic_section_rel(elf, dyn_seg, &rel_ptr, &relsz, &symtab);
+ if (ret)
+ return ret;
+
+ relocate_image(base, rel_ptr, rel_ptr + relsz, symtab, NULL);
+
+ return 0;
+}
diff --git a/arch/arm/lib64/reloc.c b/arch/arm/lib64/reloc.c
index 2288f9e2e3..b498157874 100644
--- a/arch/arm/lib64/reloc.c
+++ b/arch/arm/lib64/reloc.c
@@ -8,8 +8,6 @@
#include <debug_ll.h>
#include <asm/reloc.h>
-#define R_AARCH64_RELATIVE 1027
-
/*
* relocate binary to the currently running address
*/
@@ -45,3 +43,22 @@ void __prereloc relocate_image(unsigned long offset,
dstart += sizeof(*rel);
}
}
+
+/*
+ * Apply ARM64 ELF relocations
+ */
+int elf_apply_relocations(struct elf_image *elf, const void *dyn_seg)
+{
+ void *rela_ptr = NULL, *symtab = NULL;
+ u64 relasz;
+ phys_addr_t base = (phys_addr_t)elf->reloc_offset;
+ int ret;
+
+ ret = elf_parse_dynamic_section_rela(elf, dyn_seg, &rela_ptr, &relasz, &symtab);
+ if (ret)
+ return ret;
+
+ relocate_image(base, rela_ptr, rela_ptr + relasz, symtab, NULL);
+
+ return 0;
+}
--
2.47.3
* [PATCH v5 07/22] riscv: define generic relocate_image
From: Sascha Hauer @ 2026-01-16 8:12 UTC
To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum
For use by the ELF loader in the PBL to relocate barebox proper, export
a new relocate_image() capable of relocating barebox, and implement
relocate_to_current_adr() in terms of it.
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/riscv/lib/reloc.c | 27 +++++++++++++--------------
1 file changed, 13 insertions(+), 14 deletions(-)
diff --git a/arch/riscv/lib/reloc.c b/arch/riscv/lib/reloc.c
index 0c1ec8b488..18b13a7013 100644
--- a/arch/riscv/lib/reloc.c
+++ b/arch/riscv/lib/reloc.c
@@ -30,26 +30,15 @@ void sync_caches_for_execution(void)
local_flush_icache_all();
}
-void relocate_to_current_adr(void)
+static void relocate_image(unsigned long offset,
+ void *dstart, void *dend,
+ long *dynsym, long *dynend)
{
- unsigned long offset;
- unsigned long *dynsym;
- void *dstart, *dend;
Elf_Rela *rela;
- /* Get offset between linked address and runtime address */
- offset = get_runtime_offset();
if (!offset)
return;
- /*
- * We have yet to relocate, so using runtime_address
- * to compute the relocated address
- */
- dstart = runtime_address(__rel_dyn_start);
- dend = runtime_address(__rel_dyn_end);
- dynsym = runtime_address(__dynsym_start);
-
for (rela = dstart; (void *)rela < dend; rela++) {
unsigned long *fixup;
@@ -74,5 +63,15 @@ void relocate_to_current_adr(void)
}
}
+}
+
+void relocate_to_current_adr(void)
+{
+ relocate_image(get_runtime_offset(),
+ runtime_address(__rel_dyn_start),
+ runtime_address(__rel_dyn_end),
+ runtime_address(__dynsym_start),
+ NULL);
+
sync_caches_for_execution();
}
--
2.47.3
* [PATCH v5 08/22] riscv: implement elf_apply_relocations() for ELF relocation support
From: Sascha Hauer @ 2026-01-16 8:12 UTC
To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum
Add architecture-specific ELF relocation support for RISC-V,
enabling dynamic relocation of position-independent ELF binaries.
The implementation reuses the existing relocate_image().
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/riscv/lib/reloc.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/arch/riscv/lib/reloc.c b/arch/riscv/lib/reloc.c
index 18b13a7013..71d59d4ab6 100644
--- a/arch/riscv/lib/reloc.c
+++ b/arch/riscv/lib/reloc.c
@@ -75,3 +75,19 @@ void relocate_to_current_adr(void)
sync_caches_for_execution();
}
+
+int elf_apply_relocations(struct elf_image *elf, const void *dyn_seg)
+{
+ void *rela_ptr = NULL, *symtab = NULL;
+ u64 relasz;
+ phys_addr_t base = (phys_addr_t)elf->reloc_offset;
+ int ret;
+
+ ret = elf_parse_dynamic_section_rela(elf, dyn_seg, &rela_ptr, &relasz, &symtab);
+ if (ret)
+ return ret;
+
+ relocate_image(base, rela_ptr, rela_ptr + relasz, symtab, NULL);
+
+ return 0;
+}
--
2.47.3
* [PATCH v5 09/22] elf: implement elf_load_inplace()
From: Sascha Hauer @ 2026-01-16 8:12 UTC
To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum
Implement elf_load_inplace() to apply dynamic relocations to an ELF binary
that is already loaded in memory. Unlike elf_load(), this function does not
allocate memory or copy segments - it only modifies the existing image in
place.
This is useful for self-relocating loaders or when the ELF has been loaded
by external means (e.g., firmware or another bootloader).
For ET_DYN (position-independent) binaries, the relocation offset is
calculated relative to the first executable PT_LOAD segment (.text section),
taking into account the difference between the segment's virtual address
and its file offset.
The entry point is also adjusted to point to the relocated image.
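A sketch of the intended call sequence (image_buf is an illustrative
name for a buffer already sitting at its final address):

  struct elf_image *elf = elf_open_binary(image_buf);

  if (!IS_ERR(elf)) {
          if (!elf_load_inplace(elf)) {
                  /* elf->entry now points into the relocated image */
          }
  }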
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Acked-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
common/elf.c | 127 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
include/elf.h | 8 ++++
2 files changed, 135 insertions(+)
diff --git a/common/elf.c b/common/elf.c
index 9f96943988..6f173daaa6 100644
--- a/common/elf.c
+++ b/common/elf.c
@@ -622,3 +622,130 @@ void elf_close(struct elf_image *elf)
free(elf);
}
+
+/**
+ * elf_load_inplace() - Apply dynamic relocations to an ELF binary in place
+ * @elf: ELF image previously opened with elf_open_binary()
+ *
+ * This function applies dynamic relocations to an ELF binary that is already
+ * loaded at its target address in memory. Unlike elf_load(), this does not
+ * allocate memory or copy segments - it only modifies the existing image.
+ *
+ * This is useful for self-relocating loaders or when the ELF has been loaded
+ * by external means (e.g., loaded by firmware or another bootloader).
+ *
+ * The ELF image must have been previously opened with elf_open_binary().
+ *
+ * For ET_DYN (position-independent) binaries, the relocation offset is
+ * calculated relative to the first executable PT_LOAD segment (.text section).
+ *
+ * For ET_EXEC binaries, no relocation is applied as they are expected to
+ * be at their link-time addresses.
+ *
+ * Returns: 0 on success, negative error code on failure
+ */
+int elf_load_inplace(struct elf_image *elf)
+{
+ const void *dyn_seg;
+ void *buf, *phdr;
+ void *elf_buf;
+ int ret;
+ u64 text_vaddr = U64_MAX;
+ u64 text_vaddr_min = U64_MAX;
+ u64 text_offset = U64_MAX;
+
+ buf = elf->hdr_buf;
+ elf_buf = elf->hdr_buf;
+
+ /*
+ * First pass: Clear BSS segments (p_memsz > p_filesz) and find lowest
+ * virtual address.
+ * BSS clearing must be done before relocations as uninitialized data
+ * must be zeroed per C standard.
+ */
+ phdr = buf + elf_hdr_e_phoff(elf, buf);
+ elf_for_each_segment(phdr, elf, buf) {
+ if (elf_phdr_p_type(elf, phdr) == PT_LOAD) {
+ u64 p_offset = elf_phdr_p_offset(elf, phdr);
+ u64 p_filesz = elf_phdr_p_filesz(elf, phdr);
+ u64 p_memsz = elf_phdr_p_memsz(elf, phdr);
+
+ /* Clear BSS (uninitialized data) */
+ if (p_filesz < p_memsz) {
+ void *bss_start = elf_buf + p_offset + p_filesz;
+ size_t bss_size = p_memsz - p_filesz;
+ memset(bss_start, 0x00, bss_size);
+ }
+
+ text_vaddr = elf_phdr_p_vaddr(elf, phdr);
+
+ if (text_vaddr < text_vaddr_min) {
+ text_vaddr_min = text_vaddr;
+ text_offset = p_offset;
+ }
+ }
+ }
+
+ text_vaddr = text_vaddr_min;
+
+ /*
+ * Calculate relocation offset for the in-place binary.
+ * For ET_DYN, we need to find the first PT_LOAD segment
+ * and use it as the relocation base.
+ */
+ if (elf->type == ET_DYN) {
+
+ if (text_vaddr == U64_MAX) {
+ pr_err("No PT_LOAD segment found\n");
+ ret = -EINVAL;
+ goto out;
+ }
+
+ /*
+ * Calculate relocation offset relative to .text section:
+ * - .text is at file offset text_offset, so in memory at: elf_buf + text_offset
+ * - .text has virtual address text_vaddr
+ * - reloc_offset = (actual .text address) - (virtual .text address)
+ */
+ elf->reloc_offset = ((unsigned long)elf_buf + text_offset) - text_vaddr;
+
+ pr_debug("In-place ELF relocation: text_vaddr=0x%llx, text_offset=0x%llx, "
+ "load_addr=%p, offset=0x%08lx\n",
+ text_vaddr, text_offset, elf_buf, elf->reloc_offset);
+
+ /* Adjust entry point to point to relocated image */
+ elf->entry += elf->reloc_offset;
+ } else {
+ /*
+ * ET_EXEC binaries are at their link-time addresses,
+ * no relocation needed
+ */
+ elf->reloc_offset = 0;
+ }
+
+ /* Find PT_DYNAMIC segment */
+ dyn_seg = elf_find_dynamic_segment(elf);
+ if (!dyn_seg) {
+ /*
+ * No PT_DYNAMIC segment found.
+ * This is fine for statically-linked binaries or
+ * binaries without relocations.
+ */
+ pr_debug("No PT_DYNAMIC segment found\n");
+ ret = 0;
+ goto out;
+ }
+
+ /* Apply architecture-specific relocations */
+ ret = elf_apply_relocations(elf, dyn_seg);
+ if (ret) {
+ pr_err("In-place relocation failed: %d\n", ret);
+ goto out;
+ }
+
+ pr_debug("In-place ELF relocation completed successfully\n");
+ return 0;
+
+out:
+ return ret;
+}
diff --git a/include/elf.h b/include/elf.h
index 6f77b53597..8b5eb9584e 100644
--- a/include/elf.h
+++ b/include/elf.h
@@ -422,6 +422,14 @@ int elf_load(struct elf_image *elf);
*/
void elf_set_load_address(struct elf_image *elf, void *addr);
+/*
+ * Apply dynamic relocations to an ELF binary already loaded in memory.
+ * This modifies the ELF image in place without allocating new memory.
+ * Useful for self-relocating loaders or externally loaded binaries.
+ * The elf parameter must have been previously opened with elf_open_binary().
+ */
+int elf_load_inplace(struct elf_image *elf);
+
/*
* Architecture-specific relocation handler.
* Returns 0 on success, -ENOSYS if architecture doesn't support relocations,
--
2.47.3
* [PATCH v5 10/22] elf: create elf_open_binary_into()
From: Sascha Hauer @ 2026-01-16 8:12 UTC
To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum
elf_open_binary() returns a dynamically allocated struct elf_image *. We
do not have malloc in the PBL, so for better PBL support create
elf_open_binary_into(), which takes a caller-provided struct elf_image *
as argument.
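In the PBL, where no allocator is available, usage then looks roughly
like this (a sketch; patch 16 adds the real call site):

  struct elf_image elf;      /* caller-provided, e.g. on the stack */
  int ret;

  ret = elf_open_binary_into(&elf, (void *)barebox_base);
  if (ret)
          panic("Failed to open ELF binary: %d\n", ret);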
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
common/elf.c | 26 +++++++++++++++++++-------
include/elf.h | 1 +
2 files changed, 20 insertions(+), 7 deletions(-)
diff --git a/common/elf.c b/common/elf.c
index 6f173daaa6..62b60c082d 100644
--- a/common/elf.c
+++ b/common/elf.c
@@ -307,6 +307,23 @@ static void elf_init_struct(struct elf_image *elf)
elf->filename = NULL;
}
+int elf_open_binary_into(struct elf_image *elf, void *buf)
+{
+ int ret;
+
+ memset(elf, 0, sizeof(*elf));
+ elf_init_struct(elf);
+
+ elf->hdr_buf = buf;
+ ret = elf_check_image(elf, buf);
+ if (ret)
+ return ret;
+
+ elf->entry = elf_hdr_e_entry(elf, elf->hdr_buf);
+
+ return 0;
+}
+
struct elf_image *elf_open_binary(void *buf)
{
int ret;
@@ -316,17 +333,12 @@ struct elf_image *elf_open_binary(void *buf)
if (!elf)
return ERR_PTR(-ENOMEM);
- elf_init_struct(elf);
-
- elf->hdr_buf = buf;
- ret = elf_check_image(elf, buf);
+ ret = elf_open_binary_into(elf, buf);
if (ret) {
free(elf);
- return ERR_PTR(-EINVAL);
+ return ERR_PTR(ret);
}
- elf->entry = elf_hdr_e_entry(elf, elf->hdr_buf);
-
return elf;
}
diff --git a/include/elf.h b/include/elf.h
index 8b5eb9584e..f54145e4af 100644
--- a/include/elf.h
+++ b/include/elf.h
@@ -410,6 +410,7 @@ static inline size_t elf_get_mem_size(struct elf_image *elf)
return elf->high_addr - elf->low_addr;
}
+int elf_open_binary_into(struct elf_image *elf, void *buf);
struct elf_image *elf_open_binary(void *buf);
struct elf_image *elf_open(const char *filename);
void elf_close(struct elf_image *elf);
--
2.47.3
* [PATCH v5 11/22] Makefile: add vmbarebox build target
From: Sascha Hauer @ 2026-01-16 8:12 UTC
To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum
Add a build target to create vmbarebox, which provides an ELF-format
version of barebox that will later be linked into the PBL.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
Makefile | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/Makefile b/Makefile
index 3b31cecc22..2350592cde 100644
--- a/Makefile
+++ b/Makefile
@@ -1096,6 +1096,14 @@ barebox.fit: images/barebox-$(CONFIG_ARCH_LINUX_NAME).fit
barebox.srec: barebox
$(OBJCOPY) -O srec $< $@
+OBJCOPYFLAGS_vmbarebox = $(call objcopy-option,--strip-section-headers,--strip-all) \
+ --remove-section=.comment \
+ --remove-section=.note* \
+ --remove-section=.gnu.hash
+
+vmbarebox: barebox FORCE
+ $(call if_changed,objcopy)
+
quiet_cmd_barebox_proper__ = CC $@
cmd_barebox_proper__ = $(CC) -r -o $@ -Wl,--whole-archive $(BAREBOX_OBJS)
@@ -1378,7 +1386,7 @@ CLEAN_FILES += barebox System.map include/generated/barebox_default_env.h \
.tmp_version .tmp_barebox* barebox.bin barebox.map \
.tmp_kallsyms* compile_commands.json \
.tmp_barebox.o barebox.o barebox-flash-image \
- barebox.srec barebox.efi
+ barebox.srec barebox.efi vmbarebox
CLEAN_FILES += scripts/bareboxenv-target scripts/kernel-install-target \
scripts/bareboxcrc32-target scripts/bareboximd-target \
--
2.47.3
* [PATCH v5 12/22] PBL: allow to link ELF image into PBL
From: Sascha Hauer @ 2026-01-16 8:12 UTC
To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum
Some architectures want to link the barebox proper ELF image into the
PBL. Allow that and provide a Kconfig option to select the ELF image.
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
Makefile | 4 ++++
pbl/Kconfig | 9 +++++++++
2 files changed, 13 insertions(+)
diff --git a/Makefile b/Makefile
index 2350592cde..25b548712c 100644
--- a/Makefile
+++ b/Makefile
@@ -831,7 +831,11 @@ export KBUILD_BINARY ?= barebox.bin
# Also any assignments in arch/$(SRCARCH)/Makefile take precedence over
# the default value.
+ifeq ($(CONFIG_PBL_IMAGE_ELF),y)
+export BAREBOX_PROPER ?= vmbarebox
+else
export BAREBOX_PROPER ?= barebox.bin
+endif
barebox-flash-images: $(KBUILD_IMAGE)
@echo $^ > $@
diff --git a/pbl/Kconfig b/pbl/Kconfig
index cab9325d16..63f29cd613 100644
--- a/pbl/Kconfig
+++ b/pbl/Kconfig
@@ -21,6 +21,15 @@ config PBL_IMAGE_NO_PIGGY
want to use the piggy mechanism to load barebox proper.
It's so far only intended for sandbox.
+config PBL_IMAGE_ELF
+ bool
+ depends on PBL_IMAGE
+ select ELF
+ help
+ If yes, link ELF image into the PBL, otherwise a raw binary
+ is linked into the PBL. This must match the loader code in the
+ PBL.
+
config PBL_MULTI_IMAGES
bool
select PBL_IMAGE
--
2.47.3
* [PATCH v5 13/22] mmu: add MAP_CACHED_RO mapping type
From: Sascha Hauer @ 2026-01-16 8:12 UTC
To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum
ARM32 and ARM64 have ARCH_MAP_CACHED_RO. We'll move parts of the MMU
initialization to generic code later, so add a new mapping type to
include/mmu.h.
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu-common.c | 4 ++--
arch/arm/cpu/mmu-common.h | 3 +--
arch/arm/cpu/mmu_32.c | 4 ++--
arch/arm/cpu/mmu_64.c | 2 +-
include/mmu.h | 3 ++-
5 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/arch/arm/cpu/mmu-common.c b/arch/arm/cpu/mmu-common.c
index a1431c0ff4..67317f127c 100644
--- a/arch/arm/cpu/mmu-common.c
+++ b/arch/arm/cpu/mmu-common.c
@@ -22,7 +22,7 @@ const char *map_type_tostr(maptype_t map_type)
switch (map_type) {
case ARCH_MAP_CACHED_RWX: return "RWX";
- case ARCH_MAP_CACHED_RO: return "RO";
+ case MAP_CACHED_RO: return "RO";
case MAP_CACHED: return "CACHED";
case MAP_UNCACHED: return "UNCACHED";
case MAP_CODE: return "CODE";
@@ -158,7 +158,7 @@ static void mmu_remap_memory_banks(void)
}
remap_range((void *)code_start, code_size, MAP_CODE);
- remap_range((void *)rodata_start, rodata_size, ARCH_MAP_CACHED_RO);
+ remap_range((void *)rodata_start, rodata_size, MAP_CACHED_RO);
setup_trap_pages();
}
diff --git a/arch/arm/cpu/mmu-common.h b/arch/arm/cpu/mmu-common.h
index a111e15a21..b42c421ffd 100644
--- a/arch/arm/cpu/mmu-common.h
+++ b/arch/arm/cpu/mmu-common.h
@@ -12,7 +12,6 @@
#include <linux/bits.h>
#define ARCH_MAP_CACHED_RWX MAP_ARCH(2)
-#define ARCH_MAP_CACHED_RO MAP_ARCH(3)
#define ARCH_MAP_FLAG_PAGEWISE BIT(31)
@@ -32,7 +31,7 @@ static inline maptype_t arm_mmu_maybe_skip_permissions(maptype_t map_type)
switch (map_type & MAP_TYPE_MASK) {
case MAP_CODE:
case MAP_CACHED:
- case ARCH_MAP_CACHED_RO:
+ case MAP_CACHED_RO:
return ARCH_MAP_CACHED_RWX;
default:
return map_type;
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index caa41e1beb..eee1385a8f 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -304,7 +304,7 @@ static uint32_t get_pte_flags(maptype_t map_type)
switch (map_type & MAP_TYPE_MASK) {
case ARCH_MAP_CACHED_RWX:
return PTE_FLAGS_CACHED_V7_RWX;
- case ARCH_MAP_CACHED_RO:
+ case MAP_CACHED_RO:
return PTE_FLAGS_CACHED_RO_V7;
case MAP_CACHED:
return PTE_FLAGS_CACHED_V7;
@@ -320,7 +320,7 @@ static uint32_t get_pte_flags(maptype_t map_type)
}
} else {
switch (map_type & MAP_TYPE_MASK) {
- case ARCH_MAP_CACHED_RO:
+ case MAP_CACHED_RO:
case MAP_CODE:
return PTE_FLAGS_CACHED_RO_V4;
case ARCH_MAP_CACHED_RWX:
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index 56c6a21f2b..ddf1373ec0 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -159,7 +159,7 @@ static unsigned long get_pte_attrs(maptype_t map_type)
return attrs_xn() | MEM_ALLOC_WRITECOMBINE;
case MAP_CODE:
return CACHED_MEM | PTE_BLOCK_RO;
- case ARCH_MAP_CACHED_RO:
+ case MAP_CACHED_RO:
return attrs_xn() | CACHED_MEM | PTE_BLOCK_RO;
case ARCH_MAP_CACHED_RWX:
return CACHED_MEM;
diff --git a/include/mmu.h b/include/mmu.h
index f796198088..9f582f25e1 100644
--- a/include/mmu.h
+++ b/include/mmu.h
@@ -9,9 +9,10 @@
#define MAP_CACHED 1
#define MAP_FAULT 2
#define MAP_CODE 3
+#define MAP_CACHED_RO 4
#ifdef CONFIG_ARCH_HAS_DMA_WRITE_COMBINE
-#define MAP_WRITECOMBINE 4
+#define MAP_WRITECOMBINE 5
#else
#define MAP_WRITECOMBINE MAP_UNCACHED
#endif
--
2.47.3
* [PATCH v5 14/22] ARM: drop arm_fixup_vectors()
From: Sascha Hauer @ 2026-01-16 8:12 UTC
To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum
Add the missing "ax" flags to the exception table section. With this,
the jumps in the exception table are correctly relocated and we no
longer have to fix them up at runtime. Remove the now-unnecessary
arm_fixup_vectors().
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/exceptions_32.S | 54 +++++---------------------------------
arch/arm/cpu/interrupts_32.c | 5 +---
arch/arm/cpu/mmu_32.c | 2 --
arch/arm/cpu/no-mmu.c | 2 --
arch/arm/include/asm/barebox-arm.h | 4 ---
5 files changed, 8 insertions(+), 59 deletions(-)
diff --git a/arch/arm/cpu/exceptions_32.S b/arch/arm/cpu/exceptions_32.S
index dc3d42663c..68eb696fc7 100644
--- a/arch/arm/cpu/exceptions_32.S
+++ b/arch/arm/cpu/exceptions_32.S
@@ -127,58 +127,18 @@ fiq:
bad_save_user_regs
bl do_fiq
-#ifdef CONFIG_ARM_EXCEPTIONS
-/*
- * With relocatable binary support the runtime exception vectors do not match
- * the addresses in the binary. We have to fix them up during runtime
- */
-ENTRY(arm_fixup_vectors)
- ldr r0, =undefined_instruction
- ldr r1, =_undefined_instruction
- str r0, [r1]
- ldr r0, =software_interrupt
- ldr r1, =_software_interrupt
- str r0, [r1]
- ldr r0, =prefetch_abort
- ldr r1, =_prefetch_abort
- str r0, [r1]
- ldr r0, =data_abort
- ldr r1, =_data_abort
- str r0, [r1]
- ldr r0, =irq
- ldr r1, =_irq
- str r0, [r1]
- ldr r0, =fiq
- ldr r1, =_fiq
- str r0, [r1]
- bx lr
-ENDPROC(arm_fixup_vectors)
-#endif
-
-.section .text_exceptions
+.section .text_exceptions, "ax"
.globl extable
extable:
1: b 1b /* barebox_arm_reset_vector */
#ifdef CONFIG_ARM_EXCEPTIONS
- ldr pc, _undefined_instruction /* undefined instruction */
- ldr pc, _software_interrupt /* software interrupt (SWI) */
- ldr pc, _prefetch_abort /* prefetch abort */
- ldr pc, _data_abort /* data abort */
+ ldr pc, =undefined_instruction /* undefined instruction */
+ ldr pc, =software_interrupt /* software interrupt (SWI) */
+ ldr pc, =prefetch_abort /* prefetch abort */
+ ldr pc, =data_abort /* data abort */
1: b 1b /* (reserved) */
- ldr pc, _irq /* irq (interrupt) */
- ldr pc, _fiq /* fiq (fast interrupt) */
-.globl _undefined_instruction
-_undefined_instruction: .word undefined_instruction
-.globl _software_interrupt
-_software_interrupt: .word software_interrupt
-.globl _prefetch_abort
-_prefetch_abort: .word prefetch_abort
-.globl _data_abort
-_data_abort: .word data_abort
-.globl _irq
-_irq: .word irq
-.globl _fiq
-_fiq: .word fiq
+ ldr pc, =irq /* irq (interrupt) */
+ ldr pc, =fiq /* fiq (fast interrupt) */
#else
1: b 1b /* undefined instruction */
1: b 1b /* software interrupt (SWI) */
diff --git a/arch/arm/cpu/interrupts_32.c b/arch/arm/cpu/interrupts_32.c
index 0b88db10fe..af2231f2a7 100644
--- a/arch/arm/cpu/interrupts_32.c
+++ b/arch/arm/cpu/interrupts_32.c
@@ -231,10 +231,8 @@ static __maybe_unused int arm_init_vectors(void)
* First try to use the vectors where they actually are, works
* on ARMv7 and later.
*/
- if (!set_vector_table((unsigned long)__exceptions_start)) {
- arm_fixup_vectors();
+ if (!set_vector_table((unsigned long)__exceptions_start))
return 0;
- }
/*
* Next try high vectors at 0xffff0000.
@@ -265,6 +263,5 @@ void arm_pbl_init_exceptions(void)
return;
set_vbar((unsigned long)__exceptions_start);
- arm_fixup_vectors();
}
#endif
diff --git a/arch/arm/cpu/mmu_32.c b/arch/arm/cpu/mmu_32.c
index eee1385a8f..eae5a878e8 100644
--- a/arch/arm/cpu/mmu_32.c
+++ b/arch/arm/cpu/mmu_32.c
@@ -528,8 +528,6 @@ void create_vector_table(unsigned long adr)
get_pte_flags(MAP_CACHED), true);
}
- arm_fixup_vectors();
-
memset(vectors, 0, PAGE_SIZE);
memcpy(vectors, __exceptions_start, __exceptions_stop - __exceptions_start);
}
diff --git a/arch/arm/cpu/no-mmu.c b/arch/arm/cpu/no-mmu.c
index c4ef5d1f9d..8e00cffebf 100644
--- a/arch/arm/cpu/no-mmu.c
+++ b/arch/arm/cpu/no-mmu.c
@@ -58,8 +58,6 @@ static int nommu_v7_vectors_init(void)
cr &= ~CR_V;
set_cr(cr);
- arm_fixup_vectors();
-
vectors = xmemalign(PAGE_SIZE, PAGE_SIZE);
memset(vectors, 0, PAGE_SIZE);
memcpy(vectors, __exceptions_start, __exceptions_size);
diff --git a/arch/arm/include/asm/barebox-arm.h b/arch/arm/include/asm/barebox-arm.h
index e1d89d5684..99f8231194 100644
--- a/arch/arm/include/asm/barebox-arm.h
+++ b/arch/arm/include/asm/barebox-arm.h
@@ -45,12 +45,8 @@ unsigned long arm_mem_membase_get(void);
unsigned long arm_mem_endmem_get(void);
#ifdef CONFIG_ARM_EXCEPTIONS
-void arm_fixup_vectors(void);
ulong arm_get_vector_table(void);
#else
-static inline void arm_fixup_vectors(void)
-{
-}
static inline ulong arm_get_vector_table(void)
{
return ~0;
--
2.47.3
* [PATCH v5 15/22] ARM: linker script: create separate PT_LOAD segments for text, rodata, and data
From: Sascha Hauer @ 2026-01-16 8:12 UTC
To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum
Fix the linker scripts to generate three distinct PT_LOAD segments with
correct permissions instead of combining .rodata with .data.
Before this fix, the linker auto-generated only two PT_LOAD segments:
1. Text segment (PF_R|PF_X)
2. Data segment (PF_R|PF_W) - containing .rodata, .data, .bss, etc.
With explicit PHDRS directives, we now generate three segments:
1. text segment (PF_R|PF_X): .text and related code sections
2. rodata segment (PF_R): .rodata and unwind tables
3. data segment (PF_R|PF_W): .data, .bss, and related sections
With this we can set up the MMU properly from outside the ELF binary.
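The resulting layout can be checked with readelf (illustrative, not
part of the patch):

  readelf -l vmbarebox

which should list three PT_LOAD entries flagged R E, R and RW, plus a
PT_DYNAMIC entry.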
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/lib32/barebox.lds.S | 35 +++++++++++++++++++++++------------
arch/arm/lib64/barebox.lds.S | 31 ++++++++++++++++++++-----------
2 files changed, 43 insertions(+), 23 deletions(-)
diff --git a/arch/arm/lib32/barebox.lds.S b/arch/arm/lib32/barebox.lds.S
index c704dd6d70..c098993615 100644
--- a/arch/arm/lib32/barebox.lds.S
+++ b/arch/arm/lib32/barebox.lds.S
@@ -7,14 +7,23 @@
OUTPUT_FORMAT(BAREBOX_OUTPUT_FORMAT)
OUTPUT_ARCH(BAREBOX_OUTPUT_ARCH)
ENTRY(start)
+
+PHDRS
+{
+ text PT_LOAD FLAGS(5); /* PF_R | PF_X */
+ rodata PT_LOAD FLAGS(4); /* PF_R */
+ dynamic PT_DYNAMIC FLAGS(4); /* PF_R */
+ data PT_LOAD FLAGS(6); /* PF_R | PF_W */
+}
+
SECTIONS
{
. = 0x0;
- .image_start : { *(.__image_start) }
+ .image_start : { *(.__image_start) } :text
. = ALIGN(4);
- ._text : { *(._text) }
+ ._text : { *(._text) } :text
.text :
{
_stext = .;
@@ -27,7 +36,7 @@ SECTIONS
KEEP(*(.text_exceptions*))
__exceptions_stop = .;
*(.text*)
- }
+ } :text
BAREBOX_BARE_INIT_SIZE
. = ALIGN(4096);
@@ -35,7 +44,7 @@ SECTIONS
.rodata : {
*(.rodata*)
RO_DATA_SECTION
- }
+ } :rodata
#ifdef CONFIG_ARM_UNWIND
/*
@@ -46,20 +55,22 @@ SECTIONS
__start_unwind_idx = .;
*(.ARM.exidx*)
__stop_unwind_idx = .;
- }
+ } :rodata
.ARM.unwind_tab : {
__start_unwind_tab = .;
*(.ARM.extab*)
__stop_unwind_tab = .;
- }
+ } :rodata
#endif
+ .dynamic : { *(.dynamic) } :rodata :dynamic
+
. = ALIGN(4096);
+
__end_rodata = .;
_etext = .;
_sdata = .;
- . = ALIGN(4);
- .data : { *(.data*) }
+ .data : { *(.data*) } :data
. = .;
@@ -69,12 +80,12 @@ SECTIONS
BAREBOX_EFI_RUNTIME
- .image_end : { *(.__image_end) }
+ .image_end : { *(.__image_end) } :data
. = ALIGN(4);
- .__bss_start : { *(.__bss_start) }
- .bss : { *(.bss*) }
- .__bss_stop : { *(.__bss_stop) }
+ .__bss_start : { *(.__bss_start) } :data
+ .bss : { *(.bss*) } :data
+ .__bss_stop : { *(.__bss_stop) } :data
#ifdef CONFIG_ARM_SECURE_MONITOR
. = ALIGN(16);
diff --git a/arch/arm/lib64/barebox.lds.S b/arch/arm/lib64/barebox.lds.S
index 5ee5fbc374..1ce2c67df0 100644
--- a/arch/arm/lib64/barebox.lds.S
+++ b/arch/arm/lib64/barebox.lds.S
@@ -6,14 +6,23 @@
OUTPUT_FORMAT(BAREBOX_OUTPUT_FORMAT)
OUTPUT_ARCH(BAREBOX_OUTPUT_ARCH)
ENTRY(start)
+
+PHDRS
+{
+ text PT_LOAD FLAGS(5); /* PF_R | PF_X */
+ rodata PT_LOAD FLAGS(4); /* PF_R */
+ dynamic PT_DYNAMIC FLAGS(4); /* PF_R */
+ data PT_LOAD FLAGS(6); /* PF_R | PF_W */
+}
+
SECTIONS
{
. = 0x0;
- .image_start : { *(.__image_start) }
+ .image_start : { *(.__image_start) } :text
. = ALIGN(4);
- ._text : { *(._text) }
+ ._text : { *(._text) } :text
.text :
{
_stext = .;
@@ -22,7 +31,7 @@ SECTIONS
*(.text_bare_init*)
__bare_init_end = .;
*(.text*)
- }
+ } :text
BAREBOX_BARE_INIT_SIZE
. = ALIGN(4096);
@@ -30,7 +39,9 @@ SECTIONS
.rodata : {
*(.rodata*)
RO_DATA_SECTION
- }
+ } :rodata
+
+ .dynamic : { *(.dynamic) } :rodata :dynamic
. = ALIGN(4096);
@@ -38,20 +49,18 @@ SECTIONS
_etext = .;
_sdata = .;
- .data : { *(.data*) }
-
- BAREBOX_RELOCATION_TABLE
+ .data : { *(.data*) } :data
_edata = .;
BAREBOX_EFI_RUNTIME
- .image_end : { *(.__image_end) }
+ .image_end : { *(.__image_end) } :data
. = ALIGN(4);
- .__bss_start : { *(.__bss_start) }
- .bss : { *(.bss*) }
- .__bss_stop : { *(.__bss_stop) }
+ .__bss_start : { *(.__bss_start) } :data
+ .bss : { *(.bss*) } :data
+ .__bss_stop : { *(.__bss_stop) } :data
_end = .;
_barebox_image_size = __bss_start;
}
--
2.47.3
* [PATCH v5 16/22] ARM: link ELF image into PBL
From: Sascha Hauer @ 2026-01-16 8:12 UTC
To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum
Instead of linking the raw binary of barebox proper into the PBL, link
the ELF image into it. With this, barebox proper starts with a properly
linked and fully initialized C environment, so the calls to
relocate_to_adr() and setup_c() can be removed from barebox proper.
Also, we no longer have to link the entry point to the beginning of the
binary, as we can jump to the correct ELF entry point. With this we can
remove the text_entry section.
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/Kconfig | 1 +
arch/arm/cpu/start.c | 13 ++++---------
arch/arm/cpu/uncompress.c | 24 +++++++++++++++++-------
3 files changed, 22 insertions(+), 16 deletions(-)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 5123e9b140..65856977ab 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -18,6 +18,7 @@ config ARM
select HW_HAS_PCI
select ARCH_HAS_DMA_WRITE_COMBINE
select HAVE_EFI_LOADER if MMU # for payload unaligned accesses
+ select PBL_IMAGE_ELF
default y
config ARCH_LINUX_NAME
diff --git a/arch/arm/cpu/start.c b/arch/arm/cpu/start.c
index f7d4507e71..2498bdb894 100644
--- a/arch/arm/cpu/start.c
+++ b/arch/arm/cpu/start.c
@@ -127,8 +127,9 @@ static int barebox_memory_areas_init(void)
}
device_initcall(barebox_memory_areas_init);
-__noreturn __prereloc void barebox_non_pbl_start(unsigned long membase,
- unsigned long memsize, struct handoff_data *hd)
+__noreturn void barebox_non_pbl_start(unsigned long membase,
+ unsigned long memsize,
+ struct handoff_data *hd)
{
unsigned long endmem = membase + memsize;
unsigned long malloc_start, malloc_end;
@@ -139,12 +140,6 @@ __noreturn __prereloc void barebox_non_pbl_start(unsigned long membase,
if (IS_ENABLED(CONFIG_CPU_V7))
armv7_hyp_install();
- relocate_to_adr(barebox_base);
-
- setup_c();
-
- barrier();
-
pbl_barebox_break();
pr_debug("memory at 0x%08lx, size 0x%08lx\n", membase, memsize);
@@ -200,7 +195,7 @@ void start(unsigned long membase, unsigned long memsize, struct handoff_data *hd
* First function in the uncompressed image. We get here from
* the pbl. The stack already has been set up by the pbl.
*/
-void NAKED __prereloc __section(.text_entry) start(unsigned long membase,
+void __section(.text_entry) start(unsigned long membase,
unsigned long memsize, struct handoff_data *hd)
{
barebox_non_pbl_start(membase, memsize, hd);
diff --git a/arch/arm/cpu/uncompress.c b/arch/arm/cpu/uncompress.c
index b9fc1d04db..10df5fcba9 100644
--- a/arch/arm/cpu/uncompress.c
+++ b/arch/arm/cpu/uncompress.c
@@ -20,6 +20,7 @@
#include <asm/mmu.h>
#include <asm/unaligned.h>
#include <compressed-dtb.h>
+#include <elf.h>
#include <debug_ll.h>
@@ -41,6 +42,8 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
void *pg_start, *pg_end;
unsigned long pc = get_pc();
void *handoff_data;
+ struct elf_image elf;
+ int ret;
/* piggy data is not relocated, so determine the bounds now */
pg_start = runtime_address(input_data);
@@ -85,21 +88,28 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
else if (IS_ENABLED(CONFIG_ARMV7R_MPU))
set_cr(get_cr() | CR_C);
- pr_debug("uncompressing barebox binary at 0x%p (size 0x%08x) to 0x%08lx (uncompressed size: 0x%08x)\n",
+ pr_debug("uncompressing barebox ELF at 0x%p (size 0x%08x) to 0x%08lx (uncompressed size: 0x%08x)\n",
pg_start, pg_len, barebox_base, uncompressed_len);
pbl_barebox_uncompress((void*)barebox_base, pg_start, pg_len);
+ pr_debug("relocating ELF in place\n");
+
+ ret = elf_open_binary_into(&elf, (void *)barebox_base);
+ if (ret)
+ panic("Failed to open ELF binary: %d\n", ret);
+
+ ret = elf_load_inplace(&elf);
+ if (ret)
+ panic("Failed to relocate ELF: %d\n", ret);
+
+ barebox = (void *)(unsigned long)elf.entry;
+
handoff_data_move(handoff_data);
sync_caches_for_execution();
- if (IS_ENABLED(CONFIG_THUMB2_BAREBOX))
- barebox = (void *)(barebox_base + 1);
- else
- barebox = (void *)barebox_base;
-
- pr_debug("jumping to uncompressed image at 0x%p\n", barebox);
+ pr_debug("jumping to ELF entry point at 0x%p\n", barebox);
if (IS_ENABLED(CONFIG_CPU_V7) && boot_cpu_mode() == HYP_MODE)
armv7_switch_to_hyp();
--
2.47.3
^ permalink raw reply [flat|nested] 24+ messages in thread* [PATCH v5 17/22] ARM: cleanup barebox proper entry
2026-01-16 8:12 [PATCH v5 00/22] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (15 preceding siblings ...)
2026-01-16 8:12 ` [PATCH v5 16/22] ARM: link ELF image into PBL Sascha Hauer
@ 2026-01-16 8:12 ` Sascha Hauer
2026-01-16 8:12 ` [PATCH v5 18/22] ARM: PBL: setup MMU with proper permissions from ELF segments Sascha Hauer
` (5 subsequent siblings)
22 siblings, 0 replies; 24+ messages in thread
From: Sascha Hauer @ 2026-01-16 8:12 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum
As barebox proper is now an ELF file, we no longer need to map the entry
function to the start of the image. Just link it to wherever the linker
wants it and drop the text_entry section. Also, remove the start()
function and set the ELF entry to barebox_non_pbl_start() directly.
While at it, also remove the bare_init sections from the barebox proper
linker script, as they are only relevant to the PBL linker script, which
is a separate script.
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/start.c | 11 -----------
arch/arm/lib32/barebox.lds.S | 7 +------
arch/arm/lib64/barebox.lds.S | 7 +------
3 files changed, 2 insertions(+), 23 deletions(-)
diff --git a/arch/arm/cpu/start.c b/arch/arm/cpu/start.c
index 2498bdb894..c2f14736da 100644
--- a/arch/arm/cpu/start.c
+++ b/arch/arm/cpu/start.c
@@ -189,14 +189,3 @@ __noreturn void barebox_non_pbl_start(unsigned long membase,
start_barebox();
}
-
-void start(unsigned long membase, unsigned long memsize, struct handoff_data *hd);
-/*
- * First function in the uncompressed image. We get here from
- * the pbl. The stack already has been set up by the pbl.
- */
-void __section(.text_entry) start(unsigned long membase,
- unsigned long memsize, struct handoff_data *hd)
-{
- barebox_non_pbl_start(membase, memsize, hd);
-}
diff --git a/arch/arm/lib32/barebox.lds.S b/arch/arm/lib32/barebox.lds.S
index c098993615..02db3b9790 100644
--- a/arch/arm/lib32/barebox.lds.S
+++ b/arch/arm/lib32/barebox.lds.S
@@ -6,7 +6,7 @@
OUTPUT_FORMAT(BAREBOX_OUTPUT_FORMAT)
OUTPUT_ARCH(BAREBOX_OUTPUT_ARCH)
-ENTRY(start)
+ENTRY(barebox_non_pbl_start)
PHDRS
{
@@ -27,17 +27,12 @@ SECTIONS
.text :
{
_stext = .;
- *(.text_entry*)
- __bare_init_start = .;
- *(.text_bare_init*)
- __bare_init_end = .;
. = ALIGN(0x20);
__exceptions_start = .;
KEEP(*(.text_exceptions*))
__exceptions_stop = .;
*(.text*)
} :text
- BAREBOX_BARE_INIT_SIZE
. = ALIGN(4096);
__start_rodata = .;
diff --git a/arch/arm/lib64/barebox.lds.S b/arch/arm/lib64/barebox.lds.S
index 1ce2c67df0..bd76a0ca96 100644
--- a/arch/arm/lib64/barebox.lds.S
+++ b/arch/arm/lib64/barebox.lds.S
@@ -5,7 +5,7 @@
OUTPUT_FORMAT(BAREBOX_OUTPUT_FORMAT)
OUTPUT_ARCH(BAREBOX_OUTPUT_ARCH)
-ENTRY(start)
+ENTRY(barebox_non_pbl_start)
PHDRS
{
@@ -26,13 +26,8 @@ SECTIONS
.text :
{
_stext = .;
- *(.text_entry*)
- __bare_init_start = .;
- *(.text_bare_init*)
- __bare_init_end = .;
*(.text*)
} :text
- BAREBOX_BARE_INIT_SIZE
. = ALIGN(4096);
__start_rodata = .;
--
2.47.3
^ permalink raw reply [flat|nested] 24+ messages in thread* [PATCH v5 18/22] ARM: PBL: setup MMU with proper permissions from ELF segments
2026-01-16 8:12 [PATCH v5 00/22] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (16 preceding siblings ...)
2026-01-16 8:12 ` [PATCH v5 17/22] ARM: cleanup barebox proper entry Sascha Hauer
@ 2026-01-16 8:12 ` Sascha Hauer
2026-01-16 8:12 ` [PATCH v5 19/22] riscv: linker script: create separate PT_LOAD segments for text, rodata, and data Sascha Hauer
` (4 subsequent siblings)
22 siblings, 0 replies; 24+ messages in thread
From: Sascha Hauer @ 2026-01-16 8:12 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum
Move the complete MMU setup into the PBL by leveraging ELF segment
information to apply correct memory permissions before jumping to
barebox proper.
After ELF relocation, parse PT_LOAD segments and map each with
permissions derived from p_flags:
- Text segments (PF_R|PF_X): Read-only + executable (MAP_CODE)
- Data segments (PF_R|PF_W): Read-write (MAP_CACHED)
- RO data segments (PF_R): Read-only (MAP_CACHED_RO)
This ensures barebox proper starts with full W^X protection already
in place, eliminating the need for complex remapping in barebox proper.
The framework is portable - common ELF parsing in pbl/mmu.c uses
architecture-specific early_remap_range() exported from mmu_*.c.
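As a worked example (hypothetical addresses; the three-segment layout is
the one created in patch 15):

	/*
	 * An image relocated to 0x40000000 would typically be mapped as:
	 *
	 *   PT_LOAD  0x40000000  PF_R|PF_X  ->  MAP_CODE       (text)
	 *   PT_LOAD  0x40080000  PF_R       ->  MAP_CACHED_RO  (rodata)
	 *   PT_LOAD  0x400c0000  PF_R|PF_W  ->  MAP_CACHED     (data, bss)
	 */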
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/arm/cpu/mmu-common.c | 31 +++++++-------
arch/arm/cpu/uncompress.c | 10 +++++
include/pbl/mmu.h | 38 +++++++++++++++++
pbl/Makefile | 1 +
pbl/mmu.c | 105 ++++++++++++++++++++++++++++++++++++++++++++++
5 files changed, 169 insertions(+), 16 deletions(-)
diff --git a/arch/arm/cpu/mmu-common.c b/arch/arm/cpu/mmu-common.c
index 67317f127c..4f3ad4a794 100644
--- a/arch/arm/cpu/mmu-common.c
+++ b/arch/arm/cpu/mmu-common.c
@@ -109,28 +109,26 @@ static inline void remap_range_end(unsigned long start, unsigned long end,
remap_range((void *)start, end - start, map_type);
}
-static inline void remap_range_end_sans_text(unsigned long start, unsigned long end,
+static inline void remap_range_end_sans_image(unsigned long start, unsigned long end,
unsigned map_type)
{
- unsigned long text_start = (unsigned long)&_stext;
- unsigned long text_end = (unsigned long)&_etext;
+ unsigned long image_start = (unsigned long)&__image_start;
+ unsigned long image_end = (unsigned long)&__image_end;
- if (region_overlap_end_exclusive(start, end, text_start, text_end)) {
- remap_range_end(start, text_start, MAP_CACHED);
- /* skip barebox segments here, will be mapped later */
- start = text_end;
+ if (region_overlap_end_exclusive(start, end, image_start, image_end)) {
+ remap_range_end(start, image_start, MAP_CACHED);
+ /* skip barebox segments here, PBL has already mapped them */
+ start = image_end;
}
+ start = ALIGN(start, PAGE_SIZE);
+
remap_range_end(start, end, MAP_CACHED);
}
static void mmu_remap_memory_banks(void)
{
struct memory_bank *bank;
- unsigned long code_start = (unsigned long)&_stext;
- unsigned long code_size = (unsigned long)&__start_rodata - (unsigned long)&_stext;
- unsigned long rodata_start = (unsigned long)&__start_rodata;
- unsigned long rodata_size = (unsigned long)&__end_rodata - rodata_start;
/*
* Early mmu init will have mapped everything but the initial memory area
@@ -138,6 +136,10 @@ static void mmu_remap_memory_banks(void)
* all memory banks, so let's map all pages, excluding reserved memory areas
* and barebox text area cacheable.
*
+ * PBL has already set up the MMU with proper permissions for text, data
+ * and rodata based on ELF segment information, so we don't need to remap
+ * those here.
+ *
* This code will become much less complex once we switch over to using
* CONFIG_MEMORY_ATTRIBUTES for MMU as well.
*/
@@ -150,16 +152,13 @@ static void mmu_remap_memory_banks(void)
/* Skip reserved regions */
for_each_reserved_region(bank, rsv) {
if (pos != rsv->start)
- remap_range_end_sans_text(pos, rsv->start, MAP_CACHED);
+ remap_range_end_sans_image(pos, rsv->start, MAP_CACHED);
pos = rsv->end + 1;
}
- remap_range_end_sans_text(pos, bank->res->end + 1, MAP_CACHED);
+ remap_range_end_sans_image(pos, bank->res->end + 1, MAP_CACHED);
}
- remap_range((void *)code_start, code_size, MAP_CODE);
- remap_range((void *)rodata_start, rodata_size, MAP_CACHED_RO);
-
setup_trap_pages();
}
diff --git a/arch/arm/cpu/uncompress.c b/arch/arm/cpu/uncompress.c
index 10df5fcba9..9a9f391022 100644
--- a/arch/arm/cpu/uncompress.c
+++ b/arch/arm/cpu/uncompress.c
@@ -21,6 +21,7 @@
#include <asm/unaligned.h>
#include <compressed-dtb.h>
#include <elf.h>
+#include <pbl/mmu.h>
#include <debug_ll.h>
@@ -103,6 +104,15 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
if (ret)
panic("Failed to relocate ELF: %d\n", ret);
+ /*
+ * Now that the ELF image is relocated, we know the exact addresses
+ * of all segments. Set up MMU with proper permissions based on
+ * ELF segment flags (PF_R/W/X).
+ */
+ ret = pbl_mmu_setup_from_elf(&elf, membase, memsize);
+ if (ret)
+ panic("Failed to setup MMU from ELF: %d\n", ret);
+
barebox = (void *)(unsigned long)elf.entry;
handoff_data_move(handoff_data);
diff --git a/include/pbl/mmu.h b/include/pbl/mmu.h
new file mode 100644
index 0000000000..72537604e2
--- /dev/null
+++ b/include/pbl/mmu.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __PBL_MMU_H
+#define __PBL_MMU_H
+
+#include <linux/types.h>
+
+struct elf_image;
+
+/**
+ * pbl_mmu_setup_from_elf() - Configure MMU using ELF segment information
+ * @elf: ELF image structure from elf_open_binary_into()
+ * @membase: Base address of RAM
+ * @memsize: Size of RAM
+ *
+ * This function sets up the MMU with proper permissions based on ELF
+ * segment flags. It should be called after elf_load_inplace() has
+ * relocated the barebox image.
+ *
+ * Segment permissions are mapped as follows:
+ * PF_R | PF_X -> Read-only + executable (text)
+ * PF_R | PF_W -> Read-write (data, bss)
+ * PF_R -> Read-only (rodata)
+ *
+ * Return: 0 on success, negative error code on failure
+ */
+#if IS_ENABLED(CONFIG_MMU)
+int pbl_mmu_setup_from_elf(struct elf_image *elf, unsigned long membase,
+ unsigned long memsize);
+#else
+static inline int pbl_mmu_setup_from_elf(struct elf_image *elf,
+ unsigned long membase,
+ unsigned long memsize)
+{
+ return 0;
+}
+#endif
+
+#endif /* __PBL_MMU_H */
diff --git a/pbl/Makefile b/pbl/Makefile
index f66391be7b..b78124cdcd 100644
--- a/pbl/Makefile
+++ b/pbl/Makefile
@@ -9,3 +9,4 @@ pbl-$(CONFIG_HAVE_IMAGE_COMPRESSION) += decomp.o
pbl-$(CONFIG_LIBFDT) += fdt.o
pbl-$(CONFIG_PBL_CONSOLE) += console.o
obj-pbl-y += handoff-data.o
+obj-pbl-$(CONFIG_MMU) += mmu.o
diff --git a/pbl/mmu.c b/pbl/mmu.c
new file mode 100644
index 0000000000..744a76afa6
--- /dev/null
+++ b/pbl/mmu.c
@@ -0,0 +1,105 @@
+// SPDX-License-Identifier: GPL-2.0-only
+// SPDX-FileCopyrightText: 2025 Sascha Hauer <s.hauer@pengutronix.de>, Pengutronix
+
+#define pr_fmt(fmt) "pbl-mmu: " fmt
+
+#include <common.h>
+#include <elf.h>
+#include <mmu.h>
+#include <pbl/mmu.h>
+#include <asm/mmu.h>
+#include <linux/bits.h>
+#include <linux/sizes.h>
+
+/*
+ * Map ELF segment permissions (p_flags) to architecture MMU flags
+ */
+static unsigned int elf_flags_to_mmu_flags(u32 p_flags)
+{
+ bool readable = p_flags & PF_R;
+ bool writable = p_flags & PF_W;
+ bool executable = p_flags & PF_X;
+
+ if (readable && writable) {
+ /* Data, BSS: Read-write, cached, non-executable */
+ return MAP_CACHED;
+ } else if (readable && executable) {
+ /* Text: Read-only, cached, executable */
+ return MAP_CODE;
+ } else if (readable) {
+ /* Read-only data: Read-only, cached, non-executable */
+ return MAP_CACHED_RO;
+ } else {
+ /*
+ * Unusual: segment with no read permission.
+ * Map as uncached, non-executable for safety.
+ */
+ pr_warn("Segment with unusual permissions: flags=0x%x\n", p_flags);
+ return MAP_UNCACHED;
+ }
+}
+
+int pbl_mmu_setup_from_elf(struct elf_image *elf, unsigned long membase,
+ unsigned long memsize)
+{
+ void *phdr;
+ int i = -1;
+
+ pr_debug("Setting up MMU from ELF segments\n");
+ pr_debug("ELF loaded at: 0x%p - 0x%p\n", elf->low_addr, elf->high_addr);
+
+ /*
+ * Iterate through all PT_LOAD segments and set up MMU permissions
+ * based on the segment's p_flags
+ */
+ elf_for_each_segment(phdr, elf, elf->hdr_buf) {
+ i++;
+
+ if (elf_phdr_p_type(elf, phdr) != PT_LOAD)
+ continue;
+
+ u64 p_vaddr = elf_phdr_p_vaddr(elf, phdr);
+ u64 p_memsz = elf_phdr_p_memsz(elf, phdr);
+ u32 p_flags = elf_phdr_p_flags(elf, phdr);
+
+ /*
+ * Calculate actual address after relocation.
+ * For ET_EXEC: reloc_offset is 0, use p_vaddr directly
+ * For ET_DYN: reloc_offset adjusts virtual to actual address
+ */
+ unsigned long addr = p_vaddr + elf->reloc_offset;
+ unsigned long size = p_memsz;
+ unsigned long segment_end = addr + size;
+
+ /* Validate segment is within available memory */
+ if (segment_end < addr || /* overflow check */
+ addr < membase ||
+ segment_end > membase + memsize) {
+ pr_err("Segment %d outside memory bounds\n", i);
+ return -EINVAL;
+ }
+
+ /* Validate alignment - warn and round if needed */
+ if (!IS_ALIGNED(size, PAGE_SIZE)) {
+ pr_debug("Segment %d not page-aligned, rounding\n", i);
+ size = ALIGN(size, PAGE_SIZE);
+ }
+
+ unsigned int mmu_flags = elf_flags_to_mmu_flags(p_flags);
+
+ pr_debug("Segment %d: addr=0x%08lx size=0x%08lx flags=0x%x [%c%c%c] -> mmu_flags=0x%x\n",
+ i, addr, size, p_flags,
+ (p_flags & PF_R) ? 'R' : '-',
+ (p_flags & PF_W) ? 'W' : '-',
+ (p_flags & PF_X) ? 'X' : '-',
+ mmu_flags);
+
+ /*
+ * Remap this segment with proper permissions.
+ */
+ remap_range((void *)addr, size, mmu_flags);
+ }
+
+ pr_debug("MMU setup from ELF complete\n");
+ return 0;
+}
--
2.47.3
^ permalink raw reply [flat|nested] 24+ messages in thread* [PATCH v5 19/22] riscv: linker script: create separate PT_LOAD segments for text, rodata, and data
2026-01-16 8:12 [PATCH v5 00/22] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (17 preceding siblings ...)
2026-01-16 8:12 ` [PATCH v5 18/22] ARM: PBL: setup MMU with proper permissions from ELF segments Sascha Hauer
@ 2026-01-16 8:12 ` Sascha Hauer
2026-01-16 8:12 ` [PATCH v5 20/22] riscv: link ELF image into PBL Sascha Hauer
` (3 subsequent siblings)
22 siblings, 0 replies; 24+ messages in thread
From: Sascha Hauer @ 2026-01-16 8:12 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum
Fix the linker script to generate three distinct PT_LOAD segments with
correct permissions instead of combining .rodata with .data.
Before this fix, the linker auto-generated only two PT_LOAD segments:
1. Text segment (PF_R|PF_X)
2. Data segment (PF_R|PF_W) - containing .rodata, .data, .bss, etc.
With explicit PHDRS directives, we now generate three segments:
1. text segment (PF_R|PF_X): .text and related code sections
2. rodata segment (PF_R): .rodata and related read-only sections
3. data segment (PF_R|PF_W): .data, .bss, and related sections
With this we can set up the MMU properly from outside the ELF binary.
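For reference, the resulting layout can be checked on the vmbarebox ELF
target added earlier in this series, e.g. with readelf -lW, which should
now list three PT_LOAD entries (flags R E, R, and RW) plus one
PT_DYNAMIC (illustrative expectation, not captured output).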
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/riscv/lib/barebox.lds.S | 38 +++++++++++++++++++++++++-------------
1 file changed, 25 insertions(+), 13 deletions(-)
diff --git a/arch/riscv/lib/barebox.lds.S b/arch/riscv/lib/barebox.lds.S
index 03b3a96719..0cfa4bff57 100644
--- a/arch/riscv/lib/barebox.lds.S
+++ b/arch/riscv/lib/barebox.lds.S
@@ -16,14 +16,23 @@
OUTPUT_ARCH(BAREBOX_OUTPUT_ARCH)
ENTRY(start)
OUTPUT_FORMAT(BAREBOX_OUTPUT_FORMAT)
+
+PHDRS
+{
+ text PT_LOAD FLAGS(5); /* PF_R | PF_X */
+ rodata PT_LOAD FLAGS(4); /* PF_R */
+ dynamic PT_DYNAMIC FLAGS(4); /* PF_R */
+ data PT_LOAD FLAGS(6); /* PF_R | PF_W */
+}
+
SECTIONS
{
. = 0x0;
- .image_start : { *(.__image_start) }
+ .image_start : { *(.__image_start) } :text
. = ALIGN(4);
- ._text : { *(._text) }
+ ._text : { *(._text) } :text
.text :
{
_stext = .;
@@ -35,44 +44,47 @@ SECTIONS
KEEP(*(.text_exceptions*))
__exceptions_stop = .;
*(.text*)
- }
+ } :text
BAREBOX_BARE_INIT_SIZE
- . = ALIGN(4);
+ . = ALIGN(4096);
__start_rodata = .;
.rodata : {
*(.rodata*)
RO_DATA_SECTION
- }
+ } :rodata
+
+ .dynamic : { *(.dynamic) } :rodata :dynamic
__end_rodata = .;
_etext = .;
_sdata = .;
- . = ALIGN(4);
- .data : { *(.data*) }
+ . = ALIGN(4096);
+
+ .data : { *(.data*) } :data
/DISCARD/ : { *(.rela.plt*) }
.rela.dyn : {
__rel_dyn_start = .;
*(.rel*)
__rel_dyn_end = .;
- }
+ } :data
.dynsym : {
__dynsym_start = .;
*(.dynsym)
__dynsym_end = .;
- }
+ } :data
_edata = .;
- .image_end : { *(.__image_end) }
+ .image_end : { *(.__image_end) } :data
. = ALIGN(4);
- .__bss_start : { *(.__bss_start) }
- .bss : { *(.bss*) }
- .__bss_stop : { *(.__bss_stop) }
+ .__bss_start : { *(.__bss_start) } :data
+ .bss : { *(.bss*) } :data
+ .__bss_stop : { *(.__bss_stop) } :data
_end = .;
_barebox_image_size = __bss_start;
}
--
2.47.3
^ permalink raw reply [flat|nested] 24+ messages in thread* [PATCH v5 20/22] riscv: link ELF image into PBL
2026-01-16 8:12 [PATCH v5 00/22] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (18 preceding siblings ...)
2026-01-16 8:12 ` [PATCH v5 19/22] riscv: linker script: create separate PT_LOAD segments for text, rodata, and data Sascha Hauer
@ 2026-01-16 8:12 ` Sascha Hauer
2026-01-16 8:12 ` [PATCH v5 21/22] riscv: Allwinner D1: Drop M-Mode Sascha Hauer
` (2 subsequent siblings)
22 siblings, 0 replies; 24+ messages in thread
From: Sascha Hauer @ 2026-01-16 8:12 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum
Instead of linking the raw barebox proper binary into the PBL, link the
ELF image into the PBL. With this, barebox proper starts with a properly
linked and fully initialized C environment, so the calls to
relocate_to_adr() and setup_c() can be removed from barebox proper.
As barebox proper is now an ELF file, we no longer need to map the entry
function to the start of the image. Just link it to wherever the linker
wants it and drop the text_entry section. Also, remove the start()
function and set the ELF entry to barebox_non_pbl_start() directly.
While at it, also remove the bare_init sections from the barebox proper
linker script, as they are only relevant to the PBL linker script, which
is a separate script.
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/riscv/Kconfig | 1 +
arch/riscv/boot/start.c | 19 +------------------
arch/riscv/boot/uncompress.c | 17 ++++++++++++++++-
arch/riscv/lib/barebox.lds.S | 7 +------
4 files changed, 19 insertions(+), 25 deletions(-)
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 96d013d851..d9794354f4 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -17,6 +17,7 @@ config RISCV
select HAS_KALLSYMS
select RISCV_TIMER if RISCV_SBI
select HW_HAS_PCI
+ select PBL_IMAGE_ELF
select HAVE_ARCH_BOARD_GENERIC_DT
select HAVE_ARCH_BOOTM_OFTREE
diff --git a/arch/riscv/boot/start.c b/arch/riscv/boot/start.c
index 5091340c8a..15bb91ac1b 100644
--- a/arch/riscv/boot/start.c
+++ b/arch/riscv/boot/start.c
@@ -114,7 +114,7 @@ device_initcall(barebox_memory_areas_init);
* First function in the uncompressed image. We get here from
* the pbl. The stack already has been set up by the pbl.
*/
-__noreturn __no_sanitize_address __section(.text_entry)
+__noreturn
void barebox_non_pbl_start(unsigned long membase, unsigned long memsize,
void *boarddata)
{
@@ -123,12 +123,6 @@ void barebox_non_pbl_start(unsigned long membase, unsigned long memsize,
unsigned long barebox_size = barebox_image_size + MAX_BSS_SIZE;
unsigned long barebox_base = riscv_mem_barebox_image(membase, endmem, barebox_size);
- relocate_to_current_adr();
-
- setup_c();
-
- barrier();
-
irq_init_vector(riscv_mode());
pr_debug("memory at 0x%08lx, size 0x%08lx\n", membase, memsize);
@@ -183,14 +177,3 @@ void barebox_non_pbl_start(unsigned long membase, unsigned long memsize,
start_barebox();
}
-
-void start(unsigned long membase, unsigned long memsize, void *boarddata);
-/*
- * First function in the uncompressed image. We get here from
- * the pbl. The stack already has been set up by the pbl.
- */
-void __no_sanitize_address __section(.text_entry) start(unsigned long membase,
- unsigned long memsize, void *boarddata)
-{
- barebox_non_pbl_start(membase, memsize, boarddata);
-}
diff --git a/arch/riscv/boot/uncompress.c b/arch/riscv/boot/uncompress.c
index 84142acf9c..9527dd1d7d 100644
--- a/arch/riscv/boot/uncompress.c
+++ b/arch/riscv/boot/uncompress.c
@@ -32,6 +32,8 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
unsigned long barebox_base;
void *pg_start, *pg_end;
unsigned long pc = get_pc();
+ struct elf_image elf;
+ int ret;
irq_init_vector(riscv_mode());
@@ -68,7 +70,20 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
sync_caches_for_execution();
- barebox = (void *)barebox_base;
+ ret = elf_open_binary_into(&elf, (void *)barebox_base);
+ if (ret)
+ panic("Failed to open ELF binary: %d\n", ret);
+
+ ret = elf_load_inplace(&elf);
+ if (ret)
+ panic("Failed to relocate ELF: %d\n", ret);
+
+ /*
+ * TODO: Add pbl_mmu_setup_from_elf() call when RISC-V PBL
+ * MMU support is implemented, similar to ARM
+ */
+
+ barebox = (void *)(unsigned long)elf.entry;
pr_debug("jumping to uncompressed image at 0x%p. dtb=0x%p\n", barebox, fdt);
diff --git a/arch/riscv/lib/barebox.lds.S b/arch/riscv/lib/barebox.lds.S
index 0cfa4bff57..e266693809 100644
--- a/arch/riscv/lib/barebox.lds.S
+++ b/arch/riscv/lib/barebox.lds.S
@@ -14,7 +14,7 @@
#include <asm/barebox.lds.h>
OUTPUT_ARCH(BAREBOX_OUTPUT_ARCH)
-ENTRY(start)
+ENTRY(barebox_non_pbl_start)
OUTPUT_FORMAT(BAREBOX_OUTPUT_FORMAT)
PHDRS
@@ -36,16 +36,11 @@ SECTIONS
.text :
{
_stext = .;
- *(.text_entry*)
- __bare_init_start = .;
- *(.text_bare_init*)
- __bare_init_end = .;
__exceptions_start = .;
KEEP(*(.text_exceptions*))
__exceptions_stop = .;
*(.text*)
} :text
- BAREBOX_BARE_INIT_SIZE
. = ALIGN(4096);
__start_rodata = .;
--
2.47.3
^ permalink raw reply [flat|nested] 24+ messages in thread* [PATCH v5 21/22] riscv: Allwinner D1: Drop M-Mode
2026-01-16 8:12 [PATCH v5 00/22] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (19 preceding siblings ...)
2026-01-16 8:12 ` [PATCH v5 20/22] riscv: link ELF image into PBL Sascha Hauer
@ 2026-01-16 8:12 ` Sascha Hauer
2026-01-16 8:12 ` [PATCH v5 22/22] riscv: add ELF segment-based memory protection with MMU Sascha Hauer
2026-01-19 12:23 ` [PATCH v5 00/22] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
22 siblings, 0 replies; 24+ messages in thread
From: Sascha Hauer @ 2026-01-16 8:12 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5, Ahmad Fatoum
The Allwinner D1 selects both RISCV_M_MODE and RISCV_S_MODE. The board
code uses barebox_riscv_supervisor_entry() and not barebox_riscv_machine_entry(),
which indicates RISCV_M_MODE was only selected by accident. Remove it.
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/riscv/Kconfig.socs | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/riscv/Kconfig.socs b/arch/riscv/Kconfig.socs
index 4a3b56b5ff..0d9984dd28 100644
--- a/arch/riscv/Kconfig.socs
+++ b/arch/riscv/Kconfig.socs
@@ -123,7 +123,6 @@ if SOC_ALLWINNER_SUN20I
config BOARD_ALLWINNER_D1
bool "Allwinner D1 Nezha"
select RISCV_S_MODE
- select RISCV_M_MODE
def_bool y
endif
--
2.47.3
^ permalink raw reply [flat|nested] 24+ messages in thread* [PATCH v5 22/22] riscv: add ELF segment-based memory protection with MMU
2026-01-16 8:12 [PATCH v5 00/22] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (20 preceding siblings ...)
2026-01-16 8:12 ` [PATCH v5 21/22] riscv: Allwinner D1: Drop M-Mode Sascha Hauer
@ 2026-01-16 8:12 ` Sascha Hauer
2026-01-19 12:23 ` [PATCH v5 00/22] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
22 siblings, 0 replies; 24+ messages in thread
From: Sascha Hauer @ 2026-01-16 8:12 UTC (permalink / raw)
To: BAREBOX; +Cc: Claude Sonnet 4.5
Enable hardware-enforced W^X (Write XOR Execute) memory protection through
ELF segment-based permissions using the RISC-V MMU.
This implementation provides memory protection for RISC-V S-mode using
Sv39 (RV64) or Sv32 (RV32) page tables.
S-mode MMU Implementation (mmu.c):
- Implement page table walking for Sv39/Sv32
- pbl_remap_range(): remap segments with ELF-derived permissions
- mmu_early_enable(): create identity mapping and enable SATP CSR
- Map ELF flags to PTE bits:
* MAP_CODE → PTE_R | PTE_X (read + execute)
* MAP_CACHED_RO → PTE_R (read only)
* MAP_CACHED → PTE_R | PTE_W (read + write)
Integration:
- Update uncompress.c to call mmu_early_enable() before decompression
(enables caching for faster decompression)
- Call pbl_mmu_setup_from_elf() after ELF relocation to apply final
segment-based permissions
- Uses portable pbl/mmu.c infrastructure to parse PT_LOAD segments
Configuration:
- Add CONFIG_MMU option (default y for RISCV_S_MODE)
- Update asm/mmu.h with ARCH_HAS_REMAP and function declarations
Security Benefits:
- Text sections are read-only and executable (cannot be modified)
- Read-only data sections are read-only and non-executable
- Data sections are read-write and non-executable (cannot be executed)
- Hardware-enforced W^X prevents code injection attacks
This is based on the current ARM implementation.
As we are not yet confident enough in the implementation, do not enable
the MMU by default, but add the generated virt32_mmu_defconfig and
rv64i_mmu_defconfig configurations.
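As a worked example of the SATP value computed in mmu_early_enable()
below (Sv39 encoding as defined by the RISC-V privileged specification;
the root-table address is hypothetical):

	/*
	 * Sv39: MODE = 8 in satp[63:60], PPN = root_table_phys >> 12.
	 * For a root page table at 0x80200000:
	 *   PPN  = 0x80200000 >> 12      = 0x80200
	 *   satp = (8UL << 60) | 0x80200 = 0x8000000000080200
	 * Sv32 instead uses MODE = 1 in satp[31].
	 */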
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
arch/riscv/Kconfig | 16 ++
arch/riscv/Makefile | 7 +
arch/riscv/boot/uncompress.c | 19 +-
arch/riscv/cpu/Makefile | 1 +
arch/riscv/cpu/mmu.c | 354 ++++++++++++++++++++++++++++++++
arch/riscv/cpu/mmu.h | 120 +++++++++++
arch/riscv/include/asm/asm.h | 3 +-
arch/riscv/include/asm/mmu.h | 36 ++++
common/boards/configs/enable_mmu.config | 1 +
9 files changed, 554 insertions(+), 3 deletions(-)
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index d9794354f4..cc9b15ce64 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -129,4 +129,20 @@ config RISCV_MULTI_MODE
config RISCV_SBI
def_bool RISCV_S_MODE
+config MMU
+ bool "MMU-based memory protection"
+ help
+ Enable MMU (Memory Management Unit) support for RISC-V S-mode.
+ This provides hardware-enforced W^X (Write XOR Execute) memory
+ protection using page tables (Sv39 for RV64, Sv32 for RV32).
+
+ The PBL sets up page table entries based on ELF segment permissions,
+ ensuring that:
+ - Text sections are read-only and executable
+ - Read-only data sections are read-only and non-executable
+ - Data sections are read-write and non-executable
+
+ Say Y if running in S-mode (supervisor mode) with virtual memory.
+ Say N if running in M-mode or if you don't need memory protection.
+
endmenu
diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
index 71ca82fe8d..c7682d4bd0 100644
--- a/arch/riscv/Makefile
+++ b/arch/riscv/Makefile
@@ -2,6 +2,13 @@
KBUILD_DEFCONFIG := rv64i_defconfig
+generated_configs += virt32_mmu_defconfig rv64i_mmu_defconfig
+
+virt32_mmu_defconfig:
+ $(call merge_into_defconfig,virt32_defconfig,enable_mmu)
+rv64i_mmu_defconfig:
+ $(call merge_into_defconfig,rv64i_defconfig,enable_mmu)
+
KBUILD_CPPFLAGS += -fno-strict-aliasing
ifeq ($(CONFIG_ARCH_RV32I),y)
diff --git a/arch/riscv/boot/uncompress.c b/arch/riscv/boot/uncompress.c
index 9527dd1d7d..e51f1b0121 100644
--- a/arch/riscv/boot/uncompress.c
+++ b/arch/riscv/boot/uncompress.c
@@ -10,11 +10,14 @@
#include <init.h>
#include <linux/sizes.h>
#include <pbl.h>
+#include <pbl/mmu.h>
#include <asm/barebox-riscv.h>
#include <asm-generic/memory_layout.h>
#include <asm/sections.h>
#include <asm/unaligned.h>
+#include <asm/mmu.h>
#include <asm/irq.h>
+#include <elf.h>
#include <debug_ll.h>
@@ -63,6 +66,14 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
free_mem_ptr = riscv_mem_early_malloc(membase, endmem);
free_mem_end_ptr = riscv_mem_early_malloc_end(membase, endmem);
+ /*
+ * Enable MMU early to enable caching for faster decompression.
+ * This creates an initial identity mapping that will be refined
+ * later based on ELF segments.
+ */
+ if (IS_ENABLED(CONFIG_MMU))
+ mmu_early_enable(membase, memsize, barebox_base);
+
pr_debug("uncompressing barebox binary at 0x%p (size 0x%08x) to 0x%08lx (uncompressed size: 0x%08x)\n",
pg_start, pg_len, barebox_base, uncompressed_len);
@@ -79,9 +90,13 @@ void __noreturn barebox_pbl_start(unsigned long membase, unsigned long memsize,
panic("Failed to relocate ELF: %d\n", ret);
/*
- * TODO: Add pbl_mmu_setup_from_elf() call when RISC-V PBL
- * MMU support is implemented, similar to ARM
+ * Now that the ELF image is relocated, we know the exact addresses
+ * of all segments. Set up MMU with proper permissions based on
+ * ELF segment flags (PF_R/W/X).
*/
+ ret = pbl_mmu_setup_from_elf(&elf, membase, memsize);
+ if (ret)
+ panic("Failed to setup memory protection from ELF: %d\n", ret);
barebox = (void *)(unsigned long)elf.entry;
diff --git a/arch/riscv/cpu/Makefile b/arch/riscv/cpu/Makefile
index d79bafc6f1..6bf31b574c 100644
--- a/arch/riscv/cpu/Makefile
+++ b/arch/riscv/cpu/Makefile
@@ -7,3 +7,4 @@ obj-pbl-$(CONFIG_RISCV_M_MODE) += mtrap.o
obj-pbl-$(CONFIG_RISCV_S_MODE) += strap.o
obj-pbl-y += interrupts.o
endif
+obj-pbl-$(CONFIG_MMU) += mmu.o
diff --git a/arch/riscv/cpu/mmu.c b/arch/riscv/cpu/mmu.c
new file mode 100644
index 0000000000..bafd597b69
--- /dev/null
+++ b/arch/riscv/cpu/mmu.c
@@ -0,0 +1,354 @@
+// SPDX-License-Identifier: GPL-2.0-only
+// SPDX-FileCopyrightText: 2026 Sascha Hauer <s.hauer@pengutronix.de>, Pengutronix
+
+#define pr_fmt(fmt) "mmu: " fmt
+
+#include <common.h>
+#include <init.h>
+#include <mmu.h>
+#include <errno.h>
+#include <linux/sizes.h>
+#include <linux/bitops.h>
+#include <asm/sections.h>
+#include <asm/csr.h>
+
+#include "mmu.h"
+
+/*
+ * Page table storage for early MMU setup in PBL.
+ * Static allocation before BSS is available.
+ */
+static char early_pt_storage[RISCV_EARLY_PAGETABLE_SIZE] __aligned(RISCV_PGSIZE);
+static unsigned int early_pt_idx;
+
+/*
+ * Allocate a page table from the early PBL storage
+ */
+static pte_t *alloc_pte(void)
+{
+ pte_t *pt;
+
+ if ((early_pt_idx + 1) * RISCV_PGSIZE >= RISCV_EARLY_PAGETABLE_SIZE) {
+ pr_err("Out of early page table memory (need more than %d KB)\n",
+ RISCV_EARLY_PAGETABLE_SIZE / 1024);
+ hang();
+ }
+
+ pt = (pte_t *)(early_pt_storage + early_pt_idx * RISCV_PGSIZE);
+ early_pt_idx++;
+
+ /* Clear the page table */
+ memset(pt, 0, RISCV_PGSIZE);
+
+ return pt;
+}
+
+/*
+ * split_pte - Split a megapage/gigapage PTE into a page table
+ * @pte: Pointer to the PTE to split
+ * @level: Current page table level (0-2 for Sv39)
+ *
+ * This function takes a leaf PTE (megapage/gigapage) and converts it into
+ * a page table pointer with 512 entries, each covering 1/512th of the
+ * original range with identical permissions.
+ *
+ * Example: A 2MB megapage at Level 1 becomes a Level 2 page table with
+ * 512 × 4KB pages, all with the same R/W/X attributes.
+ */
+static void split_pte(pte_t *pte, int level)
+{
+ pte_t old_pte = *pte;
+ pte_t *new_table;
+ pte_t phys_base;
+ pte_t attrs;
+ unsigned long granularity;
+ int i;
+
+ /* If already a table pointer (no RWX bits), nothing to do */
+ if (!(*pte & (PTE_R | PTE_W | PTE_X)))
+ return;
+
+ /* Allocate new page table (512 entries × 8 bytes = 4KB) */
+ new_table = alloc_pte();
+
+ /* Extract physical base address from old PTE */
+ phys_base = (old_pte >> PTE_PPN_SHIFT) << RISCV_PGSHIFT;
+
+ /* Extract permission attributes to replicate */
+ attrs = old_pte & (PTE_R | PTE_W | PTE_X | PTE_A | PTE_D | PTE_U | PTE_G);
+
+ /*
+ * Calculate granularity of child level.
+ * Level 0 (1GB) → Level 1 (2MB): granularity = 2MB = 1 << 21
+ * Level 1 (2MB) → Level 2 (4KB): granularity = 4KB = 1 << 12
+ *
+ * Formula: granularity = 1 << (12 + 9 * (Levels - 2 - level))
+ * For Sv39 (3 levels):
+ * level=0: 1 << (12 + 9*1) = 2MB
+ * level=1: 1 << (12 + 9*0) = 4KB
+ */
+ granularity = 1UL << (RISCV_PGSHIFT + RISCV_PGLEVEL_BITS *
+ (RISCV_PGTABLE_LEVELS - 2 - level));
+
+ /* Populate new table: replicate old mapping across 512 entries */
+ for (i = 0; i < RISCV_PTE_ENTRIES; i++) {
+ unsigned long new_phys = phys_base + (i * granularity);
+ pte_t new_pte = ((new_phys >> RISCV_PGSHIFT) << PTE_PPN_SHIFT) |
+ attrs | PTE_V;
+ new_table[i] = new_pte;
+ }
+
+ /*
+ * Replace old leaf PTE with table pointer.
+ * No RWX bits = pointer to next level.
+ */
+ *pte = (((unsigned long)new_table >> RISCV_PGSHIFT) << PTE_PPN_SHIFT) | PTE_V;
+
+ pr_debug("Split level %d PTE at phys=0x%llx (granularity=%lu KB)\n",
+ level, (unsigned long long)phys_base, granularity / 1024);
+}
+
+/*
+ * Get the root page table base
+ */
+static pte_t *get_ttb(void)
+{
+ return (pte_t *)early_pt_storage;
+}
+
+/*
+ * Convert maptype flags to PTE permission bits
+ */
+static unsigned long flags_to_pte(maptype_t flags)
+{
+ unsigned long pte = PTE_V; /* Valid bit always set */
+
+ /*
+ * Map barebox memory types to RISC-V PTE flags:
+ * - ARCH_MAP_CACHED_RWX: read + write + execute (early boot, full RAM access)
+ * - MAP_CODE: read + execute (text sections)
+ * - MAP_CACHED_RO: read only (rodata sections)
+ * - MAP_CACHED: read + write (data/bss sections)
+ * - MAP_UNCACHED: read + write, uncached (device memory)
+ */
+ switch (flags & MAP_TYPE_MASK) {
+ case ARCH_MAP_CACHED_RWX:
+ /* Full access for early boot: R + W + X */
+ pte |= PTE_R | PTE_W | PTE_X;
+ break;
+ case MAP_CACHED_RO:
+ /* Read-only data: R, no W, no X */
+ pte |= PTE_R;
+ break;
+ case MAP_CODE:
+ /* Code: R + X, no W */
+ pte |= PTE_R | PTE_X;
+ break;
+ case MAP_CACHED: /* TODO: implement */
+ case MAP_UNCACHED:
+ default:
+ /* Data or uncached: R + W, no X */
+ pte |= PTE_R | PTE_W;
+ break;
+ }
+
+ /* Set accessed and dirty bits to avoid hardware updates */
+ pte |= PTE_A | PTE_D;
+
+ return pte;
+}
+
+/*
+ * Walk page tables and get/create PTE for given address at specified level
+ */
+static pte_t *walk_pgtable(unsigned long addr, int target_level)
+{
+ pte_t *table = get_ttb();
+ int level;
+
+ for (level = 0; level < target_level; level++) {
+ unsigned int index = VPN(addr, RISCV_PGTABLE_LEVELS - 1 - level);
+ pte_t *pte = &table[index];
+
+ if (!(*pte & PTE_V)) {
+ /* Entry not valid - allocate new page table */
+ pte_t *new_table = alloc_pte();
+ pte_t new_pte = ((unsigned long)new_table >> RISCV_PGSHIFT) << PTE_PPN_SHIFT;
+ new_pte |= PTE_V;
+ *pte = new_pte;
+ table = new_table;
+ } else if (*pte & (PTE_R | PTE_W | PTE_X)) {
+ /* This is a leaf PTE - split it before descending */
+ split_pte(pte, level);
+ /* After split, PTE is now a table pointer - follow it */
+ table = (pte_t *)(((*pte >> PTE_PPN_SHIFT) << RISCV_PGSHIFT));
+ } else {
+ /* Valid non-leaf PTE - follow to next level */
+ table = (pte_t *)(((*pte >> PTE_PPN_SHIFT) << RISCV_PGSHIFT));
+ }
+ }
+
+ return table;
+}
+
+/*
+ * Create a page table entry mapping virt -> phys with given permissions
+ */
+static void create_pte(unsigned long virt, phys_addr_t phys, maptype_t flags)
+{
+ pte_t *table;
+ unsigned int index;
+ pte_t pte;
+
+ /* Walk to leaf level page table */
+ table = walk_pgtable(virt, RISCV_PGTABLE_LEVELS - 1);
+
+ /* Get index for this address at leaf level */
+ index = VPN(virt, 0);
+
+ /* Build PTE: PPN + flags */
+ pte = (phys >> RISCV_PGSHIFT) << PTE_PPN_SHIFT;
+ pte |= flags_to_pte(flags);
+
+ /* Write PTE */
+ table[index] = pte;
+}
+
+/*
+ * create_megapage - Create a 2MB megapage mapping
+ * @virt: Virtual address (should be 2MB-aligned)
+ * @phys: Physical address (should be 2MB-aligned)
+ * @flags: Mapping flags (MAP_CACHED, etc.)
+ *
+ * Creates a leaf PTE at Level 1 covering 2MB. This is identical to a 4KB
+ * PTE except it's placed at Level 1 instead of Level 2, saving page tables.
+ */
+static void create_megapage(unsigned long virt, phys_addr_t phys, maptype_t flags)
+{
+ pte_t *table;
+ unsigned int index;
+ pte_t pte;
+
+ /* Walk to Level 1 (one level above 4KB leaf) */
+ table = walk_pgtable(virt, RISCV_PGTABLE_LEVELS - 2);
+
+ /* Get VPN[1] index for this address at Level 1 */
+ index = VPN(virt, 1);
+
+ /* Build leaf PTE at Level 1: PPN + RWX flags make it a megapage */
+ pte = (phys >> RISCV_PGSHIFT) << PTE_PPN_SHIFT;
+ pte |= flags_to_pte(flags);
+
+ /* Write megapage PTE */
+ table[index] = pte;
+}
+
+/*
+ * mmu_early_enable - Set up initial MMU with identity mapping
+ *
+ * Called before barebox decompression to enable caching for faster decompression.
+ * Creates a simple identity map of all RAM with RWX permissions.
+ */
+void mmu_early_enable(unsigned long membase, unsigned long memsize,
+ unsigned long barebox_base)
+{
+ unsigned long addr;
+ unsigned long end = membase + memsize;
+ unsigned long satp;
+
+ pr_debug("Enabling MMU: mem=0x%08lx-0x%08lx barebox=0x%08lx\n",
+ membase, end, barebox_base);
+
+ /* Reset page table allocator */
+ early_pt_idx = 0;
+
+ /* Allocate root page table */
+ (void)alloc_pte();
+
+ pr_debug("Creating flat identity mapping...\n");
+
+ /*
+ * Create a flat identity mapping of the lower address space as uncached.
+ * This ensures I/O devices (UART, etc.) are accessible after MMU is enabled.
+ * RV64: Map lower 4GB using 2MB megapages (2048 entries).
+ * RV32: Map entire 4GB using 4MB superpages (1024 entries in root table).
+ */
+ addr = 0;
+ do {
+ create_megapage(addr, addr, MAP_UNCACHED);
+ addr += RISCV_L1_SIZE;
+ } while (lower_32_bits(addr) != 0); /* Wraps around to 0 after 0xFFFFFFFF */
+
+ /*
+ * Remap RAM as cached with RWX permissions using superpages.
+ * This overwrites the uncached mappings for RAM regions, providing
+ * better performance. Later, pbl_mmu_setup_from_elf() will split
+ * superpages as needed to set fine-grained permissions based on ELF segments.
+ */
+ pr_debug("Remapping RAM 0x%08lx-0x%08lx as cached RWX...\n", membase, end);
+ for (addr = membase; addr < end; addr += RISCV_L1_SIZE)
+ create_megapage(addr, addr, ARCH_MAP_CACHED_RWX);
+
+ pr_debug("Page table setup complete, used %lu KB\n",
+ (early_pt_idx * RISCV_PGSIZE) / 1024);
+
+ /*
+ * Enable MMU by setting SATP CSR:
+ * - MODE field: Sv39 (RV64) or Sv32 (RV32)
+ * - ASID: 0 (no address space ID)
+ * - PPN: physical address of root page table
+ */
+ satp = SATP_MODE | (((unsigned long)get_ttb() >> RISCV_PGSHIFT) & SATP_PPN_MASK);
+
+ pr_debug("Enabling MMU: SATP=0x%08lx\n", satp);
+
+ /* Synchronize before enabling MMU */
+ sfence_vma();
+
+ /* Enable MMU */
+ csr_write(satp, satp);
+
+ /* Synchronize after enabling MMU */
+ sfence_vma();
+
+ pr_debug("MMU enabled with %lu %spages for RAM\n",
+ (memsize / RISCV_L1_SIZE),
+ IS_ENABLED(CONFIG_64BIT) ? "2MB mega" : "4MB super");
+}
+
+/*
+ * arch_remap_range - Remap a virtual address range (barebox proper)
+ *
+ * This is the non-PBL version used in barebox proper after full relocation.
+ * Currently provides basic remapping support. For full MMU management in
+ * barebox proper, this would need to be extended with:
+ * - Dynamic page table allocation
+ * - Cache flushing for non-cached mappings
+ * - TLB management
+ * - Support for MAP_FAULT (guard pages)
+ */
+int arch_remap_range(void *virt, phys_addr_t phys, size_t size,
+ maptype_t map_type)
+{
+ unsigned long addr = (unsigned long)virt;
+ unsigned long end = addr + size;
+
+ pr_debug("Remapping 0x%p-0x%08lx -> 0x%pap (flags=0x%x)\n",
+ virt, end, &phys, map_type);
+
+ /* Align to page boundaries */
+ addr &= ~(RISCV_PGSIZE - 1);
+ end = ALIGN(end, RISCV_PGSIZE);
+
+ /* Create page table entries for each page in the range */
+ while (addr < end) {
+ create_pte(addr, phys, map_type);
+ addr += RISCV_PGSIZE;
+ phys += RISCV_PGSIZE;
+ }
+
+ /* Flush TLB for the remapped range */
+ sfence_vma();
+
+ return 0;
+}
diff --git a/arch/riscv/cpu/mmu.h b/arch/riscv/cpu/mmu.h
new file mode 100644
index 0000000000..0222c97fc1
--- /dev/null
+++ b/arch/riscv/cpu/mmu.h
@@ -0,0 +1,120 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* SPDX-FileCopyrightText: 2026 Sascha Hauer <s.hauer@pengutronix.de>, Pengutronix */
+
+#ifndef __RISCV_CPU_MMU_H
+#define __RISCV_CPU_MMU_H
+
+#include <linux/types.h>
+
+/*
+ * RISC-V MMU constants for Sv39 (RV64) and Sv32 (RV32) page tables
+ */
+
+/* Page table configuration */
+#define RISCV_PGSHIFT 12
+#define RISCV_PGSIZE (1UL << RISCV_PGSHIFT) /* 4KB */
+
+#ifdef CONFIG_64BIT
+/* Sv39: 9-bit VPN fields, 512 entries per table */
+#define RISCV_PGLEVEL_BITS 9
+#define RISCV_PGTABLE_ENTRIES 512
+#else
+/* Sv32: 10-bit VPN fields, 1024 entries per table */
+#define RISCV_PGLEVEL_BITS 10
+#define RISCV_PGTABLE_ENTRIES 1024
+#endif
+
+/* Page table entry (PTE) bit definitions */
+#define PTE_V (1UL << 0) /* Valid */
+#define PTE_R (1UL << 1) /* Read */
+#define PTE_W (1UL << 2) /* Write */
+#define PTE_X (1UL << 3) /* Execute */
+#define PTE_U (1UL << 4) /* User accessible */
+#define PTE_G (1UL << 5) /* Global mapping */
+#define PTE_A (1UL << 6) /* Accessed */
+#define PTE_D (1UL << 7) /* Dirty */
+#define PTE_RSW_MASK (3UL << 8) /* Reserved for software */
+
+/* PTE physical page number (PPN) field position */
+#define PTE_PPN_SHIFT 10
+
+#ifdef CONFIG_64BIT
+/*
+ * Sv39: 39-bit virtual addressing, 3-level page tables
+ * Virtual address format: [38:30] VPN[2], [29:21] VPN[1], [20:12] VPN[0], [11:0] offset
+ */
+#define RISCV_PGTABLE_LEVELS 3
+#define VA_BITS 39
+#else
+/*
+ * Sv32: 32-bit virtual addressing, 2-level page tables
+ * Virtual address format: [31:22] VPN[1], [21:12] VPN[0], [11:0] offset
+ */
+#define RISCV_PGTABLE_LEVELS 2
+#define VA_BITS 32
+#endif
+
+/* SATP register fields */
+#ifdef CONFIG_64BIT
+#define SATP_PPN_MASK ((1ULL << 44) - 1) /* Physical page number (Sv39) */
+#else
+#define SATP_PPN_MASK ((1UL << 22) - 1) /* Physical page number (Sv32) */
+#endif
+
+/* Extract VPN (Virtual Page Number) from virtual address */
+#define VPN_MASK ((1UL << RISCV_PGLEVEL_BITS) - 1)
+#define VPN(addr, level) (((addr) >> (RISCV_PGSHIFT + (level) * RISCV_PGLEVEL_BITS)) & VPN_MASK)
+
+/* RISC-V page sizes by level */
+#ifdef CONFIG_64BIT
+/* Sv39: 3-level page tables */
+#define RISCV_L2_SHIFT 30 /* 1GB gigapages (Level 0 in Sv39) */
+#define RISCV_L1_SHIFT 21 /* 2MB megapages (Level 1 in Sv39) */
+#define RISCV_L0_SHIFT 12 /* 4KB pages (Level 2 in Sv39) */
+#else
+/* Sv32: 2-level page tables */
+#define RISCV_L1_SHIFT 22 /* 4MB superpages (Level 0 in Sv32) */
+#define RISCV_L0_SHIFT 12 /* 4KB pages (Level 1 in Sv32) */
+#endif
+
+#ifdef CONFIG_64BIT
+#define RISCV_L2_SIZE (1UL << RISCV_L2_SHIFT) /* 1GB (RV64 only) */
+#endif
+#define RISCV_L1_SIZE (1UL << RISCV_L1_SHIFT) /* 2MB (RV64) or 4MB (RV32) */
+#define RISCV_L0_SIZE (1UL << RISCV_L0_SHIFT) /* 4KB */
+
+/* Number of entries per page table (use RISCV_PGTABLE_ENTRIES instead) */
+#define RISCV_PTE_ENTRIES RISCV_PGTABLE_ENTRIES
+
+/* PTE type - 64-bit on RV64, 32-bit on RV32 */
+#ifdef CONFIG_64BIT
+typedef uint64_t pte_t;
+#else
+typedef uint32_t pte_t;
+#endif
+
+/* Early page table allocation size (PBL) */
+#ifdef CONFIG_64BIT
+/* Sv39: 3 levels, allocate space for root + worst case intermediate tables */
+#define RISCV_EARLY_PAGETABLE_SIZE (64 * 1024) /* 64KB */
+#else
+/* Sv32: 2 levels, smaller allocation */
+#define RISCV_EARLY_PAGETABLE_SIZE (32 * 1024) /* 32KB */
+#endif
+
+#ifndef __ASSEMBLY__
+
+/* SFENCE.VMA - Synchronize updates to page tables */
+static inline void sfence_vma(void)
+{
+ __asm__ __volatile__ ("sfence.vma" : : : "memory");
+}
+
+static inline void sfence_vma_addr(unsigned long addr)
+{
+ __asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory");
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __RISCV_CPU_MMU_H */
diff --git a/arch/riscv/include/asm/asm.h b/arch/riscv/include/asm/asm.h
index 9c992a88d8..23f60b615e 100644
--- a/arch/riscv/include/asm/asm.h
+++ b/arch/riscv/include/asm/asm.h
@@ -9,7 +9,8 @@
#ifdef __ASSEMBLY__
#define __ASM_STR(x) x
#else
-#define __ASM_STR(x) #x
+#define __ASM_STR_HELPER(x) #x
+#define __ASM_STR(x) __ASM_STR_HELPER(x)
#endif
#if __riscv_xlen == 64
diff --git a/arch/riscv/include/asm/mmu.h b/arch/riscv/include/asm/mmu.h
index 1c2646ebb3..f487b9c700 100644
--- a/arch/riscv/include/asm/mmu.h
+++ b/arch/riscv/include/asm/mmu.h
@@ -3,6 +3,42 @@
#ifndef __ASM_MMU_H
#define __ASM_MMU_H
+#include <linux/types.h>
+
+/*
+ * RISC-V supports memory protection through two mechanisms:
+ * - S-mode: Virtual memory with page tables (MMU)
+ * - M-mode: Physical Memory Protection (PMP) regions
+ */
+
+#ifdef CONFIG_MMU
+#define ARCH_HAS_REMAP
+#define MAP_ARCH_DEFAULT MAP_CACHED
+
+/* Architecture-specific memory type flags */
+#define ARCH_MAP_CACHED_RWX MAP_ARCH(2) /* Cached, RWX (early boot) */
+#define ARCH_MAP_FLAG_PAGEWISE (1 << 16) /* Force page-wise mapping */
+
+/*
+ * Early MMU/PMP setup - called before decompression for performance.
+ * S-mode: Sets up basic page tables and enables MMU via SATP CSR.
+ * M-mode: Configures initial PMP regions.
+ */
+void mmu_early_enable(unsigned long membase, unsigned long memsize,
+ unsigned long barebox_base);
+
+/*
+ * Remap a virtual address range with specified memory type (barebox proper).
+ * Used by the generic remap infrastructure after barebox is fully relocated.
+ * Implementation is in arch/riscv/cpu/mmu.c (S-mode) or pmp.c (M-mode).
+ */
+int arch_remap_range(void *virt_addr, phys_addr_t phys_addr, size_t size,
+ maptype_t map_type);
+
+#else
#define MAP_ARCH_DEFAULT MAP_UNCACHED
+#endif
+
+#include <mmu.h>
#endif /* __ASM_MMU_H */
diff --git a/common/boards/configs/enable_mmu.config b/common/boards/configs/enable_mmu.config
new file mode 100644
index 0000000000..3dec296304
--- /dev/null
+++ b/common/boards/configs/enable_mmu.config
@@ -0,0 +1 @@
+CONFIG_MMU=y
--
2.47.3
^ permalink raw reply [flat|nested] 24+ messages in thread* Re: [PATCH v5 00/22] PBL: Add PBL ELF loading support with dynamic relocations
2026-01-16 8:12 [PATCH v5 00/22] PBL: Add PBL ELF loading support with dynamic relocations Sascha Hauer
` (21 preceding siblings ...)
2026-01-16 8:12 ` [PATCH v5 22/22] riscv: add ELF segment-based memory protection with MMU Sascha Hauer
@ 2026-01-19 12:23 ` Sascha Hauer
22 siblings, 0 replies; 24+ messages in thread
From: Sascha Hauer @ 2026-01-19 12:23 UTC (permalink / raw)
To: BAREBOX, Sascha Hauer; +Cc: Claude Sonnet 4.5, Ahmad Fatoum
On Fri, 16 Jan 2026 09:12:11 +0100, Sascha Hauer wrote:
> Until now we linked the raw barebox proper binary into the PBL, which
> comes with a number of disadvantages. We rely on self-modifying code
> in barebox proper (relocate_to_current_adr()) and have no initialized
> bss segment (setup_c()). Also, we can only mark .text and .rodata as
> read-only at runtime of barebox proper.
>
> This series overcomes this by linking an ELF image into the PBL. This
> image is properly laid out, linked and initialized in the PBL. With
> this, barebox proper has a proper C environment and text/rodata
> protection from the start.
>
> [...]
Applied, thanks!
[01/22] Makefile.compiler: add objcopy-option
https://git.pengutronix.de/cgit/barebox/commit/?id=19a380e98624 (link may not be stable)
[02/22] elf: only accept images matching the native ELF_CLASS
https://git.pengutronix.de/cgit/barebox/commit/?id=b53168361c5a (link may not be stable)
[03/22] elf: build for PBL as well
https://git.pengutronix.de/cgit/barebox/commit/?id=027452b87aa8 (link may not be stable)
[04/22] elf: add elf segment iterator
https://git.pengutronix.de/cgit/barebox/commit/?id=ebb729561f1c (link may not be stable)
[05/22] elf: add dynamic relocation support
https://git.pengutronix.de/cgit/barebox/commit/?id=fbb6a6eec9c9 (link may not be stable)
[06/22] ARM: implement elf_apply_relocations() for ELF relocation support
https://git.pengutronix.de/cgit/barebox/commit/?id=487027fd354d (link may not be stable)
[07/22] riscv: define generic relocate_image
https://git.pengutronix.de/cgit/barebox/commit/?id=89d48cf83187 (link may not be stable)
[08/22] riscv: implement elf_apply_relocations() for ELF relocation support
https://git.pengutronix.de/cgit/barebox/commit/?id=4a68784a2ce9 (link may not be stable)
[09/22] elf: implement elf_load_inplace()
https://git.pengutronix.de/cgit/barebox/commit/?id=ee11f1624d20 (link may not be stable)
[10/22] elf: create elf_open_binary_into()
https://git.pengutronix.de/cgit/barebox/commit/?id=6610142f48bf (link may not be stable)
[11/22] Makefile: add vmbarebox build target
https://git.pengutronix.de/cgit/barebox/commit/?id=2337f2c3d413 (link may not be stable)
[12/22] PBL: allow to link ELF image into PBL
https://git.pengutronix.de/cgit/barebox/commit/?id=008191cfab24 (link may not be stable)
[13/22] mmu: add MAP_CACHED_RO mapping type
https://git.pengutronix.de/cgit/barebox/commit/?id=f28e767645dc (link may not be stable)
[14/22] ARM: drop arm_fixup_vectors()
https://git.pengutronix.de/cgit/barebox/commit/?id=aeffa0386702 (link may not be stable)
[15/22] ARM: linker script: create separate PT_LOAD segments for text, rodata, and data
https://git.pengutronix.de/cgit/barebox/commit/?id=93120ce68ced (link may not be stable)
[16/22] ARM: link ELF image into PBL
https://git.pengutronix.de/cgit/barebox/commit/?id=dbbca16895f5 (link may not be stable)
[17/22] ARM: cleanup barebox proper entry
https://git.pengutronix.de/cgit/barebox/commit/?id=e06e94ae2e3c (link may not be stable)
[18/22] ARM: PBL: setup MMU with proper permissions from ELF segments
https://git.pengutronix.de/cgit/barebox/commit/?id=6cb7fc2edbb6 (link may not be stable)
[19/22] riscv: linker script: create separate PT_LOAD segments for text, rodata, and data
https://git.pengutronix.de/cgit/barebox/commit/?id=bfd483b0995f (link may not be stable)
[20/22] riscv: link ELF image into PBL
https://git.pengutronix.de/cgit/barebox/commit/?id=ec8b2ce7c2ac (link may not be stable)
[21/22] riscv: Allwinner D1: Drop M-Mode
https://git.pengutronix.de/cgit/barebox/commit/?id=29881d661727 (link may not be stable)
[22/22] riscv: add ELF segment-based memory protection with MMU
https://git.pengutronix.de/cgit/barebox/commit/?id=384faa803e6b (link may not be stable)
Best regards,
--
Sascha Hauer <s.hauer@pengutronix.de>
^ permalink raw reply [flat|nested] 24+ messages in thread