mail archive of the barebox mailing list
* [PATCH v2 0/7] tlsf: use 8-byte alignment for normal malloc allocations
@ 2023-09-11 15:24 Ahmad Fatoum
  2023-09-11 15:24 ` [PATCH v2 1/7] tlsf: turn static const variables into compiletime constant expressions Ahmad Fatoum
                   ` (7 more replies)
  0 siblings, 8 replies; 9+ messages in thread
From: Ahmad Fatoum @ 2023-09-11 15:24 UTC (permalink / raw)
  To: barebox

TLSF currently uses only 4-byte alignment on 32-bit platforms, which isn't
enough for ldrd/strd on ARMv7. This series reworks TLSF a bit, so we always
have at least 8 byte alignment.  dlmalloc already has 8 byte alignment
minimum, so nothing to do there.

This has the added benefit of giving TLSF the same alignment as KASAN,
which can make debugging easier.

v1 didn't actually manage to boot on an i.MX6, which v2 fixes. This
also boots normally on i.MX8M. I suggest this going into next after
v2023.09.0.

v1 -> v2:
  - drop switch of test_ffs_fls to bselftest. This function should just
    be replaced with barebox' own implementation in future
  - keep block size a size_t and just ensure the block metadata is
    correctly aligned.

Ahmad Fatoum (7):
  tlsf: turn static const variables into compiletime constant
    expressions
  tlsf: ensure malloc pool is aligned
  tlsf: fix sizeof(size_t) == sizeof(void *) assumption
  tlsf: give malloc 8-byte alignment on 32-bit as well
  common: malloc: ensure alignment is always at least 8 byte
  test: self: refactor to allow alignment check
  test: self: malloc: fix memory leaks

 common/Kconfig         |  5 +++
 common/dlmalloc.c      |  3 ++
 common/dummy_malloc.c  |  2 +-
 common/tlsf.c          | 43 ++++++++++---------
 include/linux/bitops.h |  1 +
 test/self/malloc.c     | 96 ++++++++++++++++++++++++++++++------------
 6 files changed, 102 insertions(+), 48 deletions(-)

-- 
2.39.2





* [PATCH v2 1/7] tlsf: turn static const variables into compiletime constant expressions
  2023-09-11 15:24 [PATCH v2 0/7] tlsf: use 8-byte alignment for normal malloc allocations Ahmad Fatoum
@ 2023-09-11 15:24 ` Ahmad Fatoum
  2023-09-11 15:24 ` [PATCH v2 2/7] tlsf: ensure malloc pool is aligned Ahmad Fatoum
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Ahmad Fatoum @ 2023-09-11 15:24 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum

static const is not a compiletime constant expression in C, unlike in
C++. Treating the two the same just restricts where we can use the
constants, e.g. they are not appropriate for static_assert.

Turn them into proper macros to fix this. To keep the code easier to
sync with other TLSF implementations, we maintain the same lowercase
naming, despite it being at odds with the general kernel coding style.
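
As a rough illustration (not part of this patch; the names below are
made up), the difference looks like this:

  #include <assert.h>
  #include <stddef.h>

  static const size_t free_bit_var = 1 << 0; /* not a constant expression in C */
  #define free_bit_def              (1 << 0) /* usable wherever a constant is required */

  /* static_assert(free_bit_var == 1, "");     rejected by C compilers */
  static_assert(free_bit_def == 1, "macros work in constant contexts");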

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
---
 common/tlsf.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/common/tlsf.c b/common/tlsf.c
index 3ca58e3abbfb..3673b424a4c5 100644
--- a/common/tlsf.c
+++ b/common/tlsf.c
@@ -134,27 +134,25 @@ typedef struct block_header_t
 ** - bit 0: whether block is busy or free
 ** - bit 1: whether previous block is busy or free
 */
-static const size_t block_header_free_bit = 1 << 0;
-static const size_t block_header_prev_free_bit = 1 << 1;
+#define block_header_free_bit		(1 << 0)
+#define block_header_prev_free_bit	(1 << 1)
 
 /*
 ** The size of the block header exposed to used blocks is the size field.
 ** The prev_phys_block field is stored *inside* the previous free block.
 */
-static const size_t block_header_overhead = sizeof(size_t);
+#define block_header_overhead		sizeof(size_t)
 
 /* User data starts directly after the size field in a used block. */
-static const size_t block_start_offset =
-	offsetof(block_header_t, size) + sizeof(size_t);
+#define block_start_offset		(offsetof(block_header_t, size) + sizeof(size_t))
 
 /*
 ** A free block must be large enough to store its header minus the size of
 ** the prev_phys_block field, and no larger than the number of addressable
 ** bits for FL_INDEX.
 */
-static const size_t block_size_min =
-	sizeof(block_header_t) - sizeof(block_header_t*);
-static const size_t block_size_max = tlsf_cast(size_t, 1) << FL_INDEX_MAX;
+#define block_size_min			(sizeof(block_header_t) - sizeof(block_header_t*))
+#define block_size_max			(tlsf_cast(size_t, 1) << FL_INDEX_MAX)
 
 
 /* The TLSF control structure. */
-- 
2.39.2





* [PATCH v2 2/7] tlsf: ensure malloc pool is aligned
  2023-09-11 15:24 [PATCH v2 0/7] tlsf: use 8-byte alignment for normal malloc allocations Ahmad Fatoum
  2023-09-11 15:24 ` [PATCH v2 1/7] tlsf: turn static const variables into compiletime constant expressions Ahmad Fatoum
@ 2023-09-11 15:24 ` Ahmad Fatoum
  2023-09-11 15:24 ` [PATCH v2 3/7] tlsf: fix sizeof(size_t) == sizeof(void *) assumption Ahmad Fatoum
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Ahmad Fatoum @ 2023-09-11 15:24 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum

The struct control_t describing a pool is allocated at the pool's very
start and directly followed by the first block. To ensure the first
block is suitably aligned, align_up the size returned by tlsf_size().
So far, sizeof(control_t) happened to be suitably aligned on both
32-bit and 64-bit, so this introduces no functional change just yet.
With the upcoming changes to the block header that increase alignment
on 32-bit systems, this realignment will become required.
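
For reference, align_up rounds its argument up to the next multiple of
the requested alignment; a minimal sketch of the idea (not the exact
helper used in common/tlsf.c):

  #include <stddef.h>

  /* Round x up to the next multiple of align; align must be a power of two. */
  static inline size_t align_up_sketch(size_t x, size_t align)
  {
          return (x + (align - 1)) & ~(align - 1);
  }

tlsf_size() then reports the padded control_t size, so the first block
placed directly behind it starts on an ALIGN_SIZE boundary.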

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
---
 common/tlsf.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/common/tlsf.c b/common/tlsf.c
index 3673b424a4c5..784012a1f7ab 100644
--- a/common/tlsf.c
+++ b/common/tlsf.c
@@ -763,11 +763,11 @@ int tlsf_check_pool(pool_t pool)
 
 /*
 ** Size of the TLSF structures in a given memory block passed to
-** tlsf_create, equal to the size of a control_t
+** tlsf_create, equal to aligned size of a control_t
 */
 size_t tlsf_size(void)
 {
-	return sizeof(control_t);
+	return align_up(sizeof(control_t), ALIGN_SIZE);
 }
 
 size_t tlsf_align_size(void)
-- 
2.39.2





* [PATCH v2 3/7] tlsf: fix sizeof(size_t) == sizeof(void *) assumption
  2023-09-11 15:24 [PATCH v2 0/7] tlsf: use 8-byte alignment for normal malloc allocations Ahmad Fatoum
  2023-09-11 15:24 ` [PATCH v2 1/7] tlsf: turn static const variables into compiletime constant expressions Ahmad Fatoum
  2023-09-11 15:24 ` [PATCH v2 2/7] tlsf: ensure malloc pool is aligned Ahmad Fatoum
@ 2023-09-11 15:24 ` Ahmad Fatoum
  2023-09-11 15:24 ` [PATCH v2 4/7] tlsf: give malloc 8-byte alignment on 32-bit as well Ahmad Fatoum
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Ahmad Fatoum @ 2023-09-11 15:24 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum

TLSF struct block_header_t doesn't describe a single block, but
instead its first member covers the previous block:

                              .~~~~~~~~~~~~~~~~~~~.
                              |  prev_phys_block  |
    End of previous block --> |———————————————————| <-- Start of a free block
                              |       size        |
                              |— — — — — — — — — —|
                              | < Start of Data > |
                              '———————————————————'

This works because if the previous block is free, there is no harm in
using its last word to store the prev_phys_block.

We thus need pointer arithmetic to:

  - arrive from start of data to size, i.e. decrement offset
    by sizeof(size_t)

  - arrive from size to prev_phys_block, i.e. decrement offset
    by sizeof(struct block_header_t *)

Across the TLSF implementation, we conflate the two though and use
block_header_overhead to mean both. This works as long as
sizeof(size_t) == sizeof(struct block_header_t *), which is true for
both the 32-bit and 64-bit configurations currently.

To facilitate having an 8-byte minimum allocation alignment for 32-bit
systems as well, we will pad struct block_header_t's size field to
8 bytes, which will break the implicit assumption. Fix it by adding an
additional block_header_shift constant and using it where appropriate.

No functional change just yet.
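
A stripped-down sketch of the two constants (illustrative only; just
the first two fields of the real block_header_t are shown):

  #include <stddef.h>

  typedef struct sketch_block_header {
          struct sketch_block_header *prev_phys_block;
          size_t size;
          /* free-list pointers omitted */
  } sketch_block_header;

  /* from the size field back to prev_phys_block: one pointer */
  #define sketch_header_shift     offsetof(sketch_block_header, size)
  /* from the start of data back to the size field: one size_t */
  #define sketch_header_overhead  sizeof(size_t)

The two values only coincide while sizeof(size_t) == sizeof(void *).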

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
---
 common/tlsf.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/common/tlsf.c b/common/tlsf.c
index 784012a1f7ab..0986c7c457e3 100644
--- a/common/tlsf.c
+++ b/common/tlsf.c
@@ -141,10 +141,11 @@ typedef struct block_header_t
 ** The size of the block header exposed to used blocks is the size field.
 ** The prev_phys_block field is stored *inside* the previous free block.
 */
+#define block_header_shift		offsetof(block_header_t, size)
 #define block_header_overhead		sizeof(size_t)
 
 /* User data starts directly after the size field in a used block. */
-#define block_start_offset		(offsetof(block_header_t, size) + sizeof(size_t))
+#define block_start_offset		(block_header_shift + block_header_overhead)
 
 /*
 ** A free block must be large enough to store its header minus the size of
@@ -251,7 +252,7 @@ static block_header_t* block_prev(const block_header_t* block)
 static block_header_t* block_next(const block_header_t* block)
 {
 	block_header_t* next = offset_to_block(block_to_ptr(block),
-		block_size(block) - block_header_overhead);
+		block_size(block) - block_header_shift);
 	tlsf_assert(!block_is_last(block));
 	return next;
 }
@@ -462,7 +463,7 @@ static block_header_t* block_split(block_header_t* block, size_t size)
 {
 	/* Calculate the amount of space left in the remaining block. */
 	block_header_t* remaining =
-		offset_to_block(block_to_ptr(block), size - block_header_overhead);
+		offset_to_block(block_to_ptr(block), size - block_header_shift);
 
 	const size_t remain_size = block_size(block) - (size + block_header_overhead);
 
@@ -728,7 +729,7 @@ void tlsf_walk_pool(pool_t pool, tlsf_walker walker, void* user)
 {
 	tlsf_walker pool_walker = walker ? walker : default_walker;
 	block_header_t* block =
-		offset_to_block(pool, -(int)block_header_overhead);
+		offset_to_block(pool, -(int)block_header_shift);
 
 	while (block && !block_is_last(block))
 	{
@@ -834,7 +835,7 @@ pool_t tlsf_add_pool(tlsf_t tlsf, void* mem, size_t bytes)
 	** so that the prev_phys_block field falls outside of the pool -
 	** it will never be used.
 	*/
-	block = offset_to_block(mem, -(tlsfptr_t)block_header_overhead);
+	block = offset_to_block(mem, -(tlsfptr_t)block_header_shift);
 	block_set_size(block, pool_bytes);
 	block_set_free(block);
 	block_set_prev_used(block);
@@ -852,7 +853,7 @@ pool_t tlsf_add_pool(tlsf_t tlsf, void* mem, size_t bytes)
 void tlsf_remove_pool(tlsf_t tlsf, pool_t pool)
 {
 	control_t* control = tlsf_cast(control_t*, tlsf);
-	block_header_t* block = offset_to_block(pool, -(int)block_header_overhead);
+	block_header_t* block = offset_to_block(pool, -(int)block_header_shift);
 
 	int fl = 0, sl = 0;
 
@@ -980,7 +981,7 @@ void* tlsf_memalign(tlsf_t tlsf, size_t align, size_t size)
 	block = block_locate_free(control, aligned_size);
 
 	/* This can't be a static assert. */
-	tlsf_assert(sizeof(block_header_t) == block_size_min + block_header_overhead);
+	tlsf_assert(sizeof(block_header_t) == block_size_min + block_header_shift);
 
 	if (block)
 	{
-- 
2.39.2





* [PATCH v2 4/7] tlsf: give malloc 8-byte alignment on 32-bit as well
  2023-09-11 15:24 [PATCH v2 0/7] tlsf: use 8-byte alignment for normal malloc allocations Ahmad Fatoum
                   ` (2 preceding siblings ...)
  2023-09-11 15:24 ` [PATCH v2 3/7] tlsf: fix sizeof(size_t) == sizeof(void *) assumption Ahmad Fatoum
@ 2023-09-11 15:24 ` Ahmad Fatoum
  2023-09-11 15:24 ` [PATCH v2 5/7] common: malloc: ensure alignment is always at least 8 byte Ahmad Fatoum
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Ahmad Fatoum @ 2023-09-11 15:24 UTC (permalink / raw)
  To: barebox; +Cc: Enrico Scholz, Ahmad Fatoum

The current alignment of 4 bytes is too low. Access to 64-bit data via
ldrd/strd requires at least 8-byte alignment:

  | Prior to ARMv6, if the memory address is not 64-bit aligned, the
  | data read from memory is UNPREDICTABLE. Alignment checking (taking
  | a data abort), and support for a big-endian (BE-32) data format are
  | implementation options.

We already have at least 8-byte alignment for dlmalloc, so have TLSF
follow suit by aligning the accounting structures appropriately.

Instead of adding manual padding, we could also enlarge
block_header_t::size to a uint64_t unconditionally and mark
block_header_t __packed. That comes with either a runtime cost or ugly
__builtin_assume_aligned annotations though, so we stick to the
simpler version.
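
The padding added below boils down to the following sketch
(illustrative; plain unsigned int stands in for u32 and the 8 * n
expression for the BYTES_TO_BITS() helper introduced in this patch):

  #include <stddef.h>

  struct sketch_padded_header {
          size_t size;
          unsigned int : 8 * (8 - sizeof(size_t)); /* 32 bits on 32-bit, 0 on 64-bit */
          /* next_free/prev_free pointers follow in the real struct */
  };

On 32-bit this widens the slot occupied by the size field to 8 bytes
without changing its type, so user data behind the header stays 8-byte
aligned and 64-bit accesses into it via ldrd/strd are safe.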

Reported-by: Enrico Scholz <enrico.scholz@sigma-chemnitz.de>
Link: https://lore.barebox.org/barebox/ly7d1z1qvs.fsf@ensc-pc.intern.sigma-chemnitz.de/
Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
---
 common/tlsf.c          | 12 ++++++------
 include/linux/bitops.h |  1 +
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/common/tlsf.c b/common/tlsf.c
index 0986c7c457e3..692dabbdedd9 100644
--- a/common/tlsf.c
+++ b/common/tlsf.c
@@ -30,13 +30,8 @@ enum tlsf_public
 /* Private constants: do not modify. */
 enum tlsf_private
 {
-#if defined (TLSF_64BIT)
 	/* All allocation sizes and addresses are aligned to 8 bytes. */
 	ALIGN_SIZE_LOG2 = 3,
-#else
-	/* All allocation sizes and addresses are aligned to 4 bytes. */
-	ALIGN_SIZE_LOG2 = 2,
-#endif
 	ALIGN_SIZE = (1 << ALIGN_SIZE_LOG2),
 
 	/*
@@ -122,6 +117,7 @@ typedef struct block_header_t
 
 	/* The size of this block, excluding the block header. */
 	size_t size;
+	u32 : BYTES_TO_BITS(ALIGN_SIZE - sizeof(size_t));
 
 	/* Next and previous free blocks. */
 	struct block_header_t* next_free;
@@ -142,7 +138,7 @@ typedef struct block_header_t
 ** The prev_phys_block field is stored *inside* the previous free block.
 */
 #define block_header_shift		offsetof(block_header_t, size)
-#define block_header_overhead		sizeof(size_t)
+#define block_header_overhead		ALIGN_SIZE
 
 /* User data starts directly after the size field in a used block. */
 #define block_start_offset		(block_header_shift + block_header_overhead)
@@ -155,6 +151,8 @@ typedef struct block_header_t
 #define block_size_min			(sizeof(block_header_t) - sizeof(block_header_t*))
 #define block_size_max			(tlsf_cast(size_t, 1) << FL_INDEX_MAX)
 
+tlsf_static_assert(block_size_min % ALIGN_SIZE == 0);
+tlsf_static_assert(block_size_max % ALIGN_SIZE == 0);
 
 /* The TLSF control structure. */
 typedef struct control_t
@@ -165,10 +163,12 @@ typedef struct control_t
 	/* Bitmaps for free lists. */
 	unsigned int fl_bitmap;
 	unsigned int sl_bitmap[FL_INDEX_COUNT];
+	u32 : BYTES_TO_BITS(ALIGN_SIZE - sizeof(size_t));
 
 	/* Head of free lists. */
 	block_header_t* blocks[FL_INDEX_COUNT][SL_INDEX_COUNT];
 } control_t;
+tlsf_static_assert(sizeof(control_t) % ALIGN_SIZE == 0);
 
 /* A type used for casting when doing pointer arithmetic. */
 typedef ptrdiff_t tlsfptr_t;
diff --git a/include/linux/bitops.h b/include/linux/bitops.h
index a5f6ac6545ee..b0d6ca6ac87f 100644
--- a/include/linux/bitops.h
+++ b/include/linux/bitops.h
@@ -19,6 +19,7 @@
 #define BITS_TO_U64(nr)		DIV_ROUND_UP(nr, BITS_PER_TYPE(u64))
 #define BITS_TO_U32(nr)		DIV_ROUND_UP(nr, BITS_PER_TYPE(u32))
 #define BITS_TO_BYTES(nr)	DIV_ROUND_UP(nr, BITS_PER_TYPE(char))
+#define BYTES_TO_BITS(nb)	(((BITS_PER_LONG * (nb)) / sizeof(long)))
 #endif
 
 /*
-- 
2.39.2





* [PATCH v2 5/7] common: malloc: ensure alignment is always at least 8 byte
  2023-09-11 15:24 [PATCH v2 0/7] tlsf: use 8-byte alignment for normal malloc allocations Ahmad Fatoum
                   ` (3 preceding siblings ...)
  2023-09-11 15:24 ` [PATCH v2 4/7] tlsf: give malloc 8-byte alignment on 32-bit as well Ahmad Fatoum
@ 2023-09-11 15:24 ` Ahmad Fatoum
  2023-09-11 15:24 ` [PATCH v2 6/7] test: self: refactor to allow alignment check Ahmad Fatoum
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Ahmad Fatoum @ 2023-09-11 15:24 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum

We used to have the following alignments:
		32-bit CPU	64-bit CPU
   dummy	8 bytes		 8 bytes
   dlmalloc	8 bytes		16 bytes
   tlsf		4 bytes		 8 bytes

With the recent change to TLSF, we now always have at least 8-byte
alignment. To make this explicit, define a new CONFIG_MALLOC_ALIGNMENT
and either use it as the alignment (as done for the dummy allocator) or
add static asserts to ensure we have at least this alignment.
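
Both patterns are visible in the hunks below; as a condensed sketch
(NATIVE_MALLOC_ALIGNMENT is a hypothetical stand-in for whatever the
respective allocator already guarantees, and memalign()/static_assert
are assumed to be available as in the files touched here):

  /* either use the symbol directly as the allocation alignment ... */
  void *simple_malloc(size_t size)
  {
          return memalign(CONFIG_MALLOC_ALIGNMENT, size);
  }

  /* ... or assert that the native alignment already covers it */
  static_assert(NATIVE_MALLOC_ALIGNMENT >= CONFIG_MALLOC_ALIGNMENT,
                "allocator must provide at least the configured minimum");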

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
---
 common/Kconfig        | 5 +++++
 common/dlmalloc.c     | 3 +++
 common/dummy_malloc.c | 2 +-
 common/tlsf.c         | 2 ++
 4 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/common/Kconfig b/common/Kconfig
index afa591cb76ff..15c648de5d79 100644
--- a/common/Kconfig
+++ b/common/Kconfig
@@ -277,6 +277,11 @@ config MALLOC_SIZE
 	hex
 	default 0x400000
 	prompt "malloc area size"
+
+config MALLOC_ALIGNMENT
+	hex
+	default 8
+
 endmenu
 
 config BROKEN
diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index ae10c9ae30dd..16ea3cafb624 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -3,6 +3,7 @@
 #include <malloc.h>
 #include <string.h>
 #include <memory.h>
+#include <linux/build_bug.h>
 
 #include <stdio.h>
 #include <module.h>
@@ -884,6 +885,8 @@ static mbinptr av_[NAV * 2 + 2] = {
 
 /*  Other static bookkeeping data */
 
+static_assert(MALLOC_ALIGNMENT >= CONFIG_MALLOC_ALIGNMENT);
+
 /* variables holding tunable values */
 #ifndef __BAREBOX__
 static unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD;
diff --git a/common/dummy_malloc.c b/common/dummy_malloc.c
index d99b5059cf91..7a96afec76e0 100644
--- a/common/dummy_malloc.c
+++ b/common/dummy_malloc.c
@@ -23,7 +23,7 @@ void *memalign(size_t alignment, size_t bytes)
 
 void *malloc(size_t size)
 {
-	return memalign(8, size);
+	return memalign(CONFIG_MALLOC_ALIGNMENT, size);
 }
 
 void free(void *ptr)
diff --git a/common/tlsf.c b/common/tlsf.c
index 692dabbdedd9..ba2ed367c0b9 100644
--- a/common/tlsf.c
+++ b/common/tlsf.c
@@ -96,6 +96,8 @@ tlsf_static_assert(sizeof(unsigned int) * CHAR_BIT >= SL_INDEX_COUNT);
 /* Ensure we've properly tuned our sizes. */
 tlsf_static_assert(ALIGN_SIZE == SMALL_BLOCK_SIZE / SL_INDEX_COUNT);
 
+tlsf_static_assert(ALIGN_SIZE >= CONFIG_MALLOC_ALIGNMENT);
+
 /*
 ** Data structures and associated constants.
 */
-- 
2.39.2





* [PATCH v2 6/7] test: self: refactor to allow alignment check
  2023-09-11 15:24 [PATCH v2 0/7] tlsf: use 8-byte alignment for normal malloc allocations Ahmad Fatoum
                   ` (4 preceding siblings ...)
  2023-09-11 15:24 ` [PATCH v2 5/7] common: malloc: ensure alignment is always at least 8 byte Ahmad Fatoum
@ 2023-09-11 15:24 ` Ahmad Fatoum
  2023-09-11 15:24 ` [PATCH v2 7/7] test: self: malloc: fix memory leaks Ahmad Fatoum
  2023-09-26 10:57 ` [PATCH v2 0/7] tlsf: use 8-byte alignment for normal malloc allocations Sascha Hauer
  7 siblings, 0 replies; 9+ messages in thread
From: Ahmad Fatoum @ 2023-09-11 15:24 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum

The expect_alloc_* functions currently only know whether the pointer
is NULL or not. Have them take the full pointer value and return it
instead.

No functional change.

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
---
 test/self/malloc.c | 82 +++++++++++++++++++++++++++++++++-------------
 1 file changed, 59 insertions(+), 23 deletions(-)

diff --git a/test/self/malloc.c b/test/self/malloc.c
index c25b416b9751..6b3002ff0996 100644
--- a/test/self/malloc.c
+++ b/test/self/malloc.c
@@ -7,26 +7,62 @@
 #include <malloc.h>
 #include <memory.h>
 #include <linux/sizes.h>
+#include <linux/bitops.h>
 
 BSELFTEST_GLOBALS();
 
-static void __expect(bool cond, bool expect,
-		     const char *condstr, const char *func, int line)
+#define get_alignment(val) \
+	BIT(__builtin_constant_p(val) ?  __builtin_ffsll(val) : __ffs64(val))
+
+static_assert(get_alignment(0x1)	!= 1);
+static_assert(get_alignment(0x2)	!= 2);
+static_assert(get_alignment(0x3)	!= 1);
+static_assert(get_alignment(0x4)	!= 4);
+static_assert(get_alignment(0x5)	!= 1);
+static_assert(get_alignment(0x6)	!= 2);
+static_assert(get_alignment(0x8)	!= 8);
+static_assert(get_alignment(0x99)	!= 0x1);
+static_assert(get_alignment(0xDEADBEE0)	!= 0x10);
+
+static bool __expect_cond(bool cond, bool expect,
+			  const char *condstr, const char *func, int line)
 {
 	total_tests++;
-	if (cond != expect) {
-		failed_tests++;
-		printf("%s:%d: %s to %s\n", func, line,
-		       expect ? "failed" : "unexpectedly succeeded",
-		       condstr);
-	}
+	if (cond == expect)
+		return true;
+
+	failed_tests++;
+	printf("%s:%d: %s to %s\n", func, line,
+	       expect ? "failed" : "unexpectedly succeeded",
+	       condstr);
+	return false;
+
 }
 
-#define expect_alloc_ok(cond) \
-	__expect((cond), true, #cond, __func__, __LINE__)
+static void *__expect(void *ptr, bool expect,
+		     const char *condstr, const char *func, int line)
+{
+	bool ok;
+	total_tests++;
 
-#define expect_alloc_fail(cond) \
-	__expect((cond), false, #cond, __func__, __LINE__)
+	ok = __expect_cond(ptr != NULL, expect, condstr, func, line);
+	if (ok && ptr) {
+		unsigned alignment = get_alignment((uintptr_t)ptr);
+		if (alignment < CONFIG_MALLOC_ALIGNMENT) {
+			failed_tests++;
+			printf("%s:%d: invalid alignment of %u in %s = %p\n", func, line,
+			       alignment, condstr, ptr);
+		}
+	}
+
+	return ptr;
+}
+
+#define expect_alloc_ok(ptr) \
+	__expect((ptr), true, #ptr, __func__, __LINE__)
+
+#define expect_alloc_fail(ptr) \
+	__expect((ptr), false, #ptr, __func__, __LINE__)
 
 static void test_malloc(void)
 {
@@ -45,14 +81,14 @@ static void test_malloc(void)
 		mem_malloc_size = 0;
 	}
 
-	expect_alloc_ok(p = malloc(1));
+	p = expect_alloc_ok(malloc(1));
 	free(p);
 
 	if (mem_malloc_size) {
 		expect_alloc_fail(malloc(SIZE_MAX));
 
 		if (0xf0000000 > mem_malloc_size) {
-			expect_alloc_fail((tmp = malloc(0xf0000000)));
+			tmp = expect_alloc_fail(malloc(0xf0000000));
 			free(tmp);
 		}
 	} else {
@@ -60,22 +96,22 @@ static void test_malloc(void)
 	}
 
 	p = realloc(NULL, 1);
-	expect_alloc_ok(p = realloc(NULL, 1));
+	p = expect_alloc_ok(realloc(NULL, 1));
 
 	*p = 0x42;
 
-	expect_alloc_ok(tmp = realloc(p, 2));
+	tmp = expect_alloc_ok(realloc(p, 2));
 
 	p = tmp;
-	__expect(*p == 0x42, true, "reread after realloc", __func__, __LINE__);
+	__expect_cond(*p == 0x42, true, "reread after realloc", __func__, __LINE__);
 
 	if (mem_malloc_size) {
-		expect_alloc_fail(tmp = realloc(p, mem_malloc_size));
+		tmp = expect_alloc_fail(realloc(p, mem_malloc_size));
 
 		if (0xf0000000 > mem_malloc_size)
-			expect_alloc_fail((tmp = realloc(p, 0xf0000000)));
+			tmp = expect_alloc_fail(realloc(p, 0xf0000000));
 
-		expect_alloc_fail(tmp = realloc(p, SIZE_MAX));
+		tmp = expect_alloc_fail(realloc(p, SIZE_MAX));
 
 	} else {
 		skipped_tests += 3;
@@ -83,9 +119,9 @@ static void test_malloc(void)
 
 	free(p);
 
-	expect_alloc_ok(p = malloc(0));
-	expect_alloc_ok(tmp = malloc(0));
+	p = expect_alloc_ok(malloc(0));
+	tmp = expect_alloc_ok(malloc(0));
 
-	__expect(p != tmp, true, "allocate distinct 0-size buffers", __func__, __LINE__);
+	__expect_cond(p != tmp, true, "allocate distinct 0-size buffers", __func__, __LINE__);
 }
 bselftest(core, test_malloc);
-- 
2.39.2





* [PATCH v2 7/7] test: self: malloc: fix memory leaks
  2023-09-11 15:24 [PATCH v2 0/7] tlsf: use 8-byte alignment for normal malloc allocations Ahmad Fatoum
                   ` (5 preceding siblings ...)
  2023-09-11 15:24 ` [PATCH v2 6/7] test: self: refactor to allow alignment check Ahmad Fatoum
@ 2023-09-11 15:24 ` Ahmad Fatoum
  2023-09-26 10:57 ` [PATCH v2 0/7] tlsf: use 8-byte alignment for normal malloc allocations Sascha Hauer
  7 siblings, 0 replies; 9+ messages in thread
From: Ahmad Fatoum @ 2023-09-11 15:24 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum

The test shouldn't leak memory, even if it fails. Fix the leaks.

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
---
 test/self/malloc.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/test/self/malloc.c b/test/self/malloc.c
index 6b3002ff0996..47c225ac6a10 100644
--- a/test/self/malloc.c
+++ b/test/self/malloc.c
@@ -85,7 +85,8 @@ static void test_malloc(void)
 	free(p);
 
 	if (mem_malloc_size) {
-		expect_alloc_fail(malloc(SIZE_MAX));
+		tmp = expect_alloc_fail(malloc(SIZE_MAX));
+		free(tmp);
 
 		if (0xf0000000 > mem_malloc_size) {
 			tmp = expect_alloc_fail(malloc(0xf0000000));
@@ -95,7 +96,7 @@ static void test_malloc(void)
 		skipped_tests += 2;
 	}
 
-	p = realloc(NULL, 1);
+	free(realloc(NULL, 1));
 	p = expect_alloc_ok(realloc(NULL, 1));
 
 	*p = 0x42;
@@ -107,11 +108,15 @@ static void test_malloc(void)
 
 	if (mem_malloc_size) {
 		tmp = expect_alloc_fail(realloc(p, mem_malloc_size));
+		free(tmp);
 
-		if (0xf0000000 > mem_malloc_size)
+		if (0xf0000000 > mem_malloc_size) {
 			tmp = expect_alloc_fail(realloc(p, 0xf0000000));
+			free(tmp);
+		}
 
 		tmp = expect_alloc_fail(realloc(p, SIZE_MAX));
+		free(tmp);
 
 	} else {
 		skipped_tests += 3;
@@ -123,5 +128,8 @@ static void test_malloc(void)
 	tmp = expect_alloc_ok(malloc(0));
 
 	__expect_cond(p != tmp, true, "allocate distinct 0-size buffers", __func__, __LINE__);
+
+	free(p);
+	free(tmp);
 }
 bselftest(core, test_malloc);
-- 
2.39.2





* Re: [PATCH v2 0/7] tlsf: use 8-byte alignment for normal malloc allocations
  2023-09-11 15:24 [PATCH v2 0/7] tlsf: use 8-byte alignment for normal malloc allocations Ahmad Fatoum
                   ` (6 preceding siblings ...)
  2023-09-11 15:24 ` [PATCH v2 7/7] test: self: malloc: fix memory leaks Ahmad Fatoum
@ 2023-09-26 10:57 ` Sascha Hauer
  7 siblings, 0 replies; 9+ messages in thread
From: Sascha Hauer @ 2023-09-26 10:57 UTC (permalink / raw)
  To: Ahmad Fatoum; +Cc: barebox

On Mon, Sep 11, 2023 at 05:24:26PM +0200, Ahmad Fatoum wrote:
> TLSF currently uses only 4-byte alignment on 32-bit platforms, which isn't
> enough for ldrd/strd on ARMv7. This series reworks TLSF a bit, so we always
> have at least 8 byte alignment.  dlmalloc already has 8 byte alignment
> minimum, so nothing to do there.
> 
> This has the added benefit of giving TLSF the same alignment as KASAN,
> which can make debugging easier.
> 
> v1 didn't actually manage to boot on an i.MX6, which v2 fixes. This
> also boots normally on i.MX8M. I suggest this going into next after
> v2023.09.0.
> 
> v1 -> v2:
>   - drop switch of test_ffs_fls to bselftest. This function should just
>     be replaced with barebox' own implementation in future
>   - keep block size a size_t and just ensure the block metadata is
>     correctly aligned.

Applied, thanks

Sascha

> 
> Ahmad Fatoum (7):
>   tlsf: turn static const variables into compiletime constant
>     expressions
>   tlsf: ensure malloc pool is aligned
>   tlsf: fix sizeof(size_t) == sizeof(void *) assumption
>   tlsf: give malloc 8-byte alignment on 32-bit as well
>   common: malloc: ensure alignment is always at least 8 byte
>   test: self: refactor to allow alignment check
>   test: self: malloc: fix memory leaks
> 
>  common/Kconfig         |  5 +++
>  common/dlmalloc.c      |  3 ++
>  common/dummy_malloc.c  |  2 +-
>  common/tlsf.c          | 43 ++++++++++---------
>  include/linux/bitops.h |  1 +
>  test/self/malloc.c     | 96 ++++++++++++++++++++++++++++++------------
>  6 files changed, 102 insertions(+), 48 deletions(-)
> 
> -- 
> 2.39.2
> 
> 
> 

-- 
Pengutronix e.K.                           |                             |
Steuerwalder Str. 21                       | http://www.pengutronix.de/  |
31137 Hildesheim, Germany                  | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |



