mail archive of the barebox mailing list
* [PATCH v2 0/7] ARM64: crypto: add Crypto Extensions accelerated SHA implementation
@ 2023-05-26  6:37 Ahmad Fatoum
  2023-05-26  6:37 ` [PATCH v2 1/7] crypto: digest: match driver name if no algo name matches Ahmad Fatoum
                   ` (6 more replies)
  0 siblings, 7 replies; 10+ messages in thread
From: Ahmad Fatoum @ 2023-05-26  6:37 UTC (permalink / raw)
  To: barebox

This shaves 400 ms off a FIT image boot that uses sha256 as the digest
for the images referenced by the selected configuration:

  barebox@imx8mn-old:/ time bootm -d kernel-a
  Dryrun. Aborted.
  time: 998ms

  barebox@imx8mn-new:/ time bootm -d kernel-a
  Dryrun. Aborted.
  time: 601ms

We also use SHA256 to verify barebox proper when in high assurance boot,
but that still uses the software implementation.

v1 -> v2:
  - use dedicated CONFIG_ symbols for SHA224/SHA256 selftest (Sascha)
  - check against CONFIG_ symbols for generic algorithm
  - remove #define DEBUG in selftest
  - fix oversight in new <linux/linkage.h> __ALIGN definition,
    which broke build on x86 and PowerPC

Ahmad Fatoum (7):
  crypto: digest: match driver name if no algo name matches
  test: self: add digest test
  include: sync <linux/linkage.h> with Linux
  ARM: asm: implement CPU_BE/CPU_LE
  ARM: asm: import Linux adr_l/ldr_l assembler.h definitions
  crypto: sha: reorder struct sha*_state into Linux order
  ARM64: crypto: add Crypto Extensions accelerated SHA implementation

 arch/arm/Makefile                |   3 +-
 arch/arm/crypto/Makefile         |   6 +
 arch/arm/crypto/sha1-ce-core.S   | 149 ++++++++++++++
 arch/arm/crypto/sha1-ce-glue.c   |  93 +++++++++
 arch/arm/crypto/sha2-ce-core.S   | 156 +++++++++++++++
 arch/arm/crypto/sha2-ce-glue.c   | 121 ++++++++++++
 arch/arm/include/asm/assembler.h | 230 ++++++++++++++++++++++
 arch/arm/include/asm/neon.h      |   8 +
 commands/digest.c                |   2 +-
 common/Kconfig                   |   6 +
 crypto/Kconfig                   |  21 ++
 crypto/digest.c                  |  11 +-
 include/crypto/sha.h             |  10 +-
 include/crypto/sha1_base.h       | 104 ++++++++++
 include/crypto/sha256_base.h     | 129 +++++++++++++
 include/linux/barebox-wrapper.h  |   1 +
 include/linux/linkage.h          | 321 ++++++++++++++++++++++++++++---
 include/linux/string.h           |  20 ++
 test/self/Kconfig                |   6 +
 test/self/Makefile               |   1 +
 test/self/digest.c               | 213 ++++++++++++++++++++
 21 files changed, 1572 insertions(+), 39 deletions(-)
 create mode 100644 arch/arm/crypto/sha1-ce-core.S
 create mode 100644 arch/arm/crypto/sha1-ce-glue.c
 create mode 100644 arch/arm/crypto/sha2-ce-core.S
 create mode 100644 arch/arm/crypto/sha2-ce-glue.c
 create mode 100644 arch/arm/include/asm/neon.h
 create mode 100644 include/crypto/sha1_base.h
 create mode 100644 include/crypto/sha256_base.h
 create mode 100644 test/self/digest.c

-- 
2.39.2





* [PATCH v2 1/7] crypto: digest: match driver name if no algo name matches
  2023-05-26  6:37 [PATCH v2 0/7] ARM64: crypto: add Crypto Extensions accelerated SHA implementation Ahmad Fatoum
@ 2023-05-26  6:37 ` Ahmad Fatoum
  2023-05-26  6:37 ` [PATCH v2 2/7] test: self: add digest test Ahmad Fatoum
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Ahmad Fatoum @ 2023-05-26  6:37 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum

The digest command lists all registered digest implementations, but
there's no way to select a specific implementation when another higher
priority one exists for the same algorithm. Let's support this by
having digest_algo_get_by_name fall back to a lookup by driver name
when no exact match is found by algo name.

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
---
 commands/digest.c |  2 +-
 crypto/digest.c   | 11 ++++++-----
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/commands/digest.c b/commands/digest.c
index b7ed4d50af1f..e57920e58268 100644
--- a/commands/digest.c
+++ b/commands/digest.c
@@ -190,7 +190,7 @@ static int do_digest(int argc, char *argv[])
 BAREBOX_CMD_HELP_START(digest)
 BAREBOX_CMD_HELP_TEXT("Calculate a digest over a FILE or a memory area.")
 BAREBOX_CMD_HELP_TEXT("Options:")
-BAREBOX_CMD_HELP_OPT ("-a <algo>\t",  "hash or signature algorithm to use")
+BAREBOX_CMD_HELP_OPT ("-a <algo>\t",  "hash or signature algorithm name/driver to use")
 BAREBOX_CMD_HELP_OPT ("-k <key>\t",   "use supplied <key> (ASCII or hex) for MAC")
 BAREBOX_CMD_HELP_OPT ("-K <file>\t",  "use key from <file> (binary) for MAC")
 BAREBOX_CMD_HELP_OPT ("-s <hex>\t",   "verify data against supplied <hex> (hash, MAC or signature)")
diff --git a/crypto/digest.c b/crypto/digest.c
index 621d3841686e..dd2c2ee317ed 100644
--- a/crypto/digest.c
+++ b/crypto/digest.c
@@ -27,8 +27,6 @@
 
 static LIST_HEAD(digests);
 
-static struct digest_algo *digest_algo_get_by_name(const char *name);
-
 static int dummy_init(struct digest *d)
 {
 	return 0;
@@ -106,7 +104,7 @@ EXPORT_SYMBOL(digest_algo_unregister);
 
 static struct digest_algo *digest_algo_get_by_name(const char *name)
 {
-	struct digest_algo *d = NULL;
+	struct digest_algo *d_by_name = NULL, *d_by_driver = NULL;
 	struct digest_algo *tmp;
 	int priority = -1;
 
@@ -114,17 +112,20 @@ static struct digest_algo *digest_algo_get_by_name(const char *name)
 		return NULL;
 
 	list_for_each_entry(tmp, &digests, list) {
+		if (strcmp(tmp->base.driver_name, name) == 0)
+			d_by_driver = tmp;
+
 		if (strcmp(tmp->base.name, name) != 0)
 			continue;
 
 		if (tmp->base.priority <= priority)
 			continue;
 
-		d = tmp;
+		d_by_name = tmp;
 		priority = tmp->base.priority;
 	}
 
-	return d;
+	return d_by_name ?: d_by_driver;
 }
 
 static struct digest_algo *digest_algo_get_by_algo(enum hash_algo algo)
-- 
2.39.2





* [PATCH v2 2/7] test: self: add digest test
  2023-05-26  6:37 [PATCH v2 0/7] ARM64: crypto: add Crypto Extensions accelerated SHA implementation Ahmad Fatoum
  2023-05-26  6:37 ` [PATCH v2 1/7] crypto: digest: match driver name if no algo name matches Ahmad Fatoum
@ 2023-05-26  6:37 ` Ahmad Fatoum
  2023-05-26  6:37 ` [PATCH v2 3/7] include: sync <linux/linkage.h> with Linux Ahmad Fatoum
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Ahmad Fatoum @ 2023-05-26  6:37 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum

Later commits will touch existing digest code, so this is a good
opportunity to add a selftest to ensure MD5/SHA continue to work.

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
---
 test/self/Kconfig  |   6 ++
 test/self/Makefile |   1 +
 test/self/digest.c | 211 +++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 218 insertions(+)
 create mode 100644 test/self/digest.c

diff --git a/test/self/Kconfig b/test/self/Kconfig
index c130209748aa..1d6d8ab53a8d 100644
--- a/test/self/Kconfig
+++ b/test/self/Kconfig
@@ -36,6 +36,7 @@ config SELFTEST_ENABLE_ALL
 	imply SELFTEST_FS_RAMFS
 	imply SELFTEST_TFTP
 	imply SELFTEST_JSON
+	imply SELFTEST_DIGEST
 	imply SELFTEST_MMU
 	help
 	  Selects all self-tests compatible with current configuration
@@ -75,4 +76,9 @@ config SELFTEST_MMU
 	select MEMTEST
 	depends on MMU
 
+config SELFTEST_DIGEST
+	bool "Digest selftest"
+	depends on DIGEST
+	select PRINTF_HEXSTR
+
 endif
diff --git a/test/self/Makefile b/test/self/Makefile
index 8c816c4299f6..269de2e10e88 100644
--- a/test/self/Makefile
+++ b/test/self/Makefile
@@ -9,6 +9,7 @@ obj-$(CONFIG_SELFTEST_OF_MANIPULATION) += of_manipulation.o of_manipulation.dtb.
 obj-$(CONFIG_SELFTEST_ENVIRONMENT_VARIABLES) += envvar.o
 obj-$(CONFIG_SELFTEST_FS_RAMFS) += ramfs.o
 obj-$(CONFIG_SELFTEST_JSON) += json.o
+obj-$(CONFIG_SELFTEST_DIGEST) += digest.o
 obj-$(CONFIG_SELFTEST_MMU) += mmu.o
 
 clean-files := *.dtb *.dtb.S .*.dtc .*.pre .*.dts *.dtb.z
diff --git a/test/self/digest.c b/test/self/digest.c
new file mode 100644
index 000000000000..769444ad15ce
--- /dev/null
+++ b/test/self/digest.c
@@ -0,0 +1,211 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <common.h>
+#include <bselftest.h>
+#include <clock.h>
+#include <digest.h>
+
+BSELFTEST_GLOBALS();
+
+struct digest_test_case {
+	const char *name;
+	const void *buf;
+	size_t buf_size;
+	const void *digest_str;
+	u64 time_ns;
+};
+
+#define TEST_CASE(buf, digest_str) \
+	{ #buf, buf, sizeof(buf), digest_str }
+
+#define test_digest(option, algo, ...) do { \
+	struct digest_test_case *t, cases[] = { __VA_ARGS__, { /* sentinel */ } }; \
+	for (t = cases; t->buf; t++) \
+		__test_digest((option), (algo), t, __func__, __LINE__); \
+	if (!__is_defined(DEBUG)) \
+		break; \
+	printf("%s:\t", algo); \
+	for (t = cases; t->buf; t++) \
+		printf(" digest(%zu bytes) = %10lluns", t->buf_size, t->time_ns); \
+	printf("\n"); \
+} while (0)
+
+static inline const char *digest_suffix(const char *str, const char *suffix)
+{
+	static char buf[32];
+
+	if (!*suffix)
+		return str;
+
+	WARN_ON(snprintf(buf, sizeof(buf), "%s-%s", str, suffix) >= sizeof(buf));
+	return buf;
+}
+
+static void __test_digest(bool option,
+			  const char *algo, struct digest_test_case *t,
+			  const char *func, int line)
+{
+	unsigned char *output, *digest;
+	struct digest *d;
+	int hash_len, digest_len;
+	u64 start;
+	int ret;
+
+	total_tests++;
+
+	if (!option) {
+		skipped_tests++;
+		return;
+	}
+
+	d = digest_alloc(algo);
+	if (!d) {
+		printf("%s:%d: failed to allocate %s digest\n", func, line, algo);
+		goto fail;
+	}
+
+	hash_len = digest_length(d);
+	digest_len = strlen(t->digest_str) / 2;
+	if (hash_len != digest_len) {
+		printf("%s:%d: %s digests have length %u, but %u expected\n",
+		       func, line, algo, hash_len, digest_len);
+		goto fail;
+	}
+
+	output = calloc(hash_len, 1);
+	if (WARN_ON(!output))
+		goto fail;
+
+	digest = calloc(digest_len, 1);
+	if (WARN_ON(!digest))
+		goto fail;
+
+	ret = hex2bin(digest, t->digest_str, digest_len);
+	if (WARN_ON(ret))
+		goto fail;
+
+	start = get_time_ns();
+
+	ret = digest_digest(d, t->buf, t->buf_size, output);
+	if (ret) {
+		printf("%s:%d: error calculating %s(%s): %pe\n",
+		       func, line, algo, t->name, ERR_PTR(ret));
+		goto fail;
+	}
+
+	t->time_ns = get_time_ns() - start;
+
+	if (memcmp(output, digest, hash_len)) {
+		printf("%s:%d: mismatch calculating %s(%s):\n\tgot: %*phN\n\tbut: %*phN expected\n",
+		       func, line, algo, t->name, hash_len, output, hash_len, digest);
+		goto fail;
+	}
+
+	return;
+fail:
+	failed_tests++;
+}
+
+static const u8 zeroes7[7] = {};
+static const u8 one32[32] = { 1 };
+static u8 inc4097[4097];
+
+static void test_digest_md5(const char *suffix)
+{
+	bool cond;
+
+	cond = !strcmp(suffix, "generic") ? IS_ENABLED(CONFIG_DIGEST_MD5_GENERIC) :
+	       IS_ENABLED(CONFIG_HAVE_DIGEST_MD5);
+
+	test_digest(cond, digest_suffix("md5", suffix),
+		TEST_CASE(zeroes7, "d310a40483f9399dd7ed1712e0fdd702"),
+		TEST_CASE(one32,   "b39ac6e2aa7e375c38ba7ae921b5ba89"),
+		TEST_CASE(inc4097, "70410aad262cd11e63ae854804c8024b"));
+}
+
+static void test_digests_sha12(const char *suffix)
+{
+	bool cond;
+
+	cond = !strcmp(suffix, "generic") ? IS_ENABLED(CONFIG_DIGEST_SHA1_GENERIC) :
+	       !strcmp(suffix, "asm") ? IS_ENABLED(CONFIG_DIGEST_SHA1_ARM) :
+	       IS_ENABLED(CONFIG_HAVE_DIGEST_SHA1);
+
+	test_digest(cond, digest_suffix("sha1", suffix),
+		TEST_CASE(zeroes7, "77ce0377defbd11b77b1f4ad54ca40ea5ef28490"),
+		TEST_CASE(one32,   "cbd9cbfc20182e4b71e593e7ad598fc383cc6058"),
+		TEST_CASE(inc4097, "c627e736efd8bb0dff1778335c9c79cb1f27e396"));
+
+
+	cond = !strcmp(suffix, "generic") ? IS_ENABLED(CONFIG_DIGEST_SHA224_GENERIC) :
+	       !strcmp(suffix, "asm") ? IS_ENABLED(CONFIG_DIGEST_SHA256_ARM) :
+	       IS_ENABLED(CONFIG_HAVE_DIGEST_SHA224);
+
+	test_digest(cond, digest_suffix("sha224", suffix),
+		TEST_CASE(zeroes7, "fbf6df85218ac5632461a8a17c6f294e6f35264cbfc0a9774a4f665b"),
+		TEST_CASE(one32,   "343cb3950305e6e6331e294b0a4925739d09ecbd2b43a2fc87c09941"),
+		TEST_CASE(inc4097, "6596b5dcfbd857f4246d6b94508b8a1a5b715a4f644a0c1e7d54c4f7"));
+
+
+	cond = !strcmp(suffix, "generic") ? IS_ENABLED(CONFIG_DIGEST_SHA256_GENERIC) :
+	       !strcmp(suffix, "asm") ? IS_ENABLED(CONFIG_DIGEST_SHA256_ARM) :
+	       IS_ENABLED(CONFIG_HAVE_DIGEST_SHA256);
+
+	test_digest(cond, digest_suffix("sha256", suffix),
+		TEST_CASE(zeroes7, "837885c8f8091aeaeb9ec3c3f85a6ff470a415e610b8ba3e49f9b33c9cf9d619"),
+		TEST_CASE(one32,   "01d0fabd251fcbbe2b93b4b927b26ad2a1a99077152e45ded1e678afa45dbec5"),
+		TEST_CASE(inc4097, "1e973d029df2b2c66cb42a942c5edb45966f02abaff29fe99410e44d271d0efc"));
+}
+
+
+static void test_digests_sha35(const char *suffix)
+{
+	bool cond;
+
+	cond = !strcmp(suffix, "generic") ? IS_ENABLED(CONFIG_DIGEST_SHA384_GENERIC) :
+	       IS_ENABLED(CONFIG_HAVE_DIGEST_SHA384);
+
+	test_digest(cond, digest_suffix("sha384", suffix),
+		TEST_CASE(zeroes7, "b56705a73cf280f06d3a6b482c441a3d280c930d0c44b04f364dcdcedcfbc47c"
+				   "f3645a71da7b97f9e5d3a0924f6b9634"),
+		TEST_CASE(one32,   "dd606b49d7658a5eae905d593271c280819f92eb1a9a4986057aedc0a5f2eaea"
+				   "99052904718f6d83f16ad209d793f253"),
+		TEST_CASE(inc4097, "f76046b90890f20ae94066a3ad33010f5b3b2fd46977414636bbc634898b06fd"
+				   "4cb8f85e0926e8817e518300a930529e"));
+
+
+	cond = !strcmp(suffix, "generic") ? IS_ENABLED(CONFIG_DIGEST_SHA512_GENERIC) :
+	       IS_ENABLED(CONFIG_HAVE_DIGEST_SHA512);
+
+	test_digest(cond, digest_suffix("sha512", suffix),
+		TEST_CASE(zeroes7, "76afca18a9b81ffb967ffcf0460ed221c3605d3820057214d785fa88259bb5cb"
+				   "729576178e6edb0134f645d2e2e92cbabf1333462f3b9058692c950f51c64a92"),
+		TEST_CASE(one32,   "ce0c265ecc82dd8cee6e56ce44e45dafd7a0c5750df914b253a1fb7a8af66ddb"
+				   "99763607f0a85d0bd43669194a3a40577a528af395f4f17e06f1defcc6deb2a5"),
+		TEST_CASE(inc4097, "42eb09aca460d79b0c0aeac28187ed055a92e33602b69428461697680ff9f48f"
+				   "60a5a68aa0017e3446433349b42592b74713d7787628a58e400b7f588b9bd69b"));
+}
+
+static void test_digests(void)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(inc4097); i++)
+		inc4097[i] = i;
+
+	test_digest_md5("generic");
+
+	test_digests_sha12("generic");
+	if (IS_ENABLED(CONFIG_CPU_32))
+		test_digests_sha12("asm");
+
+	test_digests_sha35("generic");
+
+	test_digest_md5("");
+	test_digests_sha12("");
+	test_digests_sha35("");
+
+}
+bselftest(core, test_digests);
-- 
2.39.2





* [PATCH v2 3/7] include: sync <linux/linkage.h> with Linux
  2023-05-26  6:37 [PATCH v2 0/7] ARM64: crypto: add Crypto Extensions accelerated SHA implementation Ahmad Fatoum
  2023-05-26  6:37 ` [PATCH v2 1/7] crypto: digest: match driver name if no algo name matches Ahmad Fatoum
  2023-05-26  6:37 ` [PATCH v2 2/7] test: self: add digest test Ahmad Fatoum
@ 2023-05-26  6:37 ` Ahmad Fatoum
  2023-05-26  6:54   ` Sascha Hauer
  2023-05-26  6:37 ` [PATCH v2 4/7] ARM: asm: implement CPU_BE/CPU_LE Ahmad Fatoum
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 10+ messages in thread
From: Ahmad Fatoum @ 2023-05-26  6:37 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum

Linux has added new SYM_ macros in its assembly code and deprecated
ENTRY/ENDPROC. Import the necessary definitions to make porting kernel
code easier.
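As a sketch of what the sync enables, a routine previously written with the deprecated annotations can be expressed with the new SYM_ ones (the function name here is hypothetical):

```
/* before: deprecated annotations */
ENTRY(my_func)
	ret
ENDPROC(my_func)

/* after: new-style annotations from <linux/linkage.h> */
SYM_FUNC_START(my_func)
	ret
SYM_FUNC_END(my_func)
```

Both expand to the same .globl/.type/.size directives; the SYM_ variants additionally distinguish FUNC from CODE and local from global symbols.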

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
---
 common/Kconfig          |   6 +
 include/linux/linkage.h | 321 ++++++++++++++++++++++++++++++++++++----
 2 files changed, 298 insertions(+), 29 deletions(-)

diff --git a/common/Kconfig b/common/Kconfig
index ce94718c848a..5346ba5a623c 100644
--- a/common/Kconfig
+++ b/common/Kconfig
@@ -1710,3 +1710,9 @@ config DDR_SPD
 
 config HAVE_ARCH_ASAN
 	bool
+
+config ARCH_USE_SYM_ANNOTATIONS
+	bool
+	help
+	  This is selected by architectures that exclusively use the new SYM_
+	  macros in their assembly code and not the deprecated ENTRY/ENDPROC.
diff --git a/include/linux/linkage.h b/include/linux/linkage.h
index efb2d6fa407b..c262c7b36907 100644
--- a/include/linux/linkage.h
+++ b/include/linux/linkage.h
@@ -3,9 +3,16 @@
 #ifndef _LINUX_LINKAGE_H
 #define _LINUX_LINKAGE_H
 
-#include <linux/compiler.h>
+#include <linux/compiler_types.h>
+#include <linux/stringify.h>
+#include <linux/export.h>
 #include <asm/linkage.h>
 
+/* Some toolchains use other characters (e.g. '`') to mark new line in macro */
+#ifndef ASM_NL
+#define ASM_NL		 ;
+#endif
+
 #ifdef __cplusplus
 #define CPP_ASMLINKAGE extern "C"
 #else
@@ -16,12 +23,31 @@
 #define asmlinkage CPP_ASMLINKAGE
 #endif
 
-#ifndef asmregparm
-# define asmregparm
+#ifndef cond_syscall
+#define cond_syscall(x)	asm(				\
+	".weak " __stringify(x) "\n\t"			\
+	".set  " __stringify(x) ","			\
+		 __stringify(sys_ni_syscall))
 #endif
 
-#define __page_aligned_data	__section(.data.page_aligned) __aligned(PAGE_SIZE)
-#define __page_aligned_bss	__section(.bss.page_aligned) __aligned(PAGE_SIZE)
+#ifndef SYSCALL_ALIAS
+#define SYSCALL_ALIAS(alias, name) asm(			\
+	".globl " __stringify(alias) "\n\t"		\
+	".set   " __stringify(alias) ","		\
+		  __stringify(name))
+#endif
+
+#define __page_aligned_data	__section(".data..page_aligned") __aligned(PAGE_SIZE)
+#define __page_aligned_bss	__section(".bss..page_aligned") __aligned(PAGE_SIZE)
+
+/*
+ * For assembly routines.
+ *
+ * Note when using these that you must specify the appropriate
+ * alignment directives yourself
+ */
+#define __PAGE_ALIGNED_DATA	.section ".data..page_aligned", "aw"
+#define __PAGE_ALIGNED_BSS	.section ".bss..page_aligned", "aw"
 
 /*
  * This is used by architectures to keep arguments on the stack
@@ -44,39 +70,69 @@
 #endif
 
 #ifndef __ALIGN
-#define __ALIGN		.align 4,0x90
-#define __ALIGN_STR	".align 4,0x90"
+#define __ALIGN			.balign 4
+#define __ALIGN_STR		__stringify(__ALIGN)
 #endif
 
 #ifdef __ASSEMBLY__
 
+/* SYM_T_FUNC -- type used by assembler to mark functions */
+#ifndef SYM_T_FUNC
+#define SYM_T_FUNC				STT_FUNC
+#endif
+
+/* SYM_T_OBJECT -- type used by assembler to mark data */
+#ifndef SYM_T_OBJECT
+#define SYM_T_OBJECT				STT_OBJECT
+#endif
+
+/* SYM_T_NONE -- type used by assembler to mark entries of unknown type */
+#ifndef SYM_T_NONE
+#define SYM_T_NONE				STT_NOTYPE
+#endif
+
+/* SYM_A_* -- align the symbol? */
+#define SYM_A_ALIGN				ALIGN
+#define SYM_A_NONE				/* nothing */
+
+/* SYM_L_* -- linkage of symbols */
+#define SYM_L_GLOBAL(name)			.globl name
+#define SYM_L_WEAK(name)			.weak name
+#define SYM_L_LOCAL(name)			/* nothing */
+
+#ifndef LINKER_SCRIPT
 #define ALIGN __ALIGN
 #define ALIGN_STR __ALIGN_STR
 
-#ifndef ENTRY
-#define ENTRY(name) \
-  .globl name; \
-  ALIGN; \
-  name:
-#endif
+/* === DEPRECATED annotations === */
 
-#ifndef WEAK
-#define WEAK(name)	   \
-	.weak name;	   \
+#ifndef CONFIG_ARCH_USE_SYM_ANNOTATIONS
+#ifndef GLOBAL
+/* deprecated, use SYM_DATA*, SYM_ENTRY, or similar */
+#define GLOBAL(name) \
+	.globl name ASM_NL \
 	name:
 #endif
 
-#define KPROBE_ENTRY(name) \
-  .pushsection .kprobes.text, "ax"; \
-  ENTRY(name)
+#ifndef ENTRY
+/* deprecated, use SYM_FUNC_START */
+#define ENTRY(name) \
+	SYM_FUNC_START(name)
+#endif
+#endif /* CONFIG_ARCH_USE_SYM_ANNOTATIONS */
+#endif /* LINKER_SCRIPT */
 
-#define KPROBE_END(name) \
-  END(name);		 \
-  .popsection
+#ifndef CONFIG_ARCH_USE_SYM_ANNOTATIONS
+#ifndef WEAK
+/* deprecated, use SYM_FUNC_START_WEAK* */
+#define WEAK(name)	   \
+	SYM_FUNC_START_WEAK(name)
+#endif
 
 #ifndef END
+/* deprecated, use SYM_FUNC_END, SYM_DATA_END, or SYM_END */
 #define END(name) \
-  .size name, .-name
+	.size name, .-name
 #endif
 
 /* If symbol 'name' is treated as a subroutine (gets called, and returns)
@@ -84,15 +140,222 @@
  * static analysis tools such as stack depth analyzer.
  */
 #ifndef ENDPROC
+/* deprecated, use SYM_FUNC_END */
 #define ENDPROC(name) \
-  .type name, @function; \
-  END(name)
+	SYM_FUNC_END(name)
+#endif
+#endif /* CONFIG_ARCH_USE_SYM_ANNOTATIONS */
+
+/* === generic annotations === */
+
+/* SYM_ENTRY -- use only if you have to for non-paired symbols */
+#ifndef SYM_ENTRY
+#define SYM_ENTRY(name, linkage, align...)		\
+	linkage(name) ASM_NL				\
+	align ASM_NL					\
+	name:
 #endif
 
+/* SYM_START -- use only if you have to */
+#ifndef SYM_START
+#define SYM_START(name, linkage, align...)		\
+	SYM_ENTRY(name, linkage, align)
 #endif
 
-#define NORET_TYPE    /**/
-#define ATTRIB_NORET  __attribute__((noreturn))
-#define NORET_AND     noreturn,
-
+/* SYM_END -- use only if you have to */
+#ifndef SYM_END
+#define SYM_END(name, sym_type)				\
+	.type name sym_type ASM_NL			\
+	.set .L__sym_size_##name, .-name ASM_NL		\
+	.size name, .L__sym_size_##name
 #endif
+
+/* SYM_ALIAS -- use only if you have to */
+#ifndef SYM_ALIAS
+#define SYM_ALIAS(alias, name, linkage)			\
+	linkage(alias) ASM_NL				\
+	.set alias, name ASM_NL
+#endif
+
+/* === code annotations === */
+
+/*
+ * FUNC -- C-like functions (proper stack frame etc.)
+ * CODE -- non-C code (e.g. irq handlers with different, special stack etc.)
+ *
+ * Objtool validates stack for FUNC, but not for CODE.
+ * Objtool generates debug info for both FUNC & CODE, but needs special
+ * annotations for each CODE's start (to describe the actual stack frame).
+ *
+ * Objtool requires that all code must be contained in an ELF symbol. Symbol
+ * names that have a  .L prefix do not emit symbol table entries. .L
+ * prefixed symbols can be used within a code region, but should be avoided for
+ * denoting a range of code via ``SYM_*_START/END`` annotations.
+ *
+ * ALIAS -- does not generate debug info -- the aliased function will
+ */
+
+/* SYM_INNER_LABEL_ALIGN -- only for labels in the middle of code */
+#ifndef SYM_INNER_LABEL_ALIGN
+#define SYM_INNER_LABEL_ALIGN(name, linkage)	\
+	.type name SYM_T_NONE ASM_NL			\
+	SYM_ENTRY(name, linkage, SYM_A_ALIGN)
+#endif
+
+/* SYM_INNER_LABEL -- only for labels in the middle of code */
+#ifndef SYM_INNER_LABEL
+#define SYM_INNER_LABEL(name, linkage)		\
+	.type name SYM_T_NONE ASM_NL			\
+	SYM_ENTRY(name, linkage, SYM_A_NONE)
+#endif
+
+/* SYM_FUNC_START -- use for global functions */
+#ifndef SYM_FUNC_START
+#define SYM_FUNC_START(name)				\
+	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)
+#endif
+
+/* SYM_FUNC_START_NOALIGN -- use for global functions, w/o alignment */
+#ifndef SYM_FUNC_START_NOALIGN
+#define SYM_FUNC_START_NOALIGN(name)			\
+	SYM_START(name, SYM_L_GLOBAL, SYM_A_NONE)
+#endif
+
+/* SYM_FUNC_START_LOCAL -- use for local functions */
+#ifndef SYM_FUNC_START_LOCAL
+#define SYM_FUNC_START_LOCAL(name)			\
+	SYM_START(name, SYM_L_LOCAL, SYM_A_ALIGN)
+#endif
+
+/* SYM_FUNC_START_LOCAL_NOALIGN -- use for local functions, w/o alignment */
+#ifndef SYM_FUNC_START_LOCAL_NOALIGN
+#define SYM_FUNC_START_LOCAL_NOALIGN(name)		\
+	SYM_START(name, SYM_L_LOCAL, SYM_A_NONE)
+#endif
+
+/* SYM_FUNC_START_WEAK -- use for weak functions */
+#ifndef SYM_FUNC_START_WEAK
+#define SYM_FUNC_START_WEAK(name)			\
+	SYM_START(name, SYM_L_WEAK, SYM_A_ALIGN)
+#endif
+
+/* SYM_FUNC_START_WEAK_NOALIGN -- use for weak functions, w/o alignment */
+#ifndef SYM_FUNC_START_WEAK_NOALIGN
+#define SYM_FUNC_START_WEAK_NOALIGN(name)		\
+	SYM_START(name, SYM_L_WEAK, SYM_A_NONE)
+#endif
+
+/*
+ * SYM_FUNC_END -- the end of SYM_FUNC_START_LOCAL, SYM_FUNC_START,
+ * SYM_FUNC_START_WEAK, ...
+ */
+#ifndef SYM_FUNC_END
+#define SYM_FUNC_END(name)				\
+	SYM_END(name, SYM_T_FUNC)
+#endif
+
+/*
+ * SYM_FUNC_ALIAS -- define a global alias for an existing function
+ */
+#ifndef SYM_FUNC_ALIAS
+#define SYM_FUNC_ALIAS(alias, name)					\
+	SYM_ALIAS(alias, name, SYM_L_GLOBAL)
+#endif
+
+/*
+ * SYM_FUNC_ALIAS_LOCAL -- define a local alias for an existing function
+ */
+#ifndef SYM_FUNC_ALIAS_LOCAL
+#define SYM_FUNC_ALIAS_LOCAL(alias, name)				\
+	SYM_ALIAS(alias, name, SYM_L_LOCAL)
+#endif
+
+/*
+ * SYM_FUNC_ALIAS_WEAK -- define a weak global alias for an existing function
+ */
+#ifndef SYM_FUNC_ALIAS_WEAK
+#define SYM_FUNC_ALIAS_WEAK(alias, name)				\
+	SYM_ALIAS(alias, name, SYM_L_WEAK)
+#endif
+
+/* SYM_CODE_START -- use for non-C (special) functions */
+#ifndef SYM_CODE_START
+#define SYM_CODE_START(name)				\
+	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)
+#endif
+
+/* SYM_CODE_START_NOALIGN -- use for non-C (special) functions, w/o alignment */
+#ifndef SYM_CODE_START_NOALIGN
+#define SYM_CODE_START_NOALIGN(name)			\
+	SYM_START(name, SYM_L_GLOBAL, SYM_A_NONE)
+#endif
+
+/* SYM_CODE_START_LOCAL -- use for local non-C (special) functions */
+#ifndef SYM_CODE_START_LOCAL
+#define SYM_CODE_START_LOCAL(name)			\
+	SYM_START(name, SYM_L_LOCAL, SYM_A_ALIGN)
+#endif
+
+/*
+ * SYM_CODE_START_LOCAL_NOALIGN -- use for local non-C (special) functions,
+ * w/o alignment
+ */
+#ifndef SYM_CODE_START_LOCAL_NOALIGN
+#define SYM_CODE_START_LOCAL_NOALIGN(name)		\
+	SYM_START(name, SYM_L_LOCAL, SYM_A_NONE)
+#endif
+
+/* SYM_CODE_END -- the end of SYM_CODE_START_LOCAL, SYM_CODE_START, ... */
+#ifndef SYM_CODE_END
+#define SYM_CODE_END(name)				\
+	SYM_END(name, SYM_T_NONE)
+#endif
+
+/* === data annotations === */
+
+/* SYM_DATA_START -- global data symbol */
+#ifndef SYM_DATA_START
+#define SYM_DATA_START(name)				\
+	SYM_START(name, SYM_L_GLOBAL, SYM_A_NONE)
+#endif
+
+/* SYM_DATA_START -- local data symbol */
+#ifndef SYM_DATA_START_LOCAL
+#define SYM_DATA_START_LOCAL(name)			\
+	SYM_START(name, SYM_L_LOCAL, SYM_A_NONE)
+#endif
+
+/* SYM_DATA_END -- the end of SYM_DATA_START symbol */
+#ifndef SYM_DATA_END
+#define SYM_DATA_END(name)				\
+	SYM_END(name, SYM_T_OBJECT)
+#endif
+
+/* SYM_DATA_END_LABEL -- the labeled end of SYM_DATA_START symbol */
+#ifndef SYM_DATA_END_LABEL
+#define SYM_DATA_END_LABEL(name, linkage, label)	\
+	linkage(label) ASM_NL				\
+	.type label SYM_T_OBJECT ASM_NL			\
+	label:						\
+	SYM_END(name, SYM_T_OBJECT)
+#endif
+
+/* SYM_DATA -- start+end wrapper around simple global data */
+#ifndef SYM_DATA
+#define SYM_DATA(name, data...)				\
+	SYM_DATA_START(name) ASM_NL				\
+	data ASM_NL						\
+	SYM_DATA_END(name)
+#endif
+
+/* SYM_DATA_LOCAL -- start+end wrapper around simple local data */
+#ifndef SYM_DATA_LOCAL
+#define SYM_DATA_LOCAL(name, data...)			\
+	SYM_DATA_START_LOCAL(name) ASM_NL			\
+	data ASM_NL						\
+	SYM_DATA_END(name)
+#endif
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _LINUX_LINKAGE_H */
-- 
2.39.2





* [PATCH v2 4/7] ARM: asm: implement CPU_BE/CPU_LE
  2023-05-26  6:37 [PATCH v2 0/7] ARM64: crypto: add Crypto Extensions accelerated SHA implementation Ahmad Fatoum
                   ` (2 preceding siblings ...)
  2023-05-26  6:37 ` [PATCH v2 3/7] include: sync <linux/linkage.h> with Linux Ahmad Fatoum
@ 2023-05-26  6:37 ` Ahmad Fatoum
  2023-05-26  6:37 ` [PATCH v2 5/7] ARM: asm: import Linux adr_l/ldr_l assembler.h definitions Ahmad Fatoum
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Ahmad Fatoum @ 2023-05-26  6:37 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum

This will be required for incoming SHA1/2 ARMv8 Crypto Extensions assembly
routines.

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
---
 arch/arm/include/asm/assembler.h | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index 95c6768de89c..7f9281b9a1a1 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -111,3 +111,22 @@
 	.align	3;				\
 	.long	9999b,9001f;			\
 	.previous
+
+
+/*
+ * Select code when configured for BE.
+ */
+#ifdef CONFIG_CPU_BIG_ENDIAN
+#define CPU_BE(code...) code
+#else
+#define CPU_BE(code...)
+#endif
+
+/*
+ * Select code when configured for LE.
+ */
+#ifdef CONFIG_CPU_BIG_ENDIAN
+#define CPU_LE(code...)
+#else
+#define CPU_LE(code...) code
+#endif
-- 
2.39.2





* [PATCH v2 5/7] ARM: asm: import Linux adr_l/ldr_l assembler.h definitions
  2023-05-26  6:37 [PATCH v2 0/7] ARM64: crypto: add Crypto Extensions accelerated SHA implementation Ahmad Fatoum
                   ` (3 preceding siblings ...)
  2023-05-26  6:37 ` [PATCH v2 4/7] ARM: asm: implement CPU_BE/CPU_LE Ahmad Fatoum
@ 2023-05-26  6:37 ` Ahmad Fatoum
  2023-05-26  6:37 ` [PATCH v2 6/7] crypto: sha: reorder struct sha*_state into Linux order Ahmad Fatoum
  2023-05-26  6:37 ` [PATCH v2 7/7] ARM64: crypto: add Crypto Extensions accelerated SHA implementation Ahmad Fatoum
  6 siblings, 0 replies; 10+ messages in thread
From: Ahmad Fatoum @ 2023-05-26  6:37 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum

These macros will come in handy for loads in assembly routines that
should not generate relocation entries.
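For illustration, on ARM64 these pseudo-ops expand to adrp/add and adrp/ldr pairs, so a PC-relative access needs neither a literal pool entry nor a runtime relocation fix-up (the symbol names in this sketch are hypothetical):

```
	adr_l	x0, my_table		// x0 = &my_table via adrp + add :lo12:
	ldr_l	w1, my_counter, x2	// w1 = my_counter, x2 as scratch
	str_l	w1, my_counter, x2	// store back; tmp register is mandatory
```

This matters for code like the upcoming SHA routines that may run before relocations are applied.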

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
---
 arch/arm/include/asm/assembler.h | 211 +++++++++++++++++++++++++++++++
 1 file changed, 211 insertions(+)

diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index 7f9281b9a1a1..5db0f692eec6 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -130,3 +130,214 @@
 #else
 #define CPU_LE(code...) code
 #endif
+
+#ifdef CONFIG_CPU_64
+/*
+ * Pseudo-ops for PC-relative adr/ldr/str <reg>, <symbol> where
+ * <symbol> is within the range +/- 4 GB of the PC.
+ */
+	/*
+	 * @dst: destination register (64 bit wide)
+	 * @sym: name of the symbol
+	 */
+	.macro	adr_l, dst, sym
+	adrp	\dst, \sym
+	add	\dst, \dst, :lo12:\sym
+	.endm
+
+	/*
+	 * @dst: destination register (32 or 64 bit wide)
+	 * @sym: name of the symbol
+	 * @tmp: optional 64-bit scratch register to be used if <dst> is a
+	 *       32-bit wide register, in which case it cannot be used to hold
+	 *       the address
+	 */
+	.macro	ldr_l, dst, sym, tmp=
+	.ifb	\tmp
+	adrp	\dst, \sym
+	ldr	\dst, [\dst, :lo12:\sym]
+	.else
+	adrp	\tmp, \sym
+	ldr	\dst, [\tmp, :lo12:\sym]
+	.endif
+	.endm
+
+	/*
+	 * @src: source register (32 or 64 bit wide)
+	 * @sym: name of the symbol
+	 * @tmp: mandatory 64-bit scratch register to calculate the address
+	 *       while <src> needs to be preserved.
+	 */
+	.macro	str_l, src, sym, tmp
+	adrp	\tmp, \sym
+	str	\src, [\tmp, :lo12:\sym]
+	.endm
+
+#else
+
+	.macro		__adldst_l, op, reg, sym, tmp, c
+	.if		__LINUX_ARM_ARCH__ < 7
+	ldr\c		\tmp, .La\@
+	.subsection	1
+	.align		2
+.La\@:	.long		\sym - .Lpc\@
+	.previous
+	.else
+	.ifnb		\c
+ THUMB(	ittt		\c			)
+	.endif
+	movw\c		\tmp, #:lower16:\sym - .Lpc\@
+	movt\c		\tmp, #:upper16:\sym - .Lpc\@
+	.endif
+
+#ifndef CONFIG_THUMB2_BAREBOX
+	.set		.Lpc\@, . + 8			// PC bias
+	.ifc		\op, add
+	add\c		\reg, \tmp, pc
+	.else
+	\op\c		\reg, [pc, \tmp]
+	.endif
+#else
+.Lb\@:	add\c		\tmp, \tmp, pc
+	/*
+	 * In Thumb-2 builds, the PC bias depends on whether we are currently
+	 * emitting into a .arm or a .thumb section. The size of the add opcode
+	 * above will be 2 bytes when emitting in Thumb mode and 4 bytes when
+	 * emitting in ARM mode, so let's use this to account for the bias.
+	 */
+	.set		.Lpc\@, . + (. - .Lb\@)
+
+	.ifnc		\op, add
+	\op\c		\reg, [\tmp]
+	.endif
+#endif
+	.endm
+
+	/*
+	 * mov_l - move a constant value or [relocated] address into a register
+	 */
+	.macro		mov_l, dst:req, imm:req, cond
+	.if		__LINUX_ARM_ARCH__ < 7
+	ldr\cond	\dst, =\imm
+	.else
+	movw\cond	\dst, #:lower16:\imm
+	movt\cond	\dst, #:upper16:\imm
+	.endif
+	.endm
+
+	/*
+	 * adr_l - adr pseudo-op with unlimited range
+	 *
+	 * @dst: destination register
+	 * @sym: name of the symbol
+	 * @cond: conditional opcode suffix
+	 */
+	.macro		adr_l, dst:req, sym:req, cond
+	__adldst_l	add, \dst, \sym, \dst, \cond
+	.endm
+
+	/*
+	 * ldr_l - ldr <literal> pseudo-op with unlimited range
+	 *
+	 * @dst: destination register
+	 * @sym: name of the symbol
+	 * @cond: conditional opcode suffix
+	 */
+	.macro		ldr_l, dst:req, sym:req, cond
+	__adldst_l	ldr, \dst, \sym, \dst, \cond
+	.endm
+
+	/*
+	 * str_l - str <literal> pseudo-op with unlimited range
+	 *
+	 * @src: source register
+	 * @sym: name of the symbol
+	 * @tmp: mandatory scratch register
+	 * @cond: conditional opcode suffix
+	 */
+	.macro		str_l, src:req, sym:req, tmp:req, cond
+	__adldst_l	str, \src, \sym, \tmp, \cond
+	.endm
+
+	.macro		__ldst_va, op, reg, tmp, sym, cond, offset
+#if __LINUX_ARM_ARCH__ >= 7 || \
+    (defined(MODULE) && defined(CONFIG_ARM_MODULE_PLTS))
+	mov_l		\tmp, \sym, \cond
+#else
+	/*
+	 * Avoid a literal load, by emitting a sequence of ADD/LDR instructions
+	 * with the appropriate relocations. The combined sequence has a range
+	 * of -/+ 256 MiB, which should be sufficient for the core kernel and
+	 * for modules loaded into the module region.
+	 */
+	.globl		\sym
+	.reloc		.L0_\@, R_ARM_ALU_PC_G0_NC, \sym
+	.reloc		.L1_\@, R_ARM_ALU_PC_G1_NC, \sym
+	.reloc		.L2_\@, R_ARM_LDR_PC_G2, \sym
+.L0_\@: sub\cond	\tmp, pc, #8 - \offset
+.L1_\@: sub\cond	\tmp, \tmp, #4 - \offset
+.L2_\@:
+#endif
+	\op\cond	\reg, [\tmp, #\offset]
+	.endm
+
+	/*
+	 * ldr_va - load a 32-bit word from the virtual address of \sym
+	 */
+	.macro		ldr_va, rd:req, sym:req, cond, tmp, offset=0
+	.ifnb		\tmp
+	__ldst_va	ldr, \rd, \tmp, \sym, \cond, \offset
+	.else
+	__ldst_va	ldr, \rd, \rd, \sym, \cond, \offset
+	.endif
+	.endm
+
+	/*
+	 * str_va - store a 32-bit word to the virtual address of \sym
+	 */
+	.macro		str_va, rn:req, sym:req, tmp:req, cond
+	__ldst_va	str, \rn, \tmp, \sym, \cond, 0
+	.endm
+
+	/*
+	 * ldr_this_cpu - Load a 32-bit word from the per-CPU variable 'sym'
+	 *		  into register 'rd', which may be the stack pointer,
+	 *		  using 't1' and 't2' as general temp registers. These
+	 *		  are permitted to overlap with 'rd' if != sp
+	 */
+	.macro		ldr_this_cpu, rd:req, sym:req, t1:req, t2:req
+	ldr_va		\rd, \sym, tmp=\t1
+	.endm
+
+	/*
+	 * rev_l - byte-swap a 32-bit value
+	 *
+	 * @val: source/destination register
+	 * @tmp: scratch register
+	 */
+	.macro		rev_l, val:req, tmp:req
+	.if		__LINUX_ARM_ARCH__ < 6
+	eor		\tmp, \val, \val, ror #16
+	bic		\tmp, \tmp, #0x00ff0000
+	mov		\val, \val, ror #8
+	eor		\val, \val, \tmp, lsr #8
+	.else
+	rev		\val, \val
+	.endif
+	.endm
+
+	/*
+	 * bl_r - branch and link to register
+	 *
+	 * @dst: target to branch to
+	 * @c: conditional opcode suffix
+	 */
+	.macro		bl_r, dst:req, c
+	.if		__LINUX_ARM_ARCH__ < 6
+	mov\c		lr, pc
+	mov\c		pc, \dst
+	.else
+	blx\c		\dst
+	.endif
+	.endm
+#endif
-- 
2.39.2




^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH v2 6/7] crypto: sha: reorder struct sha*_state into Linux order
  2023-05-26  6:37 [PATCH v2 0/7] ARM64: crypto: add Crypto Extensions accelerated SHA implementation Ahmad Fatoum
                   ` (4 preceding siblings ...)
  2023-05-26  6:37 ` [PATCH v2 5/7] ARM: asm: import Linux adr_l/ldr_l assembler.h definitions Ahmad Fatoum
@ 2023-05-26  6:37 ` Ahmad Fatoum
  2023-05-26  6:37 ` [PATCH v2 7/7] ARM64: crypto: add Crypto Extensions accelerated SHA implementation Ahmad Fatoum
  6 siblings, 0 replies; 10+ messages in thread
From: Ahmad Fatoum @ 2023-05-26  6:37 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum

Incoming kernel code expects state to be the first member and uses
variables exported from C to get the offsets of the two other members.
To allow importing that code as-is from the kernel, follow suit. No
functional change.

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
---
 include/crypto/sha.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/crypto/sha.h b/include/crypto/sha.h
index 17a489bd4a8d..b01d74cd3334 100644
--- a/include/crypto/sha.h
+++ b/include/crypto/sha.h
@@ -67,20 +67,20 @@
 #define SHA512_H7	0x5be0cd19137e2179ULL
 
 struct sha1_state {
-	u64 count;
 	u32 state[SHA1_DIGEST_SIZE / 4];
+	u64 count;
 	u8 buffer[SHA1_BLOCK_SIZE];
 };
 
 struct sha256_state {
-	u64 count;
 	u32 state[SHA256_DIGEST_SIZE / 4];
+	u64 count;
 	u8 buf[SHA256_BLOCK_SIZE];
 };
 
 struct sha512_state {
-	u64 count[2];
 	u64 state[SHA512_DIGEST_SIZE / 8];
+	u64 count[2];
 	u8 buf[SHA512_BLOCK_SIZE];
 };
 
-- 
2.39.2




^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH v2 7/7] ARM64: crypto: add Crypto Extensions accelerated SHA implementation
  2023-05-26  6:37 [PATCH v2 0/7] ARM64: crypto: add Crypto Extensions accelerated SHA implementation Ahmad Fatoum
                   ` (5 preceding siblings ...)
  2023-05-26  6:37 ` [PATCH v2 6/7] crypto: sha: reorder struct sha*_state into Linux order Ahmad Fatoum
@ 2023-05-26  6:37 ` Ahmad Fatoum
  6 siblings, 0 replies; 10+ messages in thread
From: Ahmad Fatoum @ 2023-05-26  6:37 UTC (permalink / raw)
  To: barebox; +Cc: Ahmad Fatoum

This imports the Linux v6.3 state of the ARMv8 Crypto Extensions (CE)
accelerated SHA1/SHA2 routines. This increases the hashing rate roughly tenfold:

  sha1-generic:   digest(7 bytes) = 11750ns digest(4097 bytes) = 59125ns
  sha224-generic: digest(7 bytes) = 12750ns digest(4097 bytes) = 95000ns
  sha256-generic: digest(7 bytes) =  2250ns digest(4097 bytes) = 94875ns
  sha1-ce:        digest(7 bytes) =  2875ns digest(4097 bytes) =  8125ns
  sha224-ce:      digest(7 bytes) =  3125ns digest(4097 bytes) =  7750ns
  sha256-ce:      digest(7 bytes) =   750ns digest(4097 bytes) =  7625ns

This shaves 400ms off a FIT image boot that uses sha256 as the digest for
the images referenced by the selected configuration:

  barebox@imx8mn-old:/ time bootm -d kernel-a
  Dryrun. Aborted.
  time: 998ms

  barebox@imx8mn-new:/ time bootm -d kernel-a
  Dryrun. Aborted.
  time: 601ms

Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
---
 arch/arm/Makefile               |   3 +-
 arch/arm/crypto/Makefile        |   6 ++
 arch/arm/crypto/sha1-ce-core.S  | 149 ++++++++++++++++++++++++++++++
 arch/arm/crypto/sha1-ce-glue.c  |  93 +++++++++++++++++++
 arch/arm/crypto/sha2-ce-core.S  | 156 ++++++++++++++++++++++++++++++++
 arch/arm/crypto/sha2-ce-glue.c  | 121 +++++++++++++++++++++++++
 arch/arm/include/asm/neon.h     |   8 ++
 crypto/Kconfig                  |  21 +++++
 include/crypto/sha.h            |   4 +
 include/crypto/sha1_base.h      | 104 +++++++++++++++++++++
 include/crypto/sha256_base.h    | 129 ++++++++++++++++++++++++++
 include/linux/barebox-wrapper.h |   1 +
 include/linux/string.h          |  20 ++++
 test/self/digest.c              |   2 +
 14 files changed, 816 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm/crypto/sha1-ce-core.S
 create mode 100644 arch/arm/crypto/sha1-ce-glue.c
 create mode 100644 arch/arm/crypto/sha2-ce-core.S
 create mode 100644 arch/arm/crypto/sha2-ce-glue.c
 create mode 100644 arch/arm/include/asm/neon.h
 create mode 100644 include/crypto/sha1_base.h
 create mode 100644 include/crypto/sha256_base.h

diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index 2208b071ac11..35ebc70f44e2 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -195,11 +195,12 @@ endif
 
 common-y += arch/arm/boards/ $(MACH)
 common-y += arch/arm/cpu/
+common-y += arch/arm/crypto/
 
 ifeq ($(CONFIG_CPU_V8), y)
 common-y += arch/arm/lib64/
 else
-common-y += arch/arm/lib32/ arch/arm/crypto/
+common-y += arch/arm/lib32/
 endif
 
 common-$(CONFIG_OFTREE) += arch/arm/dts/
diff --git a/arch/arm/crypto/Makefile b/arch/arm/crypto/Makefile
index 990c0bd609cd..55b3ac0538f6 100644
--- a/arch/arm/crypto/Makefile
+++ b/arch/arm/crypto/Makefile
@@ -9,6 +9,12 @@ obj-$(CONFIG_DIGEST_SHA256_ARM) += sha256-arm.o
 sha1-arm-y	:= sha1-armv4-large.o sha1_glue.o
 sha256-arm-y	:= sha256-core.o sha256_glue.o
 
+obj-$(CONFIG_DIGEST_SHA1_ARM64_CE) += sha1-ce.o
+sha1-ce-y := sha1-ce-glue.o sha1-ce-core.o
+
+obj-$(CONFIG_DIGEST_SHA256_ARM64_CE) += sha2-ce.o
+sha2-ce-y := sha2-ce-glue.o sha2-ce-core.o
+
 quiet_cmd_perl = PERL    $@
       cmd_perl = $(PERL) $(<) > $(@)
 
diff --git a/arch/arm/crypto/sha1-ce-core.S b/arch/arm/crypto/sha1-ce-core.S
new file mode 100644
index 000000000000..dec53c68c814
--- /dev/null
+++ b/arch/arm/crypto/sha1-ce-core.S
@@ -0,0 +1,149 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * sha1-ce-core.S - SHA-1 secure hash using ARMv8 Crypto Extensions
+ *
+ * Copyright (C) 2014 Linaro Ltd <ard.biesheuvel@linaro.org>
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+
+	.text
+	.arch		armv8-a+crypto
+
+	k0		.req	v0
+	k1		.req	v1
+	k2		.req	v2
+	k3		.req	v3
+
+	t0		.req	v4
+	t1		.req	v5
+
+	dga		.req	q6
+	dgav		.req	v6
+	dgb		.req	s7
+	dgbv		.req	v7
+
+	dg0q		.req	q12
+	dg0s		.req	s12
+	dg0v		.req	v12
+	dg1s		.req	s13
+	dg1v		.req	v13
+	dg2s		.req	s14
+
+	.macro		add_only, op, ev, rc, s0, dg1
+	.ifc		\ev, ev
+	add		t1.4s, v\s0\().4s, \rc\().4s
+	sha1h		dg2s, dg0s
+	.ifnb		\dg1
+	sha1\op		dg0q, \dg1, t0.4s
+	.else
+	sha1\op		dg0q, dg1s, t0.4s
+	.endif
+	.else
+	.ifnb		\s0
+	add		t0.4s, v\s0\().4s, \rc\().4s
+	.endif
+	sha1h		dg1s, dg0s
+	sha1\op		dg0q, dg2s, t1.4s
+	.endif
+	.endm
+
+	.macro		add_update, op, ev, rc, s0, s1, s2, s3, dg1
+	sha1su0		v\s0\().4s, v\s1\().4s, v\s2\().4s
+	add_only	\op, \ev, \rc, \s1, \dg1
+	sha1su1		v\s0\().4s, v\s3\().4s
+	.endm
+
+	.macro		loadrc, k, val, tmp
+	movz		\tmp, :abs_g0_nc:\val
+	movk		\tmp, :abs_g1:\val
+	dup		\k, \tmp
+	.endm
+
+	/*
+	 * int sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
+	 *			 int blocks)
+	 */
+SYM_FUNC_START(sha1_ce_transform)
+	/* load round constants */
+	loadrc		k0.4s, 0x5a827999, w6
+	loadrc		k1.4s, 0x6ed9eba1, w6
+	loadrc		k2.4s, 0x8f1bbcdc, w6
+	loadrc		k3.4s, 0xca62c1d6, w6
+
+	/* load state */
+	ld1		{dgav.4s}, [x0]
+	ldr		dgb, [x0, #16]
+
+	/* load sha1_ce_state::finalize */
+	ldr_l		w4, sha1_ce_offsetof_finalize, x4
+	ldr		w4, [x0, x4]
+
+	/* load input */
+0:	ld1		{v8.4s-v11.4s}, [x1], #64
+	sub		w2, w2, #1
+
+CPU_LE(	rev32		v8.16b, v8.16b		)
+CPU_LE(	rev32		v9.16b, v9.16b		)
+CPU_LE(	rev32		v10.16b, v10.16b	)
+CPU_LE(	rev32		v11.16b, v11.16b	)
+
+1:	add		t0.4s, v8.4s, k0.4s
+	mov		dg0v.16b, dgav.16b
+
+	add_update	c, ev, k0,  8,  9, 10, 11, dgb
+	add_update	c, od, k0,  9, 10, 11,  8
+	add_update	c, ev, k0, 10, 11,  8,  9
+	add_update	c, od, k0, 11,  8,  9, 10
+	add_update	c, ev, k1,  8,  9, 10, 11
+
+	add_update	p, od, k1,  9, 10, 11,  8
+	add_update	p, ev, k1, 10, 11,  8,  9
+	add_update	p, od, k1, 11,  8,  9, 10
+	add_update	p, ev, k1,  8,  9, 10, 11
+	add_update	p, od, k2,  9, 10, 11,  8
+
+	add_update	m, ev, k2, 10, 11,  8,  9
+	add_update	m, od, k2, 11,  8,  9, 10
+	add_update	m, ev, k2,  8,  9, 10, 11
+	add_update	m, od, k2,  9, 10, 11,  8
+	add_update	m, ev, k3, 10, 11,  8,  9
+
+	add_update	p, od, k3, 11,  8,  9, 10
+	add_only	p, ev, k3,  9
+	add_only	p, od, k3, 10
+	add_only	p, ev, k3, 11
+	add_only	p, od
+
+	/* update state */
+	add		dgbv.2s, dgbv.2s, dg1v.2s
+	add		dgav.4s, dgav.4s, dg0v.4s
+
+	cbz		w2, 2f
+	b		0b
+
+	/*
+	 * Final block: add padding and total bit count.
+	 * Skip if the input size was not a round multiple of the block size,
+	 * the padding is handled by the C code in that case.
+	 */
+2:	cbz		x4, 3f
+	ldr_l		w4, sha1_ce_offsetof_count, x4
+	ldr		x4, [x0, x4]
+	movi		v9.2d, #0
+	mov		x8, #0x80000000
+	movi		v10.2d, #0
+	ror		x7, x4, #29		// ror(lsl(x4, 3), 32)
+	fmov		d8, x8
+	mov		x4, #0
+	mov		v11.d[0], xzr
+	mov		v11.d[1], x7
+	b		1b
+
+	/* store new state */
+3:	st1		{dgav.4s}, [x0]
+	str		dgb, [x0, #16]
+	mov		w0, w2
+	ret
+SYM_FUNC_END(sha1_ce_transform)
diff --git a/arch/arm/crypto/sha1-ce-glue.c b/arch/arm/crypto/sha1-ce-glue.c
new file mode 100644
index 000000000000..5b49237573fa
--- /dev/null
+++ b/arch/arm/crypto/sha1-ce-glue.c
@@ -0,0 +1,93 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * sha1-ce-glue.c - SHA-1 secure hash using ARMv8 Crypto Extensions
+ *
+ * Copyright (C) 2014 - 2017 Linaro Ltd <ard.biesheuvel@linaro.org>
+ */
+
+#include <common.h>
+#include <digest.h>
+#include <init.h>
+#include <crypto/sha.h>
+#include <crypto/sha1_base.h>
+#include <crypto/internal.h>
+#include <linux/linkage.h>
+#include <asm/byteorder.h>
+#include <asm/neon.h>
+
+MODULE_DESCRIPTION("SHA1 secure hash using ARMv8 Crypto Extensions");
+MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS_CRYPTO("sha1");
+
+struct sha1_ce_state {
+	struct sha1_state	sst;
+	u32			finalize;
+};
+
+extern const u32 sha1_ce_offsetof_count;
+extern const u32 sha1_ce_offsetof_finalize;
+
+asmlinkage int sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
+				 int blocks);
+
+static void __sha1_ce_transform(struct sha1_state *sst, u8 const *src,
+				int blocks)
+{
+	while (blocks) {
+		int rem;
+
+		kernel_neon_begin();
+		rem = sha1_ce_transform(container_of(sst, struct sha1_ce_state,
+						     sst), src, blocks);
+		kernel_neon_end();
+		src += (blocks - rem) * SHA1_BLOCK_SIZE;
+		blocks = rem;
+	}
+}
+
+const u32 sha1_ce_offsetof_count = offsetof(struct sha1_ce_state, sst.count);
+const u32 sha1_ce_offsetof_finalize = offsetof(struct sha1_ce_state, finalize);
+
+static int sha1_ce_update(struct digest *desc, const void *data,
+			  unsigned long len)
+{
+	struct sha1_ce_state *sctx = digest_ctx(desc);
+
+	sctx->finalize = 0;
+	sha1_base_do_update(desc, data, len, __sha1_ce_transform);
+
+	return 0;
+}
+
+static int sha1_ce_final(struct digest *desc, u8 *out)
+{
+	struct sha1_ce_state *sctx = digest_ctx(desc);
+
+	sctx->finalize = 0;
+	sha1_base_do_finalize(desc, __sha1_ce_transform);
+	return sha1_base_finish(desc, out);
+}
+
+static struct digest_algo m = {
+	.base = {
+		.name		=	"sha1",
+		.driver_name	=	"sha1-ce",
+		.priority	=	200,
+		.algo		=	HASH_ALGO_SHA1,
+	},
+
+	.init	=	sha1_base_init,
+	.update	=	sha1_ce_update,
+	.final	=	sha1_ce_final,
+	.digest	=	digest_generic_digest,
+	.verify	=	digest_generic_verify,
+	.length	=	SHA1_DIGEST_SIZE,
+	.ctx_length =	sizeof(struct sha1_ce_state),
+};
+
+static int sha1_ce_mod_init(void)
+{
+	return digest_algo_register(&m);
+}
+coredevice_initcall(sha1_ce_mod_init);
diff --git a/arch/arm/crypto/sha2-ce-core.S b/arch/arm/crypto/sha2-ce-core.S
new file mode 100644
index 000000000000..5a60b13b87a2
--- /dev/null
+++ b/arch/arm/crypto/sha2-ce-core.S
@@ -0,0 +1,156 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * sha2-ce-core.S - core SHA-224/SHA-256 transform using v8 Crypto Extensions
+ *
+ * Copyright (C) 2014 Linaro Ltd <ard.biesheuvel@linaro.org>
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+
+	.text
+	.arch		armv8-a+crypto
+
+	dga		.req	q20
+	dgav		.req	v20
+	dgb		.req	q21
+	dgbv		.req	v21
+
+	t0		.req	v22
+	t1		.req	v23
+
+	dg0q		.req	q24
+	dg0v		.req	v24
+	dg1q		.req	q25
+	dg1v		.req	v25
+	dg2q		.req	q26
+	dg2v		.req	v26
+
+	.macro		add_only, ev, rc, s0
+	mov		dg2v.16b, dg0v.16b
+	.ifeq		\ev
+	add		t1.4s, v\s0\().4s, \rc\().4s
+	sha256h		dg0q, dg1q, t0.4s
+	sha256h2	dg1q, dg2q, t0.4s
+	.else
+	.ifnb		\s0
+	add		t0.4s, v\s0\().4s, \rc\().4s
+	.endif
+	sha256h		dg0q, dg1q, t1.4s
+	sha256h2	dg1q, dg2q, t1.4s
+	.endif
+	.endm
+
+	.macro		add_update, ev, rc, s0, s1, s2, s3
+	sha256su0	v\s0\().4s, v\s1\().4s
+	add_only	\ev, \rc, \s1
+	sha256su1	v\s0\().4s, v\s2\().4s, v\s3\().4s
+	.endm
+
+	/*
+	 * The SHA-256 round constants
+	 */
+	.section	".rodata", "a"
+	.align		4
+.Lsha2_rcon:
+	.word		0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5
+	.word		0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5
+	.word		0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3
+	.word		0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174
+	.word		0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc
+	.word		0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da
+	.word		0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7
+	.word		0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967
+	.word		0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13
+	.word		0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85
+	.word		0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3
+	.word		0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070
+	.word		0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5
+	.word		0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3
+	.word		0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208
+	.word		0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2
+
+	/*
+	 * void sha2_ce_transform(struct sha256_ce_state *sst, u8 const *src,
+	 *			  int blocks)
+	 */
+	.text
+SYM_FUNC_START(sha2_ce_transform)
+	/* load round constants */
+	adr_l		x8, .Lsha2_rcon
+	ld1		{ v0.4s- v3.4s}, [x8], #64
+	ld1		{ v4.4s- v7.4s}, [x8], #64
+	ld1		{ v8.4s-v11.4s}, [x8], #64
+	ld1		{v12.4s-v15.4s}, [x8]
+
+	/* load state */
+	ld1		{dgav.4s, dgbv.4s}, [x0]
+
+	/* load sha256_ce_state::finalize */
+	ldr_l		w4, sha256_ce_offsetof_finalize, x4
+	ldr		w4, [x0, x4]
+
+	/* load input */
+0:	ld1		{v16.4s-v19.4s}, [x1], #64
+	sub		w2, w2, #1
+
+CPU_LE(	rev32		v16.16b, v16.16b	)
+CPU_LE(	rev32		v17.16b, v17.16b	)
+CPU_LE(	rev32		v18.16b, v18.16b	)
+CPU_LE(	rev32		v19.16b, v19.16b	)
+
+1:	add		t0.4s, v16.4s, v0.4s
+	mov		dg0v.16b, dgav.16b
+	mov		dg1v.16b, dgbv.16b
+
+	add_update	0,  v1, 16, 17, 18, 19
+	add_update	1,  v2, 17, 18, 19, 16
+	add_update	0,  v3, 18, 19, 16, 17
+	add_update	1,  v4, 19, 16, 17, 18
+
+	add_update	0,  v5, 16, 17, 18, 19
+	add_update	1,  v6, 17, 18, 19, 16
+	add_update	0,  v7, 18, 19, 16, 17
+	add_update	1,  v8, 19, 16, 17, 18
+
+	add_update	0,  v9, 16, 17, 18, 19
+	add_update	1, v10, 17, 18, 19, 16
+	add_update	0, v11, 18, 19, 16, 17
+	add_update	1, v12, 19, 16, 17, 18
+
+	add_only	0, v13, 17
+	add_only	1, v14, 18
+	add_only	0, v15, 19
+	add_only	1
+
+	/* update state */
+	add		dgav.4s, dgav.4s, dg0v.4s
+	add		dgbv.4s, dgbv.4s, dg1v.4s
+
+	/* handled all input blocks? */
+	cbz		w2, 2f
+	b		0b
+
+	/*
+	 * Final block: add padding and total bit count.
+	 * Skip if the input size was not a round multiple of the block size,
+	 * the padding is handled by the C code in that case.
+	 */
+2:	cbz		x4, 3f
+	ldr_l		w4, sha256_ce_offsetof_count, x4
+	ldr		x4, [x0, x4]
+	movi		v17.2d, #0
+	mov		x8, #0x80000000
+	movi		v18.2d, #0
+	ror		x7, x4, #29		// ror(lsl(x4, 3), 32)
+	fmov		d16, x8
+	mov		x4, #0
+	mov		v19.d[0], xzr
+	mov		v19.d[1], x7
+	b		1b
+
+	/* store new state */
+3:	st1		{dgav.4s, dgbv.4s}, [x0]
+	mov		w0, w2
+	ret
+SYM_FUNC_END(sha2_ce_transform)
diff --git a/arch/arm/crypto/sha2-ce-glue.c b/arch/arm/crypto/sha2-ce-glue.c
new file mode 100644
index 000000000000..88cbc7993dac
--- /dev/null
+++ b/arch/arm/crypto/sha2-ce-glue.c
@@ -0,0 +1,121 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * sha2-ce-glue.c - SHA-224/SHA-256 using ARMv8 Crypto Extensions
+ *
+ * Copyright (C) 2014 - 2017 Linaro Ltd <ard.biesheuvel@linaro.org>
+ */
+
+#include <common.h>
+#include <digest.h>
+#include <init.h>
+#include <crypto/sha.h>
+#include <crypto/sha256_base.h>
+#include <crypto/internal.h>
+#include <linux/linkage.h>
+#include <asm/byteorder.h>
+#include <asm/neon.h>
+
+MODULE_DESCRIPTION("SHA-224/SHA-256 secure hash using ARMv8 Crypto Extensions");
+MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS_CRYPTO("sha224");
+MODULE_ALIAS_CRYPTO("sha256");
+
+struct sha256_ce_state {
+	struct sha256_state	sst;
+	u32			finalize;
+};
+
+extern const u32 sha256_ce_offsetof_count;
+extern const u32 sha256_ce_offsetof_finalize;
+
+asmlinkage int sha2_ce_transform(struct sha256_ce_state *sst, u8 const *src,
+				 int blocks);
+
+static void __sha2_ce_transform(struct sha256_state *sst, u8 const *src,
+				int blocks)
+{
+	while (blocks) {
+		int rem;
+
+		kernel_neon_begin();
+		rem = sha2_ce_transform(container_of(sst, struct sha256_ce_state,
+						     sst), src, blocks);
+		kernel_neon_end();
+		src += (blocks - rem) * SHA256_BLOCK_SIZE;
+		blocks = rem;
+	}
+}
+
+const u32 sha256_ce_offsetof_count = offsetof(struct sha256_ce_state,
+					      sst.count);
+const u32 sha256_ce_offsetof_finalize = offsetof(struct sha256_ce_state,
+						 finalize);
+
+static int sha256_ce_update(struct digest *desc, const void *data,
+			    unsigned long len)
+{
+	struct sha256_ce_state *sctx = digest_ctx(desc);
+
+	sctx->finalize = 0;
+	sha256_base_do_update(desc, data, len, __sha2_ce_transform);
+
+	return 0;
+}
+
+static int sha256_ce_final(struct digest *desc, u8 *out)
+{
+	struct sha256_ce_state *sctx = digest_ctx(desc);
+
+	sctx->finalize = 0;
+	sha256_base_do_finalize(desc, __sha2_ce_transform);
+	return sha256_base_finish(desc, out);
+}
+
+static struct digest_algo sha224 = {
+	.base = {
+		.name		=	"sha224",
+		.driver_name	=	"sha224-ce",
+		.priority	=	200,
+		.algo		=	HASH_ALGO_SHA224,
+	},
+
+	.length	=	SHA224_DIGEST_SIZE,
+	.init	=	sha224_base_init,
+	.update	=	sha256_ce_update,
+	.final	=	sha256_ce_final,
+	.digest	=	digest_generic_digest,
+	.verify	=	digest_generic_verify,
+	.ctx_length =	sizeof(struct sha256_ce_state),
+};
+
+static int sha224_ce_digest_register(void)
+{
+	return digest_algo_register(&sha224);
+}
+coredevice_initcall(sha224_ce_digest_register);
+
+static struct digest_algo sha256 = {
+	.base = {
+		.name		=	"sha256",
+		.driver_name	=	"sha256-ce",
+		.priority	=	200,
+		.algo		=	HASH_ALGO_SHA256,
+	},
+
+	.length	=	SHA256_DIGEST_SIZE,
+	.init	=	sha256_base_init,
+	.update	=	sha256_ce_update,
+	.final	=	sha256_ce_final,
+	.digest	=	digest_generic_digest,
+	.verify	=	digest_generic_verify,
+	.ctx_length =	sizeof(struct sha256_ce_state),
+};
+
+static int sha256_ce_digest_register(void)
+{
+	return digest_algo_register(&sha256);
+}
+coredevice_initcall(sha256_ce_digest_register);
diff --git a/arch/arm/include/asm/neon.h b/arch/arm/include/asm/neon.h
new file mode 100644
index 000000000000..476462e83e80
--- /dev/null
+++ b/arch/arm/include/asm/neon.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __ARM_ASM_NEON_H__
+#define __ARM_ASM_NEON_H__
+
+#define kernel_neon_begin()	((void)0)
+#define kernel_neon_end()	((void)0)
+
+#endif
diff --git a/crypto/Kconfig b/crypto/Kconfig
index f32accb3d090..629f615de1af 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -86,6 +86,27 @@ config DIGEST_SHA256_ARM
 	  SHA-256 secure hash standard (DFIPS 180-2) implemented
 	  using optimized ARM assembler and NEON, when available.
 
+config DIGEST_SHA1_ARM64_CE
+	tristate "SHA-1 digest algorithm (ARMv8 Crypto Extensions)"
+	depends on CPU_V8
+	select HAVE_DIGEST_SHA1
+	help
+	  SHA-1 secure hash algorithm (FIPS 180)
+
+	  Architecture: arm64 using:
+	  - ARMv8 Crypto Extensions
+
+config DIGEST_SHA256_ARM64_CE
+	tristate "SHA-224/256 digest algorithm (ARMv8 Crypto Extensions)"
+	depends on CPU_V8
+	select HAVE_DIGEST_SHA256
+	select HAVE_DIGEST_SHA224
+	help
+	  SHA-224 and SHA-256 secure hash algorithms (FIPS 180)
+
+	  Architecture: arm64 using:
+	  - ARMv8 Crypto Extensions
+
 endif
 
 config CRYPTO_PBKDF2
diff --git a/include/crypto/sha.h b/include/crypto/sha.h
index b01d74cd3334..e23d7cb76692 100644
--- a/include/crypto/sha.h
+++ b/include/crypto/sha.h
@@ -66,6 +66,10 @@
 #define SHA512_H6	0x1f83d9abfb41bd6bULL
 #define SHA512_H7	0x5be0cd19137e2179ULL
 
+/*
+ * State must be first member for compatibility with assembly
+ * code imported from Linux
+ */
 struct sha1_state {
 	u32 state[SHA1_DIGEST_SIZE / 4];
 	u64 count;
diff --git a/include/crypto/sha1_base.h b/include/crypto/sha1_base.h
new file mode 100644
index 000000000000..8e1a5fdcc865
--- /dev/null
+++ b/include/crypto/sha1_base.h
@@ -0,0 +1,104 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * sha1_base.h - core logic for SHA-1 implementations
+ *
+ * Copyright (C) 2015 Linaro Ltd <ard.biesheuvel@linaro.org>
+ */
+
+#ifndef _CRYPTO_SHA1_BASE_H
+#define _CRYPTO_SHA1_BASE_H
+
+#include <digest.h>
+#include <crypto/sha.h>
+#include <linux/string.h>
+
+#include <asm/unaligned.h>
+
+typedef void (sha1_block_fn)(struct sha1_state *sst, u8 const *src, int blocks);
+
+static inline int sha1_base_init(struct digest *desc)
+{
+	struct sha1_state *sctx = digest_ctx(desc);
+
+	*sctx = (struct sha1_state){
+		.state = { SHA1_H0, SHA1_H1, SHA1_H2, SHA1_H3, SHA1_H4 },
+	};
+
+	return 0;
+}
+
+static inline int sha1_base_do_update(struct digest *desc,
+				      const u8 *data,
+				      unsigned int len,
+				      sha1_block_fn *block_fn)
+{
+	struct sha1_state *sctx = digest_ctx(desc);
+	unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+
+	sctx->count += len;
+
+	if (unlikely((partial + len) >= SHA1_BLOCK_SIZE)) {
+		int blocks;
+
+		if (partial) {
+			int p = SHA1_BLOCK_SIZE - partial;
+
+			memcpy(sctx->buffer + partial, data, p);
+			data += p;
+			len -= p;
+
+			block_fn(sctx, sctx->buffer, 1);
+		}
+
+		blocks = len / SHA1_BLOCK_SIZE;
+		len %= SHA1_BLOCK_SIZE;
+
+		if (blocks) {
+			block_fn(sctx, data, blocks);
+			data += blocks * SHA1_BLOCK_SIZE;
+		}
+		partial = 0;
+	}
+	if (len)
+		memcpy(sctx->buffer + partial, data, len);
+
+	return 0;
+}
+
+static inline int sha1_base_do_finalize(struct digest *desc,
+					sha1_block_fn *block_fn)
+{
+	const int bit_offset = SHA1_BLOCK_SIZE - sizeof(__be64);
+	struct sha1_state *sctx = digest_ctx(desc);
+	__be64 *bits = (__be64 *)(sctx->buffer + bit_offset);
+	unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+
+	sctx->buffer[partial++] = 0x80;
+	if (partial > bit_offset) {
+		memset(sctx->buffer + partial, 0x0, SHA1_BLOCK_SIZE - partial);
+		partial = 0;
+
+		block_fn(sctx, sctx->buffer, 1);
+	}
+
+	memset(sctx->buffer + partial, 0x0, bit_offset - partial);
+	*bits = cpu_to_be64(sctx->count << 3);
+	block_fn(sctx, sctx->buffer, 1);
+
+	return 0;
+}
+
+static inline int sha1_base_finish(struct digest *desc, u8 *out)
+{
+	struct sha1_state *sctx = digest_ctx(desc);
+	__be32 *digest = (__be32 *)out;
+	int i;
+
+	for (i = 0; i < SHA1_DIGEST_SIZE / sizeof(__be32); i++)
+		put_unaligned_be32(sctx->state[i], digest++);
+
+	memzero_explicit(sctx, sizeof(*sctx));
+	return 0;
+}
+
+#endif /* _CRYPTO_SHA1_BASE_H */
diff --git a/include/crypto/sha256_base.h b/include/crypto/sha256_base.h
new file mode 100644
index 000000000000..b9e48eb942d3
--- /dev/null
+++ b/include/crypto/sha256_base.h
@@ -0,0 +1,129 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * sha256_base.h - core logic for SHA-256 implementations
+ *
+ * Copyright (C) 2015 Linaro Ltd <ard.biesheuvel@linaro.org>
+ */
+
+#ifndef _CRYPTO_SHA256_BASE_H
+#define _CRYPTO_SHA256_BASE_H
+
+#include <digest.h>
+#include <crypto/sha.h>
+#include <linux/string.h>
+
+#include <asm/unaligned.h>
+
+typedef void (sha256_block_fn)(struct sha256_state *sst, u8 const *src,
+			       int blocks);
+
+static inline int sha224_base_init(struct digest *desc)
+{
+	struct sha256_state *sctx = digest_ctx(desc);
+
+	sctx->state[0] = SHA224_H0;
+	sctx->state[1] = SHA224_H1;
+	sctx->state[2] = SHA224_H2;
+	sctx->state[3] = SHA224_H3;
+	sctx->state[4] = SHA224_H4;
+	sctx->state[5] = SHA224_H5;
+	sctx->state[6] = SHA224_H6;
+	sctx->state[7] = SHA224_H7;
+	sctx->count = 0;
+
+	return 0;
+}
+
+static inline int sha256_base_init(struct digest *desc)
+{
+	struct sha256_state *sctx = digest_ctx(desc);
+
+	sctx->state[0] = SHA256_H0;
+	sctx->state[1] = SHA256_H1;
+	sctx->state[2] = SHA256_H2;
+	sctx->state[3] = SHA256_H3;
+	sctx->state[4] = SHA256_H4;
+	sctx->state[5] = SHA256_H5;
+	sctx->state[6] = SHA256_H6;
+	sctx->state[7] = SHA256_H7;
+	sctx->count = 0;
+
+	return 0;
+}
+
+static inline int sha256_base_do_update(struct digest *desc,
+					const u8 *data,
+					unsigned int len,
+					sha256_block_fn *block_fn)
+{
+	struct sha256_state *sctx = digest_ctx(desc);
+	unsigned int partial = sctx->count % SHA256_BLOCK_SIZE;
+
+	sctx->count += len;
+
+	if (unlikely((partial + len) >= SHA256_BLOCK_SIZE)) {
+		int blocks;
+
+		if (partial) {
+			int p = SHA256_BLOCK_SIZE - partial;
+
+			memcpy(sctx->buf + partial, data, p);
+			data += p;
+			len -= p;
+
+			block_fn(sctx, sctx->buf, 1);
+		}
+
+		blocks = len / SHA256_BLOCK_SIZE;
+		len %= SHA256_BLOCK_SIZE;
+
+		if (blocks) {
+			block_fn(sctx, data, blocks);
+			data += blocks * SHA256_BLOCK_SIZE;
+		}
+		partial = 0;
+	}
+	if (len)
+		memcpy(sctx->buf + partial, data, len);
+
+	return 0;
+}
+
+static inline int sha256_base_do_finalize(struct digest *desc,
+					  sha256_block_fn *block_fn)
+{
+	const int bit_offset = SHA256_BLOCK_SIZE - sizeof(__be64);
+	struct sha256_state *sctx = digest_ctx(desc);
+	__be64 *bits = (__be64 *)(sctx->buf + bit_offset);
+	unsigned int partial = sctx->count % SHA256_BLOCK_SIZE;
+
+	sctx->buf[partial++] = 0x80;
+	if (partial > bit_offset) {
+		memset(sctx->buf + partial, 0x0, SHA256_BLOCK_SIZE - partial);
+		partial = 0;
+
+		block_fn(sctx, sctx->buf, 1);
+	}
+
+	memset(sctx->buf + partial, 0x0, bit_offset - partial);
+	*bits = cpu_to_be64(sctx->count << 3);
+	block_fn(sctx, sctx->buf, 1);
+
+	return 0;
+}
+
+static inline int sha256_base_finish(struct digest *desc, u8 *out)
+{
+	unsigned int digest_size = digest_length(desc);
+	struct sha256_state *sctx = digest_ctx(desc);
+	__be32 *digest = (__be32 *)out;
+	int i;
+
+	for (i = 0; digest_size > 0; i++, digest_size -= sizeof(__be32))
+		put_unaligned_be32(sctx->state[i], digest++);
+
+	memzero_explicit(sctx, sizeof(*sctx));
+	return 0;
+}
+
+#endif /* _CRYPTO_SHA256_BASE_H */
diff --git a/include/linux/barebox-wrapper.h b/include/linux/barebox-wrapper.h
index 28e87cb17316..ed237877fc75 100644
--- a/include/linux/barebox-wrapper.h
+++ b/include/linux/barebox-wrapper.h
@@ -22,6 +22,7 @@ static inline void vfree(const void *addr)
 #define MODULE_ALIAS(x)
 #define MODULE_DEVICE_TABLE(bus, table)
 #define MODULE_ALIAS_DSA_TAG_DRIVER(drv)
+#define MODULE_ALIAS_CRYPTO(alias)
 
 #define __user
 #define __init
diff --git a/include/linux/string.h b/include/linux/string.h
index cd81ab13965b..75c8cf818b39 100644
--- a/include/linux/string.h
+++ b/include/linux/string.h
@@ -113,6 +113,26 @@ extern char *strim(char *);
 
 void *memchr_inv(const void *start, int c, size_t bytes);
 
+/**
+ * memzero_explicit - Fill a region of memory (e.g. sensitive
+ *		      keying data) with 0s.
+ * @s: Pointer to the start of the area.
+ * @count: The size of the area.
+ *
+ * Note: usually using memset() is just fine (!), but in cases
+ * where clearing out _local_ data at the end of a scope is
+ * necessary, memzero_explicit() should be used instead in
+ * order to prevent the compiler from optimising away zeroing.
+ *
+ * memzero_explicit() doesn't need an arch-specific version as
+ * it just invokes the one of memset() implicitly.
+ */
+static inline void memzero_explicit(void *s, size_t count)
+{
+	memset(s, 0, count);
+	barrier_data(s);
+}
+
 /**
  * kbasename - return the last part of a pathname.
  *
diff --git a/test/self/digest.c b/test/self/digest.c
index 769444ad15ce..4cda5b09637b 100644
--- a/test/self/digest.c
+++ b/test/self/digest.c
@@ -141,6 +141,7 @@ static void test_digests_sha12(const char *suffix)
 
 	cond = !strcmp(suffix, "generic") ? IS_ENABLED(CONFIG_DIGEST_SHA224_GENERIC) :
 	       !strcmp(suffix, "asm") ? IS_ENABLED(CONFIG_DIGEST_SHA256_ARM) :
+	       !strcmp(suffix, "ce")  ? IS_ENABLED(CONFIG_DIGEST_SHA256_ARM64_CE) :
 	       IS_ENABLED(CONFIG_HAVE_DIGEST_SHA224);
 
 	test_digest(cond, digest_suffix("sha224", suffix),
@@ -151,6 +152,7 @@ static void test_digests_sha12(const char *suffix)
 
 	cond = !strcmp(suffix, "generic") ? IS_ENABLED(CONFIG_DIGEST_SHA256_GENERIC) :
 	       !strcmp(suffix, "asm") ? IS_ENABLED(CONFIG_DIGEST_SHA256_ARM) :
+	       !strcmp(suffix, "ce")  ? IS_ENABLED(CONFIG_DIGEST_SHA256_ARM64_CE) :
 	       IS_ENABLED(CONFIG_HAVE_DIGEST_SHA256);
 
 	test_digest(cond, digest_suffix("sha256", suffix),
-- 
2.39.2




^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v2 3/7] include: sync <linux/linkage.h> with Linux
  2023-05-26  6:37 ` [PATCH v2 3/7] include: sync <linux/linkage.h> with Linux Ahmad Fatoum
@ 2023-05-26  6:54   ` Sascha Hauer
  2023-05-26  7:45     ` Ahmad Fatoum
  0 siblings, 1 reply; 10+ messages in thread
From: Sascha Hauer @ 2023-05-26  6:54 UTC (permalink / raw)
  To: Ahmad Fatoum; +Cc: barebox

On Fri, May 26, 2023 at 08:37:42AM +0200, Ahmad Fatoum wrote:
> Linux has added new SYM_ macros in the assembly code and deprecated
> ENTRY/PROC. Import the necessary definitions to make kernel code
> porting easier.
> 
> Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
> ---
>  common/Kconfig          |   6 +
>  include/linux/linkage.h | 321 ++++++++++++++++++++++++++++++++++++----
>  2 files changed, 298 insertions(+), 29 deletions(-)
> 
> diff --git a/common/Kconfig b/common/Kconfig
> index ce94718c848a..5346ba5a623c 100644
> --- a/common/Kconfig
> +++ b/common/Kconfig
> @@ -1710,3 +1710,9 @@ config DDR_SPD
>  
>  config HAVE_ARCH_ASAN
>  	bool
> +
> +config ARCH_USE_SYM_ANNOTATIONS
> +	bool
> +	help
> +	  This is selected by architectures that exclusively use the new SYM_
> +	  macros in their assembly code and not the deprecated ENTRY/PROC.
> diff --git a/include/linux/linkage.h b/include/linux/linkage.h
> index efb2d6fa407b..c262c7b36907 100644
> --- a/include/linux/linkage.h
> +++ b/include/linux/linkage.h
> @@ -3,9 +3,16 @@
>  #ifndef _LINUX_LINKAGE_H
>  #define _LINUX_LINKAGE_H
>  
> -#include <linux/compiler.h>
> +#include <linux/compiler_types.h>
> +#include <linux/stringify.h>
> +#include <linux/export.h>

This breaks compilation of zylonite310_defconfig. include/linux/export.h is
not safe for inclusion from assembly files when CONFIG_MODULES is
enabled. We could drop the inclusion here, as linux/export.h doesn't
contain anything useful for assembly code. However, I added the
following instead, so that we don't fall into this trap again the
next time linux/linkage.h is updated.

Sascha

From 36b3448b724db0ed1888d61de7069df0ebcfcdc1 Mon Sep 17 00:00:00 2001
From: Sascha Hauer <s.hauer@pengutronix.de>
Date: Fri, 26 May 2023 08:48:55 +0200
Subject: [PATCH] include: make linux/export.h safe for being included from
 assembly

include/linux/linkage.h is included from assembly files and itself
includes linux/export.h, so add #ifndef __ASSEMBLY__ protection
to that file as well.

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 include/linux/export.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/linux/export.h b/include/linux/export.h
index 88d318bd8a..90f6ada2d5 100644
--- a/include/linux/export.h
+++ b/include/linux/export.h
@@ -2,6 +2,8 @@
 #ifndef _LINUX_EXPORT_H
 #define _LINUX_EXPORT_H
 
+#ifndef __ASSEMBLY__
+
 #define THIS_MODULE	0
 
 #ifdef CONFIG_MODULES
@@ -36,4 +38,6 @@ struct kernel_symbol
 
 #endif /* CONFIG_MODULES */
 
+#endif /* __ASSEMBLY__ */
+
 #endif /* _LINUX_EXPORT_H */
-- 
2.39.2

-- 
Pengutronix e.K.                           |                             |
Steuerwalder Str. 21                       | http://www.pengutronix.de/  |
31137 Hildesheim, Germany                  | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |




* Re: [PATCH v2 3/7] include: sync <linux/linkage.h> with Linux
  2023-05-26  6:54   ` Sascha Hauer
@ 2023-05-26  7:45     ` Ahmad Fatoum
  0 siblings, 0 replies; 10+ messages in thread
From: Ahmad Fatoum @ 2023-05-26  7:45 UTC (permalink / raw)
  To: Sascha Hauer; +Cc: barebox

On 26.05.23 08:54, Sascha Hauer wrote:
> On Fri, May 26, 2023 at 08:37:42AM +0200, Ahmad Fatoum wrote:
>> Linux has added new SYM_ macros in the assembly code and deprecated
>> ENTRY/PROC. Import the necessary definitions to make kernel code
>> porting easier.
>>
>> Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
>> ---
>>  common/Kconfig          |   6 +
>>  include/linux/linkage.h | 321 ++++++++++++++++++++++++++++++++++++----
>>  2 files changed, 298 insertions(+), 29 deletions(-)
>>
>> diff --git a/common/Kconfig b/common/Kconfig
>> index ce94718c848a..5346ba5a623c 100644
>> --- a/common/Kconfig
>> +++ b/common/Kconfig
>> @@ -1710,3 +1710,9 @@ config DDR_SPD
>>  
>>  config HAVE_ARCH_ASAN
>>  	bool
>> +
>> +config ARCH_USE_SYM_ANNOTATIONS
>> +	bool
>> +	help
>> +	  This is selected by architectures that exclusively use the new SYM_
>> +	  macros in their assembly code and not the deprecated ENTRY/PROC.
>> diff --git a/include/linux/linkage.h b/include/linux/linkage.h
>> index efb2d6fa407b..c262c7b36907 100644
>> --- a/include/linux/linkage.h
>> +++ b/include/linux/linkage.h
>> @@ -3,9 +3,16 @@
>>  #ifndef _LINUX_LINKAGE_H
>>  #define _LINUX_LINKAGE_H
>>  
>> -#include <linux/compiler.h>
>> +#include <linux/compiler_types.h>
>> +#include <linux/stringify.h>
>> +#include <linux/export.h>
> 
> This breaks compilation of zylonite310_defconfig. include/linux/export.h is
> not safe for inclusion from assembly files when CONFIG_MODULES is
> enabled. We could drop the inclusion here, as linux/export.h doesn't
> contain anything useful for assembly code. However, I added the
> following instead, so that we don't fall into this trap again the
> next time linux/linkage.h is updated.
> 
> Sascha
> 
> From 36b3448b724db0ed1888d61de7069df0ebcfcdc1 Mon Sep 17 00:00:00 2001
> From: Sascha Hauer <s.hauer@pengutronix.de>
> Date: Fri, 26 May 2023 08:48:55 +0200
> Subject: [PATCH] include: make linux/export.h safe for being included from
>  assembly
> 
> include/linux/linkage.h is included from assembly files and itself
> includes linux/export.h, so add #ifndef __ASSEMBLY__ protection
> to that file as well.
> 
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>

Looks good to me. Thanks!

> ---
>  include/linux/export.h | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/include/linux/export.h b/include/linux/export.h
> index 88d318bd8a..90f6ada2d5 100644
> --- a/include/linux/export.h
> +++ b/include/linux/export.h
> @@ -2,6 +2,8 @@
>  #ifndef _LINUX_EXPORT_H
>  #define _LINUX_EXPORT_H
>  
> +#ifndef __ASSEMBLY__
> +
>  #define THIS_MODULE	0
>  
>  #ifdef CONFIG_MODULES
> @@ -36,4 +38,6 @@ struct kernel_symbol
>  
>  #endif /* CONFIG_MODULES */
>  
> +#endif /* __ASSEMBLY__ */
> +
>  #endif /* _LINUX_EXPORT_H */

-- 
Pengutronix e.K.                           |                             |
Steuerwalder Str. 21                       | http://www.pengutronix.de/  |
31137 Hildesheim, Germany                  | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |





end of thread, other threads:[~2023-05-26  7:46 UTC | newest]

Thread overview: 10+ messages
2023-05-26  6:37 [PATCH v2 0/7] ARM64: crypto: add Crypto Extensions accelerated SHA implementation Ahmad Fatoum
2023-05-26  6:37 ` [PATCH v2 1/7] crypto: digest: match driver name if no algo name matches Ahmad Fatoum
2023-05-26  6:37 ` [PATCH v2 2/7] test: self: add digest test Ahmad Fatoum
2023-05-26  6:37 ` [PATCH v2 3/7] include: sync <linux/linkage.h> with Linux Ahmad Fatoum
2023-05-26  6:54   ` Sascha Hauer
2023-05-26  7:45     ` Ahmad Fatoum
2023-05-26  6:37 ` [PATCH v2 4/7] ARM: asm: implement CPU_BE/CPU_LE Ahmad Fatoum
2023-05-26  6:37 ` [PATCH v2 5/7] ARM: asm: import Linux adr_l/ldr_l assembler.h definitions Ahmad Fatoum
2023-05-26  6:37 ` [PATCH v2 6/7] crypto: sha: reorder struct sha*_state into Linux order Ahmad Fatoum
2023-05-26  6:37 ` [PATCH v2 7/7] ARM64: crypto: add Crypto Extensions accelerated SHA implementation Ahmad Fatoum
