* (no subject) @ 2016-06-14  7:06 Raphael Poggi

From: Raphael Poggi @ 2016-06-14  7:06 UTC
To: barebox

Changes since v1:
	PATCH 2/12: remove hunk which belongs to the patch adding mach-qemu
	PATCH 3/12: remove unused files
	PATCH 4/12: create lowlevel64
	PATCH 11/12: create pgtables64 (nothing in common with the arm32 version)
	PATCH 12/12: rename "mach-virt" => "mach-qemu"
		     rename board "qemu_virt64"
		     remove board env files

Hello,

This patch series introduces basic support for arm64. The arm64 code is
merged into the current arch/arm directory. I tried to be iterative in the
merge process and to find correct solutions for handling both architectures
in the places where they meet.

I tested the patch series by compiling the arm64 virt machine and the arm32
vexpress-a9, and ran both in qemu; everything seems to work.

Thanks,
Raphaël

 arch/arm/Kconfig                           |  28 ++
 arch/arm/Makefile                          |  30 +-
 arch/arm/boards/Makefile                   |   1 +
 arch/arm/boards/qemu-virt64/Kconfig        |   8 +
 arch/arm/boards/qemu-virt64/Makefile       |   1 +
 arch/arm/boards/qemu-virt64/init.c         |  67 ++++
 arch/arm/configs/qemu_virt64_defconfig     |  55 +++
 arch/arm/cpu/Kconfig                       |  29 +-
 arch/arm/cpu/Makefile                      |  26 +-
 arch/arm/cpu/cache-armv8.S                 | 168 +++++++++
 arch/arm/cpu/cache.c                       |  19 +
 arch/arm/cpu/cpu.c                         |   5 +
 arch/arm/cpu/cpuinfo.c                     |  58 ++-
 arch/arm/cpu/exceptions_64.S               | 127 +++++++
 arch/arm/cpu/interrupts.c                  |  47 +++
 arch/arm/cpu/lowlevel_64.S                 |  40 ++
 arch/arm/cpu/mmu.h                         |  54 +++
 arch/arm/cpu/mmu_64.c                      | 333 +++++++++++++++++
 arch/arm/cpu/start.c                       |   2 +
 arch/arm/include/asm/bitops.h              |   5 +
 arch/arm/include/asm/cache.h               |   9 +
 arch/arm/include/asm/mmu.h                 |  14 +-
 arch/arm/include/asm/pgtable64.h           | 140 +++++++
 arch/arm/include/asm/system.h              |  46 ++-
 arch/arm/include/asm/system_info.h         |  38 ++
 arch/arm/lib64/Makefile                    |  10 +
 arch/arm/lib64/armlinux.c                  | 275 ++++++++++++++
 arch/arm/lib64/asm-offsets.c               |  16 +
 arch/arm/lib64/barebox.lds.S               | 125 +++++++
 arch/arm/lib64/bootm.c                     | 572 +++++++++++++++++++++++++++++
 arch/arm/lib64/copy_template.S             | 192 ++++++++++
 arch/arm/lib64/div0.c                      |  27 ++
 arch/arm/lib64/memcpy.S                    |  74 ++++
 arch/arm/lib64/memset.S                    | 215 +++++++++++
 arch/arm/lib64/module.c                    |  98 +++++
 arch/arm/mach-qemu/Kconfig                 |  18 +
 arch/arm/mach-qemu/Makefile                |   2 +
 arch/arm/mach-qemu/include/mach/debug_ll.h |  24 ++
 arch/arm/mach-qemu/include/mach/devices.h  |  13 +
 arch/arm/mach-qemu/virt_devices.c          |  30 ++
 arch/arm/mach-qemu/virt_lowlevel.c         |  19 +
 41 files changed, 3044 insertions(+), 16 deletions(-)

_______________________________________________
barebox mailing list
barebox@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/barebox
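The cover letter mentions testing the series by running the arm64 virt machine under qemu. A hedged sketch of what that workflow might look like: the defconfig name comes from the diffstat above, but the cross-compiler triplet, memory size, CPU model, and the output image name are assumptions, not taken from the series.

```shell
# Assumed workflow -- qemu_virt64_defconfig is real (see diffstat);
# the toolchain triplet and the "barebox" output name are guesses.
make ARCH=arm CROSS_COMPILE=aarch64-linux-gnu- qemu_virt64_defconfig
make ARCH=arm CROSS_COMPILE=aarch64-linux-gnu-

# Boot the result on qemu's generic "virt" machine, console on stdio.
qemu-system-aarch64 -M virt -cpu cortex-a57 -m 1024 -nographic \
	-kernel barebox
```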
* [PATCH v2 01/12] arm: add armv8 Kconfig entries
  2016-06-14  7:06 ` Raphael Poggi

From: Raphael Poggi @ 2016-06-14  7:06 UTC
To: barebox; +Cc: Raphael Poggi

Signed-off-by: Raphael Poggi <poggi.raph@gmail.com>
---
 arch/arm/Kconfig     | 23 +++++++++++++++++++++++
 arch/arm/cpu/Kconfig | 29 ++++++++++++++++++++++++++++-
 2 files changed, 51 insertions(+), 1 deletion(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 1fc887b..986fdaa 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -315,6 +315,29 @@ config ARM_BOARD_APPEND_ATAG
 
 endmenu
 
+choice
+	prompt "Barebox code model"
+	help
+	  You should only select this option if you have a workload that
+	  actually benefits from 64-bit processing or if your machine has
+	  large memory. You will only be presented a single option in this
+	  menu if your system does not support both 32-bit and 64-bit modes.
+
+config 32BIT
+	bool "32-bit barebox"
+	depends on CPU_SUPPORTS_32BIT_KERNEL && SYS_SUPPORTS_32BIT_KERNEL
+	help
+	  Select this option if you want to build a 32-bit barebox.
+
+config 64BIT
+	bool "64-bit barebox"
+	depends on CPU_SUPPORTS_64BIT_KERNEL && SYS_SUPPORTS_64BIT_KERNEL
+	select ARCH_DMA_ADDR_T_64BIT
+	help
+	  Select this option if you want to build a 64-bit barebox.
+
+endchoice
+
 menu "ARM specific settings"
 
 config ARM_OPTIMZED_STRING_FUNCTIONS
diff --git a/arch/arm/cpu/Kconfig b/arch/arm/cpu/Kconfig
index 4f5d9b6..fd327a8 100644
--- a/arch/arm/cpu/Kconfig
+++ b/arch/arm/cpu/Kconfig
@@ -2,7 +2,9 @@ comment "Processor Type"
 
 config CPU_32
 	bool
-	default y
+
+config CPU_64
+	bool
 
 # Select CPU types depending on the architecture selected. This selects
 # which CPUs we support in the kernel image, and the compiler instruction
@@ -69,6 +71,12 @@ config CPU_V7
 	bool
 	select CPU_32v7
 
+# ARMv8
+config CPU_V8
+	bool
+	select CPU_64v8
+	select CPU_SUPPORTS_64BIT_KERNEL
+
 config CPU_XSC3
 	bool
 	select CPU_32v4T
@@ -84,15 +92,23 @@ config CPU_XSCALE
 # This defines the compiler instruction set which depends on the machine type.
 config CPU_32v4T
 	bool
+	select CPU_32
 
 config CPU_32v5
 	bool
+	select CPU_32
 
 config CPU_32v6
 	bool
+	select CPU_32
 
 config CPU_32v7
 	bool
+	select CPU_32
+
+config CPU_64v8
+	bool
+	select CPU_64
 
 comment "processor features"
 
@@ -124,3 +140,14 @@ config CACHE_L2X0
 	bool "Enable L2x0 PrimeCell"
 	depends on MMU && ARCH_HAS_L2X0
 
+config SYS_SUPPORTS_32BIT_KERNEL
+	bool
+
+config SYS_SUPPORTS_64BIT_KERNEL
+	bool
+
+config CPU_SUPPORTS_32BIT_KERNEL
+	bool
+
+config CPU_SUPPORTS_64BIT_KERNEL
+	bool
-- 
2.1.0
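The SYS_SUPPORTS_* and CPU_SUPPORTS_* symbols added here are deliberately left unselected by this patch; a platform is expected to opt in so that the 32BIT/64BIT choice becomes visible. A hypothetical machine entry (the board name below is invented for illustration and is not part of the series) would wire them up like this — CPU_V8 pulls in CPU_64v8, which pulls in CPU_64, and CPU_SUPPORTS_64BIT_KERNEL comes along via CPU_V8:

```kconfig
# Hypothetical platform entry, illustrating how the new symbols chain.
config ARCH_MY_ARMV8_BOARD
	bool "My ARMv8 board (example)"
	select CPU_V8
	select SYS_SUPPORTS_64BIT_KERNEL
```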
* Re: [PATCH v2 01/12] arm: add armv8 Kconfig entries
  2016-06-15  6:33 ` Sascha Hauer

From: Sascha Hauer @ 2016-06-15  6:33 UTC
To: Raphael Poggi; +Cc: barebox

Hi Raphael,

On Tue, Jun 14, 2016 at 09:06:35AM +0200, Raphael Poggi wrote:
> Signed-off-by: Raphael Poggi <poggi.raph@gmail.com>
> ---
>  arch/arm/Kconfig     | 23 +++++++++++++++++++++++
>  arch/arm/cpu/Kconfig | 29 ++++++++++++++++++++++++++++-
>  2 files changed, 51 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index 1fc887b..986fdaa 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -315,6 +315,29 @@ config ARM_BOARD_APPEND_ATAG
>  
>  endmenu
>  
> +choice
> +	prompt "Barebox code model"
> +	help
> +	  You should only select this option if you have a workload that
> +	  actually benefits from 64-bit processing or if your machine has
> +	  large memory. You will only be presented a single option in this
> +	  menu if your system does not support both 32-bit and 64-bit modes.
> +
> +config 32BIT
> +	bool "32-bit barebox"
> +	depends on CPU_SUPPORTS_32BIT_KERNEL && SYS_SUPPORTS_32BIT_KERNEL
> +	help
> +	  Select this option if you want to build a 32-bit barebox.
> +
> +config 64BIT
> +	bool "64-bit barebox"
> +	depends on CPU_SUPPORTS_64BIT_KERNEL && SYS_SUPPORTS_64BIT_KERNEL
> +	select ARCH_DMA_ADDR_T_64BIT
> +	help
> +	  Select this option if you want to build a 64-bit barebox.
> +
> +endchoice
> +
>  menu "ARM specific settings"
>  
>  config ARM_OPTIMZED_STRING_FUNCTIONS
> diff --git a/arch/arm/cpu/Kconfig b/arch/arm/cpu/Kconfig

arm64 needs 64bit pointers. You could merge the following into this patch
to make resource_size_t 64 bits wide and to get rid of the "warning: cast
from pointer to integer of different size [-Wpointer-to-int-cast]"
warnings.

Sascha

From 599547f4054ca715f66a83bf49dc9293e3cc0af0 Mon Sep 17 00:00:00 2001
From: Sascha Hauer <s.hauer@pengutronix.de>
Date: Wed, 15 Jun 2016 08:29:51 +0200
Subject: [PATCH] arm64: select PHYS_ADDR_T_64BIT

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 arch/arm/cpu/Kconfig | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm/cpu/Kconfig b/arch/arm/cpu/Kconfig
index fd327a8..c90501e 100644
--- a/arch/arm/cpu/Kconfig
+++ b/arch/arm/cpu/Kconfig
@@ -1,9 +1,13 @@
 comment "Processor Type"
 
+config PHYS_ADDR_T_64BIT
+	bool
+
 config CPU_32
 	bool
 
 config CPU_64
+	select PHYS_ADDR_T_64BIT
 	bool
 
 # Select CPU types depending on the architecture selected. This selects
-- 
2.8.1

-- 
Pengutronix e.K.                           |                             |
Industrial Linux Solutions                 | http://www.pengutronix.de/  |
Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |
* Re: [PATCH v2 01/12] arm: add armv8 Kconfig entries
  2016-06-23 14:43 ` Raphaël Poggi

From: Raphaël Poggi @ 2016-06-23 14:43 UTC
To: Sascha Hauer; +Cc: barebox

2016-06-15 8:33 GMT+02:00 Sascha Hauer <s.hauer@pengutronix.de>:
> Hi Raphael,
>
> On Tue, Jun 14, 2016 at 09:06:35AM +0200, Raphael Poggi wrote:
> [...]
>
> arm64 needs 64bit pointers. You could merge the following to this patch
> to make resource_size_t 64bit wide and to get rid of the "warning: cast
> from pointer to integer of different size [-Wpointer-to-int-cast]"
> warnings.

Ok, thanks

> [...]
* [PATCH v2 02/12] arm: Makefile: rework makefile to handle armv8
  2016-06-14  7:06 ` Raphael Poggi

From: Raphael Poggi @ 2016-06-14  7:06 UTC
To: barebox; +Cc: Raphael Poggi

Signed-off-by: Raphael Poggi <poggi.raph@gmail.com>
---
 arch/arm/Makefile | 29 +++++++++++++++++++++++++++--
 1 file changed, 27 insertions(+), 2 deletions(-)

diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index 5ccdb83..2743d96 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -1,7 +1,11 @@
 CPPFLAGS += -D__ARM__ -fno-strict-aliasing
 # Explicitly specifiy 32-bit ARM ISA since toolchain default can be -mthumb:
+ifeq ($(CONFIG_CPU_V8),y)
+CPPFLAGS +=$(call cc-option,-maarch64,)
+else
 CPPFLAGS +=$(call cc-option,-marm,)
+endif
 
 ifeq ($(CONFIG_CPU_BIG_ENDIAN),y)
 CPPFLAGS += -mbig-endian
@@ -17,13 +21,16 @@ endif
 # at least some of the code would be executed with MMU off, lets be
 # conservative and instruct the compiler not to generate any unaligned
 # accesses
+ifeq ($(CONFIG_CPU_V8),n)
 CFLAGS += -mno-unaligned-access
+endif
 
 # This selects which instruction set is used.
 # Note that GCC does not numerically define an architecture version
 # macro, but instead defines a whole series of macros which makes
 # testing for a specific architecture or later rather impossible.
+arch-$(CONFIG_CPU_64v8)		:= -D__LINUX_ARM_ARCH__=8 $(call cc-option,-march=armv8-a)
 arch-$(CONFIG_CPU_32v7)		:=-D__LINUX_ARM_ARCH__=7 $(call cc-option,-march=armv7-a,-march=armv5t -Wa$(comma)-march=armv7-a)
 arch-$(CONFIG_CPU_32v6)		:=-D__LINUX_ARM_ARCH__=6 $(call cc-option,-march=armv6,-march=armv5t -Wa$(comma)-march=armv6)
 arch-$(CONFIG_CPU_32v5)		:=-D__LINUX_ARM_ARCH__=5 $(call cc-option,-march=armv5te,-march=armv4t)
@@ -34,11 +41,15 @@ tune-$(CONFIG_CPU_ARM920T)	:=-mtune=arm9tdmi
 tune-$(CONFIG_CPU_ARM926T)	:=-mtune=arm9tdmi
 tune-$(CONFIG_CPU_XSCALE)	:=$(call cc-option,-mtune=xscale,-mtune=strongarm110) -Wa,-mcpu=xscale
 
+ifeq ($(CONFIG_CPU_V8), y)
+CFLAGS_ABI	:=-mabi=lp64
+else
 ifeq ($(CONFIG_AEABI),y)
 CFLAGS_ABI	:=-mabi=aapcs-linux -mno-thumb-interwork
 else
 CFLAGS_ABI	:=$(call cc-option,-mapcs-32,-mabi=apcs-gnu) $(call cc-option,-mno-thumb-interwork,)
 endif
+endif
 
 ifeq ($(CONFIG_ARM_UNWIND),y)
 CFLAGS_ABI	+=-funwind-tables
@@ -51,8 +62,13 @@ CFLAGS_THUMB2	:=-mthumb $(AFLAGS_AUTOIT) $(AFLAGS_NOWARN)
 AFLAGS_THUMB2	:=$(CFLAGS_THUMB2) -Wa$(comma)-mthumb
 endif
 
+ifeq ($(CONFIG_CPU_V8), y)
+CPPFLAGS += $(CFLAGS_ABI) $(arch-y) $(tune-y)
+AFLAGS   += -include asm/unified.h
+else
 CPPFLAGS += $(CFLAGS_ABI) $(arch-y) $(tune-y) -msoft-float $(CFLAGS_THUMB2)
 AFLAGS   += -include asm/unified.h -msoft-float $(AFLAGS_THUMB2)
+endif
 
 # Machine directory name. This list is sorted alphanumerically
 # by CONFIG_* macro name.
@@ -275,12 +291,21 @@ MACH :=
 endif
 
 common-y += $(BOARD) arch/arm/boards/ $(MACH)
-common-y += arch/arm/lib/ arch/arm/cpu/
-common-y += arch/arm/crypto/
+common-y += arch/arm/cpu/
+
+ifeq ($(CONFIG_CPU_V8), y)
+common-y += arch/arm/lib64/
+else
+common-y += arch/arm/lib/ arch/arm/crypto/
+endif
 
 common-$(CONFIG_OFTREE) += arch/arm/dts/
 
+ifeq ($(CONFIG_CPU_V8), y)
+lds-y	:= arch/arm/lib64/barebox.lds
+else
 lds-y	:= arch/arm/lib/barebox.lds
+endif
 
 common- += $(patsubst %,arch/arm/boards/%/,$(board-))
-- 
2.1.0
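Most of the flag logic in this patch leans on Kbuild's cc-option helper, which quietly drops a flag the configured compiler does not accept. That behaviour matters here: for example gcc has no `-maarch64` option, so on an aarch64 toolchain the `$(call cc-option,-maarch64,)` line simply evaluates to the empty fallback and does no harm. A simplified stand-in for the helper (the real one lives in scripts/Kbuild.include; this sketch assumes a POSIX shell):

```make
# Simplified stand-in for Kbuild's cc-option: emit $(1) if the compiler
# accepts it when compiling an empty translation unit, else emit $(2).
cc-option = $(shell if $(CC) $(1) -S -o /dev/null -x c /dev/null \
	> /dev/null 2>&1; then echo "$(1)"; else echo "$(2)"; fi)

# Usage mirroring the patch: prefer -march=armv8-a, with no fallback.
arch-y := -D__LINUX_ARM_ARCH__=8 $(call cc-option,-march=armv8-a,)
```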
* [PATCH v2 03/12] arm: introduce lib64 for arm64 related stuff
  2016-06-14  7:06 ` Raphael Poggi

From: Raphael Poggi @ 2016-06-14  7:06 UTC
To: barebox; +Cc: Raphael Poggi

Signed-off-by: Raphael Poggi <poggi.raph@gmail.com>
---
 arch/arm/lib64/Makefile        |  10 +
 arch/arm/lib64/armlinux.c      | 275 ++++++++++++++
 arch/arm/lib64/asm-offsets.c   |  16 ++
 arch/arm/lib64/barebox.lds.S   | 125 +++++++
 arch/arm/lib64/bootm.c         | 572 +++++++++++++++++++++++++++++
 arch/arm/lib64/copy_template.S | 192 ++++++++++
 arch/arm/lib64/div0.c          |  27 ++
 arch/arm/lib64/memcpy.S        |  74 ++++
 arch/arm/lib64/memset.S        | 215 ++++++++++
 arch/arm/lib64/module.c        |  98 +++++
 10 files changed, 1604 insertions(+)
 create mode 100644 arch/arm/lib64/Makefile
 create mode 100644 arch/arm/lib64/armlinux.c
 create mode 100644 arch/arm/lib64/asm-offsets.c
 create mode 100644 arch/arm/lib64/barebox.lds.S
 create mode 100644 arch/arm/lib64/bootm.c
 create mode 100644 arch/arm/lib64/copy_template.S
 create mode 100644 arch/arm/lib64/div0.c
 create mode 100644 arch/arm/lib64/memcpy.S
 create mode 100644 arch/arm/lib64/memset.S
 create mode 100644 arch/arm/lib64/module.c

diff --git a/arch/arm/lib64/Makefile b/arch/arm/lib64/Makefile
new file mode 100644
index 0000000..a424293
--- /dev/null
+++ b/arch/arm/lib64/Makefile
@@ -0,0 +1,10 @@
+obj-$(CONFIG_ARM_LINUX) += armlinux.o
+obj-$(CONFIG_BOOTM) += bootm.o
+obj-y += div0.o
+obj-$(CONFIG_ARM_OPTIMZED_STRING_FUNCTIONS) += memcpy.o
+obj-$(CONFIG_ARM_OPTIMZED_STRING_FUNCTIONS) += memset.o
+extra-y += barebox.lds
+
+pbl-y += lib1funcs.o
+pbl-y += ashldi3.o
+pbl-y += div0.o diff --git a/arch/arm/lib64/armlinux.c b/arch/arm/lib64/armlinux.c new file mode 100644 index 0000000..21a2292 --- /dev/null +++ b/arch/arm/lib64/armlinux.c @@ -0,0 +1,275 @@ +/* + * (C) Copyright 2002 + * Sysgo Real-Time Solutions, GmbH <www.elinos.com> + * Marius Groeger <mgroeger@sysgo.de> + * + * Copyright (C) 2001 Erik Mouw (J.A.K.Mouw@its.tudelft.nl) + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#include <boot.h> +#include <common.h> +#include <command.h> +#include <driver.h> +#include <environment.h> +#include <image.h> +#include <init.h> +#include <fs.h> +#include <linux/list.h> +#include <xfuncs.h> +#include <malloc.h> +#include <fcntl.h> +#include <errno.h> +#include <memory.h> +#include <of.h> +#include <magicvar.h> + +#include <asm/byteorder.h> +#include <asm/setup.h> +#include <asm/barebox-arm.h> +#include <asm/armlinux.h> +#include <asm/system.h> + +static struct tag *params; +static void *armlinux_bootparams = NULL; + +static int armlinux_architecture; +static u32 armlinux_system_rev; +static u64 armlinux_system_serial; + +BAREBOX_MAGICVAR(armlinux_architecture, "ARM machine ID"); +BAREBOX_MAGICVAR(armlinux_system_rev, "ARM system revision"); +BAREBOX_MAGICVAR(armlinux_system_serial, "ARM system serial"); + +void armlinux_set_architecture(int architecture) +{ + export_env_ull("armlinux_architecture", architecture); + armlinux_architecture = architecture; +} + +int armlinux_get_architecture(void) +{ + getenv_uint("armlinux_architecture", &armlinux_architecture); + + 
return armlinux_architecture; +} + +void armlinux_set_revision(unsigned int rev) +{ + export_env_ull("armlinux_system_rev", rev); + armlinux_system_rev = rev; +} + +unsigned int armlinux_get_revision(void) +{ + getenv_uint("armlinux_system_rev", &armlinux_system_rev); + + return armlinux_system_rev; +} + +void armlinux_set_serial(u64 serial) +{ + export_env_ull("armlinux_system_serial", serial); + armlinux_system_serial = serial; +} + +u64 armlinux_get_serial(void) +{ + getenv_ull("armlinux_system_serial", &armlinux_system_serial); + + return armlinux_system_serial; +} + +void armlinux_set_bootparams(void *params) +{ + armlinux_bootparams = params; +} + +static struct tag *armlinux_get_bootparams(void) +{ + struct memory_bank *mem; + + if (armlinux_bootparams) + return armlinux_bootparams; + + for_each_memory_bank(mem) + return (void *)mem->start + 0x100; + + BUG(); +} + +#ifdef CONFIG_ARM_BOARD_APPEND_ATAG +static struct tag *(*atag_appender)(struct tag *); +void armlinux_set_atag_appender(struct tag *(*func)(struct tag *)) +{ + atag_appender = func; +} +#endif + +static void setup_start_tag(void) +{ + params = armlinux_get_bootparams(); + + params->hdr.tag = ATAG_CORE; + params->hdr.size = tag_size(tag_core); + + params->u.core.flags = 0; + params->u.core.pagesize = 0; + params->u.core.rootdev = 0; + + params = tag_next(params); +} + +static void setup_memory_tags(void) +{ + struct memory_bank *bank; + + for_each_memory_bank(bank) { + params->hdr.tag = ATAG_MEM; + params->hdr.size = tag_size(tag_mem32); + + params->u.mem.start = bank->start; + params->u.mem.size = bank->size; + + params = tag_next(params); + } +} + +static void setup_commandline_tag(const char *commandline, int swap) +{ + const char *p; + size_t words; + + if (!commandline) + return; + + /* eat leading white space */ + for (p = commandline; *p == ' '; p++) ; + + /* + * skip non-existent command lines so the kernel will still + * use its default command line. 
+ */ + if (*p == '\0') + return; + + words = (strlen(p) + 1 /* NUL */ + 3 /* round up */) >> 2; + params->hdr.tag = ATAG_CMDLINE; + params->hdr.size = (sizeof(struct tag_header) >> 2) + words; + + strcpy(params->u.cmdline.cmdline, p); + +#ifdef CONFIG_BOOT_ENDIANNESS_SWITCH + if (swap) { + u32 *cmd = (u32 *)params->u.cmdline.cmdline; + while (words--) + cmd[words] = swab32(cmd[words]); + } +#endif + + params = tag_next(params); +} + +static void setup_revision_tag(void) +{ + u32 system_rev = armlinux_get_revision(); + + if (system_rev) { + params->hdr.tag = ATAG_REVISION; + params->hdr.size = tag_size(tag_revision); + + params->u.revision.rev = system_rev; + + params = tag_next(params); + } +} + +static void setup_serial_tag(void) +{ + u64 system_serial = armlinux_get_serial(); + + if (system_serial) { + params->hdr.tag = ATAG_SERIAL; + params->hdr.size = tag_size(tag_serialnr); + + params->u.serialnr.low = system_serial & 0xffffffff; + params->u.serialnr.high = system_serial >> 32; + + params = tag_next(params); + } +} + +static void setup_initrd_tag(unsigned long start, unsigned long size) +{ + /* an ATAG_INITRD node tells the kernel where the compressed + * ramdisk can be found. ATAG_RDIMG is a better name, actually. 
+ */ + params->hdr.tag = ATAG_INITRD2; + params->hdr.size = tag_size(tag_initrd); + + params->u.initrd.start = start; + params->u.initrd.size = size; + + params = tag_next(params); +} + +static void setup_end_tag (void) +{ + params->hdr.tag = ATAG_NONE; + params->hdr.size = 0; +} + +static void setup_tags(unsigned long initrd_address, + unsigned long initrd_size, int swap) +{ + const char *commandline = linux_bootargs_get(); + + setup_start_tag(); + setup_memory_tags(); + setup_commandline_tag(commandline, swap); + + if (initrd_size) + setup_initrd_tag(initrd_address, initrd_size); + + setup_revision_tag(); + setup_serial_tag(); +#ifdef CONFIG_ARM_BOARD_APPEND_ATAG + if (atag_appender != NULL) + params = atag_appender(params); +#endif + setup_end_tag(); + + printf("commandline: %s\n" + "arch_number: %d\n", commandline, armlinux_get_architecture()); + +} + +void start_linux(void *adr, int swap, unsigned long initrd_address, + unsigned long initrd_size, void *oftree) +{ + void (*kernel)(int zero, int arch, void *params) = adr; + void *params = NULL; + int architecture; + + if (oftree) { + pr_debug("booting kernel with devicetree\n"); + params = oftree; + } else { + setup_tags(initrd_address, initrd_size, swap); + params = armlinux_get_bootparams(); + } + architecture = armlinux_get_architecture(); + + shutdown_barebox(); + + kernel(0, architecture, params); +} diff --git a/arch/arm/lib64/asm-offsets.c b/arch/arm/lib64/asm-offsets.c new file mode 100644 index 0000000..7bf6d12 --- /dev/null +++ b/arch/arm/lib64/asm-offsets.c @@ -0,0 +1,16 @@ +/* + * Generate definitions needed by assembly language modules. + * This code generates raw asm output which is post-processed to extract + * and format the required data. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ */ + +#include <linux/kbuild.h> + +int main(void) +{ + return 0; +} diff --git a/arch/arm/lib64/barebox.lds.S b/arch/arm/lib64/barebox.lds.S new file mode 100644 index 0000000..240699f --- /dev/null +++ b/arch/arm/lib64/barebox.lds.S @@ -0,0 +1,125 @@ +/* + * (C) Copyright 2000-2004 + * Wolfgang Denk, DENX Software Engineering, wd@denx.de. + * + * See file CREDITS for list of people who contributed to this + * project. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation; either version 2 of + * the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * + */ + +#include <asm-generic/barebox.lds.h> + +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-littleaarch64", "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(start) +SECTIONS +{ +#ifdef CONFIG_RELOCATABLE + . = 0x0; +#else + . = TEXT_BASE; +#endif + +#ifndef CONFIG_PBL_IMAGE + PRE_IMAGE +#endif + . = ALIGN(4); + .text : + { + _stext = .; + _text = .; + *(.text_entry*) + __bare_init_start = .; + *(.text_bare_init*) + __bare_init_end = .; + __exceptions_start = .; + KEEP(*(.text_exceptions*)) + __exceptions_stop = .; + *(.text*) + } + BAREBOX_BARE_INIT_SIZE + + . = ALIGN(4); + .rodata : { *(.rodata*) } + +#ifdef CONFIG_ARM_UNWIND + /* + * Stack unwinding tables + */ + . = ALIGN(8); + .ARM.unwind_idx : { + __start_unwind_idx = .; + *(.ARM.exidx*) + __stop_unwind_idx = .; + } + .ARM.unwind_tab : { + __start_unwind_tab = .; + *(.ARM.extab*) + __stop_unwind_tab = .; + } +#endif + _etext = .; /* End of text and rodata section */ + _sdata = .; + + . = ALIGN(4); + .data : { *(.data*) } + + .barebox_imd : { BAREBOX_IMD } + + . 
= .; + __barebox_cmd_start = .; + .barebox_cmd : { BAREBOX_CMDS } + __barebox_cmd_end = .; + + __barebox_magicvar_start = .; + .barebox_magicvar : { BAREBOX_MAGICVARS } + __barebox_magicvar_end = .; + + __barebox_initcalls_start = .; + .barebox_initcalls : { INITCALLS } + __barebox_initcalls_end = .; + + __barebox_exitcalls_start = .; + .barebox_exitcalls : { EXITCALLS } + __barebox_exitcalls_end = .; + + __usymtab_start = .; + __usymtab : { BAREBOX_SYMS } + __usymtab_end = .; + + .oftables : { BAREBOX_CLK_TABLE() } + + .dtb : { BAREBOX_DTB() } + + .rel.dyn : { + __rel_dyn_start = .; + *(.rel*) + __rel_dyn_end = .; + } + + .dynsym : { + __dynsym_start = .; + *(.dynsym) + __dynsym_end = .; + } + + _edata = .; + + . = ALIGN(4); + __bss_start = .; + .bss : { *(.bss*) } + __bss_stop = .; + _end = .; + _barebox_image_size = __bss_start - TEXT_BASE; +} diff --git a/arch/arm/lib64/bootm.c b/arch/arm/lib64/bootm.c new file mode 100644 index 0000000..1913d5f --- /dev/null +++ b/arch/arm/lib64/bootm.c @@ -0,0 +1,572 @@ +#include <boot.h> +#include <common.h> +#include <command.h> +#include <driver.h> +#include <environment.h> +#include <image.h> +#include <init.h> +#include <fs.h> +#include <libfile.h> +#include <linux/list.h> +#include <xfuncs.h> +#include <malloc.h> +#include <fcntl.h> +#include <errno.h> +#include <linux/sizes.h> +#include <libbb.h> +#include <magicvar.h> +#include <binfmt.h> +#include <restart.h> + +#include <asm/byteorder.h> +#include <asm/setup.h> +#include <asm/barebox-arm.h> +#include <asm/armlinux.h> +#include <asm/system.h> + +/* + * sdram_start_and_size() - determine place for putting the kernel/oftree/initrd + * + * @start: returns the start address of the first RAM bank + * @size: returns the usable space at the beginning of the first RAM bank + * + * This function returns the base address of the first RAM bank and the free + * space found there. 
+ * + * return: 0 for success, negative error code otherwise + */ +static int sdram_start_and_size(unsigned long *start, unsigned long *size) +{ + struct memory_bank *bank; + struct resource *res; + + /* + * We use the first memory bank for the kernel and other resources + */ + bank = list_first_entry_or_null(&memory_banks, struct memory_bank, + list); + if (!bank) { + printf("cannot find first memory bank\n"); + return -EINVAL; + } + + /* + * If the first memory bank has child resources we can use the bank up + * to the beginning of the first child resource, otherwise we can use + * the whole bank. + */ + res = list_first_entry_or_null(&bank->res->children, struct resource, + sibling); + if (res) + *size = res->start - bank->start; + else + *size = bank->size; + + *start = bank->start; + + return 0; +} + +static int __do_bootm_linux(struct image_data *data, unsigned long free_mem, int swap) +{ + unsigned long kernel; + unsigned long initrd_start = 0, initrd_size = 0, initrd_end = 0; + int ret; + + kernel = data->os_res->start + data->os_entry; + + initrd_start = data->initrd_address; + + if (initrd_start == UIMAGE_INVALID_ADDRESS) { + initrd_start = PAGE_ALIGN(free_mem); + + if (bootm_verbose(data)) { + printf("no initrd load address, defaulting to 0x%08lx\n", + initrd_start); + } + } + + if (bootm_has_initrd(data)) { + ret = bootm_load_initrd(data, initrd_start); + if (ret) + return ret; + } + + if (data->initrd_res) { + initrd_start = data->initrd_res->start; + initrd_end = data->initrd_res->end; + initrd_size = resource_size(data->initrd_res); + free_mem = PAGE_ALIGN(initrd_end); + } + + ret = bootm_load_devicetree(data, free_mem); + if (ret) + return ret; + + if (bootm_verbose(data)) { + printf("\nStarting kernel at 0x%08lx", kernel); + if (initrd_size) + printf(", initrd at 0x%08lx", initrd_start); + if (data->oftree) + printf(", oftree at 0x%p", data->oftree); + printf("...\n"); + } + + if (data->dryrun) + return 0; + + start_linux((void *)kernel, swap, 
initrd_start, initrd_size, data->oftree); + + restart_machine(); + + return -ERESTARTSYS; +} + +static int do_bootm_linux(struct image_data *data) +{ + unsigned long load_address, mem_start, mem_size, mem_free; + int ret; + + ret = sdram_start_and_size(&mem_start, &mem_size); + if (ret) + return ret; + + load_address = data->os_address; + + if (load_address == UIMAGE_INVALID_ADDRESS) { + /* + * Just use a conservative default of 4 times the size of the + * compressed image, to avoid the need for the kernel to + * relocate itself before decompression. + */ + load_address = mem_start + PAGE_ALIGN( + bootm_get_os_size(data) * 4); + if (bootm_verbose(data)) + printf("no OS load address, defaulting to 0x%08lx\n", + load_address); + } + + ret = bootm_load_os(data, load_address); + if (ret) + return ret; + + /* + * put oftree/initrd close behind compressed kernel image to avoid + * placing it outside of the kernels lowmem. + */ + mem_free = PAGE_ALIGN(data->os_res->end + SZ_1M); + + return __do_bootm_linux(data, mem_free, 0); +} + +static struct image_handler uimage_handler = { + .name = "ARM Linux uImage", + .bootm = do_bootm_linux, + .filetype = filetype_uimage, + .ih_os = IH_OS_LINUX, +}; + +static struct image_handler rawimage_handler = { + .name = "ARM raw image", + .bootm = do_bootm_linux, + .filetype = filetype_unknown, +}; + +struct zimage_header { + u32 unused[9]; + u32 magic; + u32 start; + u32 end; +}; + +#define ZIMAGE_MAGIC 0x016F2818 + +static int do_bootz_linux_fdt(int fd, struct image_data *data) +{ + struct fdt_header __header, *header; + void *oftree; + int ret; + + u32 end; + + if (data->oftree) + return -ENXIO; + + header = &__header; + ret = read(fd, header, sizeof(*header)); + if (ret < sizeof(*header)) + return ret; + + if (file_detect_type(header, sizeof(*header)) != filetype_oftree) + return -ENXIO; + + end = be32_to_cpu(header->totalsize); + + oftree = malloc(end + 0x8000); + if (!oftree) { + perror("zImage: oftree malloc"); + return -ENOMEM; + } 
+ + memcpy(oftree, header, sizeof(*header)); + + end -= sizeof(*header); + + ret = read_full(fd, oftree + sizeof(*header), end); + if (ret < 0) + goto err_free; + if (ret < end) { + printf("premature end of image\n"); + ret = -EIO; + goto err_free; + } + + if (IS_BUILTIN(CONFIG_OFTREE)) { + data->of_root_node = of_unflatten_dtb(oftree); + if (!data->of_root_node) { + pr_err("unable to unflatten devicetree\n"); + ret = -EINVAL; + goto err_free; + } + free(oftree); + } else { + data->oftree = oftree; + } + + pr_info("zImage: concatenated oftree detected\n"); + + return 0; + +err_free: + free(oftree); + + return ret; +} + +static int do_bootz_linux(struct image_data *data) +{ + int fd, ret, swap = 0; + struct zimage_header __header, *header; + void *zimage; + u32 end, start; + size_t image_size; + unsigned long load_address = data->os_address; + unsigned long mem_start, mem_size, mem_free; + + ret = sdram_start_and_size(&mem_start, &mem_size); + if (ret) + return ret; + + fd = open(data->os_file, O_RDONLY); + if (fd < 0) { + perror("open"); + return 1; + } + + header = &__header; + ret = read(fd, header, sizeof(*header)); + if (ret < sizeof(*header)) { + printf("could not read %s\n", data->os_file); + goto err_out; + } + + switch (header->magic) { + case swab32(ZIMAGE_MAGIC): + swap = 1; + /* fall through */ + case ZIMAGE_MAGIC: + break; + default: + printf("invalid magic 0x%08x\n", header->magic); + ret = -EINVAL; + goto err_out; + } + + end = header->end; + start = header->start; + + if (swap) { + end = swab32(end); + start = swab32(start); + } + + image_size = end - start; + + if (load_address == UIMAGE_INVALID_ADDRESS) { + /* + * Just use a conservative default of 4 times the size of the + * compressed image, to avoid the need for the kernel to + * relocate itself before decompression. 
+ */ + data->os_address = mem_start + PAGE_ALIGN(image_size * 4); + + load_address = data->os_address; + if (bootm_verbose(data)) + printf("no OS load address, defaulting to 0x%08lx\n", + load_address); + } + + data->os_res = request_sdram_region("zimage", load_address, image_size); + if (!data->os_res) { + pr_err("bootm/zImage: failed to request memory at 0x%lx to 0x%lx (%d).\n", + load_address, load_address + image_size, image_size); + ret = -ENOMEM; + goto err_out; + } + + zimage = (void *)data->os_res->start; + + memcpy(zimage, header, sizeof(*header)); + + ret = read_full(fd, zimage + sizeof(*header), + image_size - sizeof(*header)); + if (ret < 0) + goto err_out; + if (ret < image_size - sizeof(*header)) { + printf("premature end of image\n"); + ret = -EIO; + goto err_out; + } + + if (swap) { + void *ptr; + for (ptr = zimage; ptr < zimage + end; ptr += 4) + *(u32 *)ptr = swab32(*(u32 *)ptr); + } + + ret = do_bootz_linux_fdt(fd, data); + if (ret && ret != -ENXIO) + goto err_out; + + close(fd); + + /* + * put oftree/initrd close behind compressed kernel image to avoid + * placing it outside of the kernels lowmem. 
+ */ + mem_free = PAGE_ALIGN(data->os_res->end + SZ_1M); + + return __do_bootm_linux(data, mem_free, swap); + +err_out: + close(fd); + + return ret; +} + +static struct image_handler zimage_handler = { + .name = "ARM zImage", + .bootm = do_bootz_linux, + .filetype = filetype_arm_zimage, +}; + +static struct image_handler barebox_handler = { + .name = "ARM barebox", + .bootm = do_bootm_linux, + .filetype = filetype_arm_barebox, +}; + +#include <aimage.h> + +static int aimage_load_resource(int fd, struct resource *r, void* buf, int ps) +{ + int ret; + void *image = (void *)r->start; + unsigned to_read = ps - resource_size(r) % ps; + + ret = read_full(fd, image, resource_size(r)); + if (ret < 0) + return ret; + + ret = read_full(fd, buf, to_read); + if (ret < 0) + printf("could not read dummy %u\n", to_read); + + return ret; +} + +static int do_bootm_aimage(struct image_data *data) +{ + struct resource *snd_stage_res; + int fd, ret; + struct android_header __header, *header; + void *buf; + int to_read; + struct android_header_comp *cmp; + unsigned long mem_free; + unsigned long mem_start, mem_size; + + ret = sdram_start_and_size(&mem_start, &mem_size); + if (ret) + return ret; + + fd = open(data->os_file, O_RDONLY); + if (fd < 0) { + perror("open"); + return 1; + } + + header = &__header; + ret = read(fd, header, sizeof(*header)); + if (ret < sizeof(*header)) { + printf("could not read %s\n", data->os_file); + goto err_out; + } + + printf("Android Image for '%s'\n", header->name); + + /* + * As on tftp we do not support lseek and we will just have to seek + * for the size of a page - 1 max just buffer instead to read to dummy + * data + */ + buf = xmalloc(header->page_size); + + to_read = header->page_size - sizeof(*header); + ret = read_full(fd, buf, to_read); + if (ret < 0) { + printf("could not read dummy %d from %s\n", to_read, data->os_file); + goto err_out; + } + + cmp = &header->kernel; + data->os_res = request_sdram_region("akernel", cmp->load_addr, 
cmp->size); + if (!data->os_res) { + pr_err("Cannot request region 0x%08x - 0x%08x, using default load address\n", + cmp->load_addr, cmp->size); + + data->os_address = mem_start + PAGE_ALIGN(cmp->size * 4); + data->os_res = request_sdram_region("akernel", data->os_address, cmp->size); + if (!data->os_res) { + pr_err("Cannot request region 0x%08x - 0x%08x\n", + cmp->load_addr, cmp->size); + ret = -ENOMEM; + goto err_out; + } + } + + ret = aimage_load_resource(fd, data->os_res, buf, header->page_size); + if (ret < 0) { + perror("could not read kernel"); + goto err_out; + } + + /* + * fastboot always expect a ramdisk + * in barebox we can be less restrictive + */ + cmp = &header->ramdisk; + if (cmp->size) { + data->initrd_res = request_sdram_region("ainitrd", cmp->load_addr, cmp->size); + if (!data->initrd_res) { + ret = -ENOMEM; + goto err_out; + } + + ret = aimage_load_resource(fd, data->initrd_res, buf, header->page_size); + if (ret < 0) { + perror("could not read initrd"); + goto err_out; + } + } + + if (!getenv("aimage_noverwrite_bootargs")) + linux_bootargs_overwrite(header->cmdline); + + if (!getenv("aimage_noverwrite_tags")) + armlinux_set_bootparams((void*)header->tags_addr); + + cmp = &header->second_stage; + if (cmp->size) { + void (*second)(void); + + snd_stage_res = request_sdram_region("asecond", cmp->load_addr, cmp->size); + if (!snd_stage_res) { + ret = -ENOMEM; + goto err_out; + } + + ret = aimage_load_resource(fd, snd_stage_res, buf, header->page_size); + if (ret < 0) { + perror("could not read initrd"); + goto err_out; + } + + second = (void*)snd_stage_res->start; + shutdown_barebox(); + + second(); + + restart_machine(); + } + + close(fd); + + /* + * Put devicetree right after initrd if present or after the kernel + * if not. 
+ */ + if (data->initrd_res) + mem_free = PAGE_ALIGN(data->initrd_res->end); + else + mem_free = PAGE_ALIGN(data->os_res->end + SZ_1M); + + return __do_bootm_linux(data, mem_free, 0); + +err_out: + linux_bootargs_overwrite(NULL); + close(fd); + + return ret; +} + +static struct image_handler aimage_handler = { + .name = "ARM Android Image", + .bootm = do_bootm_aimage, + .filetype = filetype_aimage, +}; + +#ifdef CONFIG_CMD_BOOTM_AIMAGE +BAREBOX_MAGICVAR(aimage_noverwrite_bootargs, "Disable overwrite of the bootargs with the one present in aimage"); +BAREBOX_MAGICVAR(aimage_noverwrite_tags, "Disable overwrite of the tags addr with the one present in aimage"); +#endif + +static struct image_handler arm_fit_handler = { + .name = "FIT image", + .bootm = do_bootm_linux, + .filetype = filetype_oftree, +}; + +static struct binfmt_hook binfmt_aimage_hook = { + .type = filetype_aimage, + .exec = "bootm", +}; + +static struct binfmt_hook binfmt_arm_zimage_hook = { + .type = filetype_arm_zimage, + .exec = "bootm", +}; + +static struct binfmt_hook binfmt_barebox_hook = { + .type = filetype_arm_barebox, + .exec = "bootm", +}; + +static int armlinux_register_image_handler(void) +{ + register_image_handler(&barebox_handler); + register_image_handler(&uimage_handler); + register_image_handler(&rawimage_handler); + register_image_handler(&zimage_handler); + if (IS_BUILTIN(CONFIG_CMD_BOOTM_AIMAGE)) { + register_image_handler(&aimage_handler); + binfmt_register(&binfmt_aimage_hook); + } + if (IS_BUILTIN(CONFIG_CMD_BOOTM_FITIMAGE)) + register_image_handler(&arm_fit_handler); + binfmt_register(&binfmt_arm_zimage_hook); + binfmt_register(&binfmt_barebox_hook); + + return 0; +} +late_initcall(armlinux_register_image_handler); diff --git a/arch/arm/lib64/copy_template.S b/arch/arm/lib64/copy_template.S new file mode 100644 index 0000000..cc9a842 --- /dev/null +++ b/arch/arm/lib64/copy_template.S @@ -0,0 +1,192 @@ +/* + * Copyright (C) 2013 ARM Ltd. + * Copyright (C) 2013 Linaro. 
+ * + * This code is based on glibc cortex strings work originally authored by Linaro + * and re-licensed under GPLv2 for the Linux kernel. The original code can + * be found @ + * + * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/ + * files/head:/src/aarch64/ + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see <http://www.gnu.org/licenses/>. + */ + + +/* + * Copy a buffer from src to dest (alignment handled by the hardware) + * + * Parameters: + * x0 - dest + * x1 - src + * x2 - n + * Returns: + * x0 - dest + */ +dstin .req x0 +src .req x1 +count .req x2 +tmp1 .req x3 +tmp1w .req w3 +tmp2 .req x4 +tmp2w .req w4 +dst .req x6 + +A_l .req x7 +A_h .req x8 +B_l .req x9 +B_h .req x10 +C_l .req x11 +C_h .req x12 +D_l .req x13 +D_h .req x14 + + mov dst, dstin + cmp count, #16 + /*When memory length is less than 16, the accessed are not aligned.*/ + b.lo .Ltiny15 + + neg tmp2, src + ands tmp2, tmp2, #15/* Bytes to reach alignment. */ + b.eq .LSrcAligned + sub count, count, tmp2 + /* + * Copy the leading memory data from src to dst in an increasing + * address order.By this way,the risk of overwritting the source + * memory data is eliminated when the distance between src and + * dst is less than 16. The memory accesses here are alignment. 
+ */ + tbz tmp2, #0, 1f + ldrb1 tmp1w, src, #1 + strb1 tmp1w, dst, #1 +1: + tbz tmp2, #1, 2f + ldrh1 tmp1w, src, #2 + strh1 tmp1w, dst, #2 +2: + tbz tmp2, #2, 3f + ldr1 tmp1w, src, #4 + str1 tmp1w, dst, #4 +3: + tbz tmp2, #3, .LSrcAligned + ldr1 tmp1, src, #8 + str1 tmp1, dst, #8 + +.LSrcAligned: + cmp count, #64 + b.ge .Lcpy_over64 + /* + * Deal with small copies quickly by dropping straight into the + * exit block. + */ +.Ltail63: + /* + * Copy up to 48 bytes of data. At this point we only need the + * bottom 6 bits of count to be accurate. + */ + ands tmp1, count, #0x30 + b.eq .Ltiny15 + cmp tmp1w, #0x20 + b.eq 1f + b.lt 2f + ldp1 A_l, A_h, src, #16 + stp1 A_l, A_h, dst, #16 +1: + ldp1 A_l, A_h, src, #16 + stp1 A_l, A_h, dst, #16 +2: + ldp1 A_l, A_h, src, #16 + stp1 A_l, A_h, dst, #16 +.Ltiny15: + /* + * Prefer to break one ldp/stp into several load/store to access + * memory in an increasing address order,rather than to load/store 16 + * bytes from (src-16) to (dst-16) and to backward the src to aligned + * address,which way is used in original cortex memcpy. If keeping + * the original memcpy process here, memmove need to satisfy the + * precondition that src address is at least 16 bytes bigger than dst + * address,otherwise some source data will be overwritten when memove + * call memcpy directly. To make memmove simpler and decouple the + * memcpy's dependency on memmove, withdrew the original process. + */ + tbz count, #3, 1f + ldr1 tmp1, src, #8 + str1 tmp1, dst, #8 +1: + tbz count, #2, 2f + ldr1 tmp1w, src, #4 + str1 tmp1w, dst, #4 +2: + tbz count, #1, 3f + ldrh1 tmp1w, src, #2 + strh1 tmp1w, dst, #2 +3: + tbz count, #0, .Lexitfunc + ldrb1 tmp1w, src, #1 + strb1 tmp1w, dst, #1 + + b .Lexitfunc + +.Lcpy_over64: + subs count, count, #128 + b.ge .Lcpy_body_large + /* + * Less than 128 bytes to copy, so handle 64 here and then jump + * to the tail. 
+ */ + ldp1 A_l, A_h, src, #16 + stp1 A_l, A_h, dst, #16 + ldp1 B_l, B_h, src, #16 + ldp1 C_l, C_h, src, #16 + stp1 B_l, B_h, dst, #16 + stp1 C_l, C_h, dst, #16 + ldp1 D_l, D_h, src, #16 + stp1 D_l, D_h, dst, #16 + + tst count, #0x3f + b.ne .Ltail63 + b .Lexitfunc + + /* + * Critical loop. Start at a new cache line boundary. Assuming + * 64 bytes per line this ensures the entire loop is in one line. + */ +.Lcpy_body_large: + /* pre-get 64 bytes data. */ + ldp1 A_l, A_h, src, #16 + ldp1 B_l, B_h, src, #16 + ldp1 C_l, C_h, src, #16 + ldp1 D_l, D_h, src, #16 +1: + /* + * interlace the load of next 64 bytes data block with store of the last + * loaded 64 bytes data. + */ + stp1 A_l, A_h, dst, #16 + ldp1 A_l, A_h, src, #16 + stp1 B_l, B_h, dst, #16 + ldp1 B_l, B_h, src, #16 + stp1 C_l, C_h, dst, #16 + ldp1 C_l, C_h, src, #16 + stp1 D_l, D_h, dst, #16 + ldp1 D_l, D_h, src, #16 + subs count, count, #64 + b.ge 1b + stp1 A_l, A_h, dst, #16 + stp1 B_l, B_h, dst, #16 + stp1 C_l, C_h, dst, #16 + stp1 D_l, D_h, dst, #16 + + tst count, #0x3f + b.ne .Ltail63 +.Lexitfunc: diff --git a/arch/arm/lib64/div0.c b/arch/arm/lib64/div0.c new file mode 100644 index 0000000..852cb72 --- /dev/null +++ b/arch/arm/lib64/div0.c @@ -0,0 +1,27 @@ +/* + * (C) Copyright 2002 + * Wolfgang Denk, DENX Software Engineering, wd@denx.de. + * + * See file CREDITS for list of people who contributed to this + * project. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation; either version 2 of + * the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + */ +#include <common.h> + +extern void __div0(void); + +/* Replacement (=dummy) for GNU/Linux division-by zero handler */ +void __div0 (void) +{ + panic("division by zero\n"); +} diff --git a/arch/arm/lib64/memcpy.S b/arch/arm/lib64/memcpy.S new file mode 100644 index 0000000..cfed319 --- /dev/null +++ b/arch/arm/lib64/memcpy.S @@ -0,0 +1,74 @@ +/* + * Copyright (C) 2013 ARM Ltd. + * Copyright (C) 2013 Linaro. + * + * This code is based on glibc cortex strings work originally authored by Linaro + * and re-licensed under GPLv2 for the Linux kernel. The original code can + * be found @ + * + * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/ + * files/head:/src/aarch64/ + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see <http://www.gnu.org/licenses/>. 
+ */ + +#include <linux/linkage.h> +#include <asm/assembler.h> + +/* + * Copy a buffer from src to dest (alignment handled by the hardware) + * + * Parameters: + * x0 - dest + * x1 - src + * x2 - n + * Returns: + * x0 - dest + */ + .macro ldrb1 ptr, regB, val + ldrb \ptr, [\regB], \val + .endm + + .macro strb1 ptr, regB, val + strb \ptr, [\regB], \val + .endm + + .macro ldrh1 ptr, regB, val + ldrh \ptr, [\regB], \val + .endm + + .macro strh1 ptr, regB, val + strh \ptr, [\regB], \val + .endm + + .macro ldr1 ptr, regB, val + ldr \ptr, [\regB], \val + .endm + + .macro str1 ptr, regB, val + str \ptr, [\regB], \val + .endm + + .macro ldp1 ptr, regB, regC, val + ldp \ptr, \regB, [\regC], \val + .endm + + .macro stp1 ptr, regB, regC, val + stp \ptr, \regB, [\regC], \val + .endm + + .weak memcpy +ENTRY(memcpy) +#include "copy_template.S" + ret +ENDPROC(memcpy) diff --git a/arch/arm/lib64/memset.S b/arch/arm/lib64/memset.S new file mode 100644 index 0000000..380a540 --- /dev/null +++ b/arch/arm/lib64/memset.S @@ -0,0 +1,215 @@ +/* + * Copyright (C) 2013 ARM Ltd. + * Copyright (C) 2013 Linaro. + * + * This code is based on glibc cortex strings work originally authored by Linaro + * and re-licensed under GPLv2 for the Linux kernel. The original code can + * be found @ + * + * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/ + * files/head:/src/aarch64/ + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see <http://www.gnu.org/licenses/>. 
+ */ + +#include <linux/linkage.h> +#include <asm/assembler.h> + +/* + * Fill in the buffer with character c (alignment handled by the hardware) + * + * Parameters: + * x0 - buf + * x1 - c + * x2 - n + * Returns: + * x0 - buf + */ + +dstin .req x0 +val .req w1 +count .req x2 +tmp1 .req x3 +tmp1w .req w3 +tmp2 .req x4 +tmp2w .req w4 +zva_len_x .req x5 +zva_len .req w5 +zva_bits_x .req x6 + +A_l .req x7 +A_lw .req w7 +dst .req x8 +tmp3w .req w9 +tmp3 .req x9 + + .weak memset +ENTRY(memset) + mov dst, dstin /* Preserve return value. */ + and A_lw, val, #255 + orr A_lw, A_lw, A_lw, lsl #8 + orr A_lw, A_lw, A_lw, lsl #16 + orr A_l, A_l, A_l, lsl #32 + + cmp count, #15 + b.hi .Lover16_proc + /*All store maybe are non-aligned..*/ + tbz count, #3, 1f + str A_l, [dst], #8 +1: + tbz count, #2, 2f + str A_lw, [dst], #4 +2: + tbz count, #1, 3f + strh A_lw, [dst], #2 +3: + tbz count, #0, 4f + strb A_lw, [dst] +4: + ret + +.Lover16_proc: + /*Whether the start address is aligned with 16.*/ + neg tmp2, dst + ands tmp2, tmp2, #15 + b.eq .Laligned +/* +* The count is not less than 16, we can use stp to store the start 16 bytes, +* then adjust the dst aligned with 16.This process will make the current +* memory address at alignment boundary. +*/ + stp A_l, A_l, [dst] /*non-aligned store..*/ + /*make the dst aligned..*/ + sub count, count, tmp2 + add dst, dst, tmp2 + +.Laligned: + cbz A_l, .Lzero_mem + +.Ltail_maybe_long: + cmp count, #64 + b.ge .Lnot_short +.Ltail63: + ands tmp1, count, #0x30 + b.eq 3f + cmp tmp1w, #0x20 + b.eq 1f + b.lt 2f + stp A_l, A_l, [dst], #16 +1: + stp A_l, A_l, [dst], #16 +2: + stp A_l, A_l, [dst], #16 +/* +* The last store length is less than 16,use stp to write last 16 bytes. +* It will lead some bytes written twice and the access is non-aligned. +*/ +3: + ands count, count, #15 + cbz count, 4f + add dst, dst, count + stp A_l, A_l, [dst, #-16] /* Repeat some/all of last store. */ +4: + ret + + /* + * Critical loop. Start at a new cache line boundary. 
Assuming + * 64 bytes per line, this ensures the entire loop is in one line. + */ +.Lnot_short: + sub dst, dst, #16/* Pre-bias. */ + sub count, count, #64 +1: + stp A_l, A_l, [dst, #16] + stp A_l, A_l, [dst, #32] + stp A_l, A_l, [dst, #48] + stp A_l, A_l, [dst, #64]! + subs count, count, #64 + b.ge 1b + tst count, #0x3f + add dst, dst, #16 + b.ne .Ltail63 +.Lexitfunc: + ret + + /* + * For zeroing memory, check to see if we can use the ZVA feature to + * zero entire 'cache' lines. + */ +.Lzero_mem: + cmp count, #63 + b.le .Ltail63 + /* + * For zeroing small amounts of memory, it's not worth setting up + * the line-clear code. + */ + cmp count, #128 + b.lt .Lnot_short /*count is at least 128 bytes*/ + + mrs tmp1, dczid_el0 + tbnz tmp1, #4, .Lnot_short + mov tmp3w, #4 + and zva_len, tmp1w, #15 /* Safety: other bits reserved. */ + lsl zva_len, tmp3w, zva_len + + ands tmp3w, zva_len, #63 + /* + * ensure the zva_len is not less than 64. + * It is not meaningful to use ZVA if the block size is less than 64. + */ + b.ne .Lnot_short +.Lzero_by_line: + /* + * Compute how far we need to go to become suitably aligned. We're + * already at quad-word alignment. + */ + cmp count, zva_len_x + b.lt .Lnot_short /* Not enough to reach alignment. */ + sub zva_bits_x, zva_len_x, #1 + neg tmp2, dst + ands tmp2, tmp2, zva_bits_x + b.eq 2f /* Already aligned. */ + /* Not aligned, check that there's enough to copy after alignment.*/ + sub tmp1, count, tmp2 + /* + * grantee the remain length to be ZVA is bigger than 64, + * avoid to make the 2f's process over mem range.*/ + cmp tmp1, #64 + ccmp tmp1, zva_len_x, #8, ge /* NZCV=0b1000 */ + b.lt .Lnot_short + /* + * We know that there's at least 64 bytes to zero and that it's safe + * to overrun by 64 bytes. 
+ */ + mov count, tmp1 +1: + stp A_l, A_l, [dst] + stp A_l, A_l, [dst, #16] + stp A_l, A_l, [dst, #32] + subs tmp2, tmp2, #64 + stp A_l, A_l, [dst, #48] + add dst, dst, #64 + b.ge 1b + /* We've overrun a bit, so adjust dst downwards.*/ + add dst, dst, tmp2 +2: + sub count, count, zva_len_x +3: + dc zva, dst + add dst, dst, zva_len_x + subs count, count, zva_len_x + b.ge 3b + ands count, count, zva_bits_x + b.ne .Ltail_maybe_long + ret +ENDPROC(memset) diff --git a/arch/arm/lib64/module.c b/arch/arm/lib64/module.c new file mode 100644 index 0000000..be7965d --- /dev/null +++ b/arch/arm/lib64/module.c @@ -0,0 +1,98 @@ +/* + * linux/arch/arm/kernel/module.c + * + * Copyright (C) 2002 Russell King. + * Modified for nommu by Hyok S. Choi + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * Module allocation method suggested by Andi Kleen. + */ + +//#include <asm/pgtable.h> +#include <common.h> +#include <elf.h> +#include <module.h> +#include <errno.h> + +int +apply_relocate(Elf32_Shdr *sechdrs, const char *strtab, unsigned int symindex, + unsigned int relindex, struct module *module) +{ + Elf32_Shdr *symsec = sechdrs + symindex; + Elf32_Shdr *relsec = sechdrs + relindex; + Elf32_Shdr *dstsec = sechdrs + relsec->sh_info; + Elf32_Rel *rel = (void *)relsec->sh_addr; + unsigned int i; + + for (i = 0; i < relsec->sh_size / sizeof(Elf32_Rel); i++, rel++) { + unsigned long loc; + Elf32_Sym *sym; + s32 offset; + + offset = ELF32_R_SYM(rel->r_info); + if (offset < 0 || offset > (symsec->sh_size / sizeof(Elf32_Sym))) { + printf("%s: bad relocation, section %u reloc %u\n", + module->name, relindex, i); + return -ENOEXEC; + } + + sym = ((Elf32_Sym *)symsec->sh_addr) + offset; + + if (rel->r_offset < 0 || rel->r_offset > dstsec->sh_size - sizeof(u32)) { + printf("%s: out of bounds relocation, " + "section %u reloc %u offset %d size 
%d\n", + module->name, relindex, i, rel->r_offset, + dstsec->sh_size); + return -ENOEXEC; + } + + loc = dstsec->sh_addr + rel->r_offset; + + switch (ELF32_R_TYPE(rel->r_info)) { + case R_ARM_ABS32: + *(u32 *)loc += sym->st_value; + break; + + case R_ARM_PC24: + case R_ARM_CALL: + case R_ARM_JUMP24: + offset = (*(u32 *)loc & 0x00ffffff) << 2; + if (offset & 0x02000000) + offset -= 0x04000000; + + offset += sym->st_value - loc; + if (offset & 3 || + offset <= (s32)0xfe000000 || + offset >= (s32)0x02000000) { + printf("%s: relocation out of range, section " + "%u reloc %u sym '%s'\n", module->name, + relindex, i, strtab + sym->st_name); + return -ENOEXEC; + } + + offset >>= 2; + + *(u32 *)loc &= 0xff000000; + *(u32 *)loc |= offset & 0x00ffffff; + break; + + default: + printf("%s: unknown relocation: %u\n", + module->name, ELF32_R_TYPE(rel->r_info)); + return -ENOEXEC; + } + } + return 0; +} + +int +apply_relocate_add(Elf32_Shdr *sechdrs, const char *strtab, + unsigned int symindex, unsigned int relsec, struct module *module) +{ + printf("module %s: ADD RELOCATION unsupported\n", + module->name); + return -ENOEXEC; +} -- 2.1.0 _______________________________________________ barebox mailing list barebox@lists.infradead.org http://lists.infradead.org/mailman/listinfo/barebox ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v2 03/12] arm: introduce lib64 for arm64 related stuff 2016-06-14 7:06 ` [PATCH v2 03/12] arm: introduce lib64 for arm64 related stuff Raphael Poggi @ 2016-06-15 6:15 ` Sascha Hauer 2016-06-23 14:43 ` Raphaël Poggi 0 siblings, 1 reply; 19+ messages in thread From: Sascha Hauer @ 2016-06-15 6:15 UTC (permalink / raw) To: Raphael Poggi; +Cc: barebox On Tue, Jun 14, 2016 at 09:06:37AM +0200, Raphael Poggi wrote: > diff --git a/arch/arm/lib64/Makefile b/arch/arm/lib64/Makefile > new file mode 100644 > index 0000000..a424293 > --- /dev/null > +++ b/arch/arm/lib64/Makefile > @@ -0,0 +1,10 @@ > +obj-$(CONFIG_ARM_LINUX) += armlinux.o > +obj-$(CONFIG_BOOTM) += bootm.o > +obj-y += div0.o > +obj-$(CONFIG_ARM_OPTIMZED_STRING_FUNCTIONS) += memcpy.o > +obj-$(CONFIG_ARM_OPTIMZED_STRING_FUNCTIONS) += memset.o > +extra-y += barebox.lds > + > +pbl-y += lib1funcs.o > +pbl-y += ashldi3.o > +pbl-y += div0.o > diff --git a/arch/arm/lib64/armlinux.c b/arch/arm/lib64/armlinux.c > new file mode 100644 > index 0000000..21a2292 > --- /dev/null > +++ b/arch/arm/lib64/armlinux.c > @@ -0,0 +1,275 @@ [...] > +static void setup_tags(unsigned long initrd_address, > + unsigned long initrd_size, int swap) > +{ > + const char *commandline = linux_bootargs_get(); > + > + setup_start_tag(); > + setup_memory_tags(); > + setup_commandline_tag(commandline, swap); > + > + if (initrd_size) > + setup_initrd_tag(initrd_address, initrd_size); > + > + setup_revision_tag(); > + setup_serial_tag(); > +#ifdef CONFIG_ARM_BOARD_APPEND_ATAG > + if (atag_appender != NULL) > + params = atag_appender(params); > +#endif > + setup_end_tag(); > + > + printf("commandline: %s\n" > + "arch_number: %d\n", commandline, armlinux_get_architecture()); > + > +} All the code around ATAGs can be removed. ARM64 is device tree only and won't ever need this. 
> + > +void start_linux(void *adr, int swap, unsigned long initrd_address, > + unsigned long initrd_size, void *oftree) > +{ > + void (*kernel)(int zero, int arch, void *params) = adr; > + void *params = NULL; > + int architecture; > + > + if (oftree) { > + pr_debug("booting kernel with devicetree\n"); > + params = oftree; > + } else { > + setup_tags(initrd_address, initrd_size, swap); > + params = armlinux_get_bootparams(); > + } > + architecture = armlinux_get_architecture(); > + > + shutdown_barebox(); > + > + kernel(0, architecture, params); > +} > diff --git a/arch/arm/lib64/bootm.c b/arch/arm/lib64/bootm.c This file is an exact copy of arch/arm/lib/bootm.c as of a324fb5a, so it seems that either the 32bit version works for arm64 or this is untested. Have you already booted a kernel? Does it work? Although starting a kernel is admittedly a key feature for a bootloader, we could drop the code from the initial arm64 porting effort if it doesn't work yet. > diff --git a/arch/arm/lib64/module.c b/arch/arm/lib64/module.c Please drop this file. I suppose it won't work on arm64 anyway, so there's no need to carry a nonworking copy of arm32 module support in the arm64 tree. Sascha -- Pengutronix e.K. | | Industrial Linux Solutions | http://www.pengutronix.de/ | Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0 | Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 | _______________________________________________ barebox mailing list barebox@lists.infradead.org http://lists.infradead.org/mailman/listinfo/barebox ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v2 03/12] arm: introduce lib64 for arm64 related stuff 2016-06-15 6:15 ` Sascha Hauer @ 2016-06-23 14:43 ` Raphaël Poggi 0 siblings, 0 replies; 19+ messages in thread From: Raphaël Poggi @ 2016-06-23 14:43 UTC (permalink / raw) To: Sascha Hauer; +Cc: barebox Hi, 2016-06-15 8:15 GMT+02:00 Sascha Hauer <s.hauer@pengutronix.de>: > On Tue, Jun 14, 2016 at 09:06:37AM +0200, Raphael Poggi wrote: >> diff --git a/arch/arm/lib64/Makefile b/arch/arm/lib64/Makefile >> new file mode 100644 >> index 0000000..a424293 >> --- /dev/null >> +++ b/arch/arm/lib64/Makefile >> @@ -0,0 +1,10 @@ >> +obj-$(CONFIG_ARM_LINUX) += armlinux.o >> +obj-$(CONFIG_BOOTM) += bootm.o >> +obj-y += div0.o >> +obj-$(CONFIG_ARM_OPTIMZED_STRING_FUNCTIONS) += memcpy.o >> +obj-$(CONFIG_ARM_OPTIMZED_STRING_FUNCTIONS) += memset.o >> +extra-y += barebox.lds >> + >> +pbl-y += lib1funcs.o >> +pbl-y += ashldi3.o >> +pbl-y += div0.o >> diff --git a/arch/arm/lib64/armlinux.c b/arch/arm/lib64/armlinux.c >> new file mode 100644 >> index 0000000..21a2292 >> --- /dev/null >> +++ b/arch/arm/lib64/armlinux.c >> @@ -0,0 +1,275 @@ > > [...] > >> +static void setup_tags(unsigned long initrd_address, >> + unsigned long initrd_size, int swap) >> +{ >> + const char *commandline = linux_bootargs_get(); >> + >> + setup_start_tag(); >> + setup_memory_tags(); >> + setup_commandline_tag(commandline, swap); >> + >> + if (initrd_size) >> + setup_initrd_tag(initrd_address, initrd_size); >> + >> + setup_revision_tag(); >> + setup_serial_tag(); >> +#ifdef CONFIG_ARM_BOARD_APPEND_ATAG >> + if (atag_appender != NULL) >> + params = atag_appender(params); >> +#endif >> + setup_end_tag(); >> + >> + printf("commandline: %s\n" >> + "arch_number: %d\n", commandline, armlinux_get_architecture()); >> + >> +} > > All the code around ATAGs can be removed. ARM64 is device tree only and > won't ever need this. Ok, I have done some rework of this part and remove all *_tags functions. 
> >> + >> +void start_linux(void *adr, int swap, unsigned long initrd_address, >> + unsigned long initrd_size, void *oftree) >> +{ >> + void (*kernel)(int zero, int arch, void *params) = adr; >> + void *params = NULL; >> + int architecture; >> + >> + if (oftree) { >> + pr_debug("booting kernel with devicetree\n"); >> + params = oftree; >> + } else { >> + setup_tags(initrd_address, initrd_size, swap); >> + params = armlinux_get_bootparams(); >> + } >> + architecture = armlinux_get_architecture(); >> + >> + shutdown_barebox(); >> + >> + kernel(0, architecture, params); >> +} > >> diff --git a/arch/arm/lib64/bootm.c b/arch/arm/lib64/bootm.c > > This file is an exact copy of arch/arm/lib/bootm.c as of a324fb5a, so it > seems that either the 32bit version works for arm64 or this is untested. > > Have you already booted a kernel? Does it work? Although starting a > kernel is admittedly a key feature for a bootloader, we could drop the > code from the initial arm64 porting effort if it doesn't work yet. I have made some rework for booting a kernel, and bootm remains the same as arch/arm/lib/bootm.c. However, I think it is better to conserve the two files, because in a near future it will be useful to add support of arm64 Image format in arch/arm/lib64/bootm.c. > >> diff --git a/arch/arm/lib64/module.c b/arch/arm/lib64/module.c > > Please drop this file. I suppose it won't work on arm64 anyway, so > there's no need to carry a nonworking copy of arm32 module support in > the arm64 tree. Ok > > Sascha > > -- > Pengutronix e.K. | | > Industrial Linux Solutions | http://www.pengutronix.de/ | > Peiner Str. 
6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0 | > Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
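[Editorial aside on the arm64 Image support anticipated in this reply: the kernel's arm64 boot protocol (Documentation/arm64/booting.txt) defines a 64-byte header with an "ARM\x64" magic (0x644d5241, little-endian) at byte offset 56, plus little-endian text_offset and image_size fields a loader must honour. A sketch of what a detector in arch/arm/lib64/bootm.c might look like; the struct and function names are mine, not from any later patch:]

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

#define ARM64_IMAGE_MAGIC 0x644d5241u	/* "ARM\x64", little-endian */

/* Header layout from the kernel's arm64 boot protocol document. */
struct arm64_image_header {
	uint32_t code0;		/* executable code */
	uint32_t code1;
	uint64_t text_offset;	/* image load offset from 2 MiB base, LE */
	uint64_t image_size;	/* effective image size, LE */
	uint64_t flags;		/* kernel flags, LE */
	uint64_t res2;
	uint64_t res3;
	uint64_t res4;
	uint32_t magic;		/* must sit at byte offset 56 */
	uint32_t res5;
};

static int is_arm64_image(const struct arm64_image_header *h)
{
	return h->magic == ARM64_IMAGE_MAGIC;
}
```

[A real handler would also read text_offset/image_size with explicit little-endian accessors and place the Image at a 2 MiB aligned base plus text_offset before branching to it.]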
* [PATCH v2 04/12] arm: cpu: add arm64 specific code 2016-06-14 7:06 Raphael Poggi ` (2 preceding siblings ...) 2016-06-14 7:06 ` [PATCH v2 03/12] arm: introduce lib64 for arm64 related stuff Raphael Poggi @ 2016-06-14 7:06 ` Raphael Poggi 2016-06-14 7:06 ` [PATCH v2 05/12] arm: include: system: add arm64 helper functions Raphael Poggi ` (8 subsequent siblings) 12 siblings, 0 replies; 19+ messages in thread From: Raphael Poggi @ 2016-06-14 7:06 UTC (permalink / raw) To: barebox; +Cc: Raphael Poggi This patch adds arm64-specific code: - exception support - cache support - Makefile rework to support arm64 Signed-off-by: Raphael Poggi <poggi.raph@gmail.com> --- arch/arm/cpu/Makefile | 24 +++++-- arch/arm/cpu/cache-armv8.S | 168 +++++++++++++++++++++++++++++++++++++++++++ arch/arm/cpu/cache.c | 19 +++++ arch/arm/cpu/exceptions_64.S | 127 ++++++++++++++++++++++++++++++++ arch/arm/cpu/interrupts.c | 47 ++++++++++++ arch/arm/cpu/lowlevel_64.S | 40 +++++++++++ arch/arm/include/asm/cache.h | 9 +++ 7 files changed, 430 insertions(+), 4 deletions(-) create mode 100644 arch/arm/cpu/cache-armv8.S create mode 100644 arch/arm/cpu/exceptions_64.S create mode 100644 arch/arm/cpu/lowlevel_64.S diff --git a/arch/arm/cpu/Makefile b/arch/arm/cpu/Makefile index 854df60..86a4a90 100644 --- a/arch/arm/cpu/Makefile +++ b/arch/arm/cpu/Makefile @@ -1,7 +1,22 @@ obj-y += cpu.o + +ifeq ($(CONFIG_CPU_64v8), y) +obj-$(CONFIG_ARM_EXCEPTIONS) += exceptions_64.o +obj-$(CONFIG_MMU) += mmu_64.o +lwl-y += lowlevel_64.o +else obj-$(CONFIG_ARM_EXCEPTIONS) += exceptions.o +obj-$(CONFIG_MMU) += mmu.o mmu-early.o +pbl-$(CONFIG_MMU) += mmu-early.o +lwl-y += lowlevel.o +endif + obj-$(CONFIG_ARM_EXCEPTIONS) += interrupts.o -obj-y += start.o setupc.o entry.o +obj-y += start.o entry.o + +ifeq ($(CONFIG_CPU_64v8), ) +obj-y += setupc.o +endif # # Any variants can be called as start-armxyz.S @@ -11,7 +26,6 @@ obj-$(CONFIG_CMD_ARM_MMUINFO) += mmuinfo.o obj-$(CONFIG_OFDEVICE) += dtb.o obj-$(CONFIG_MMU)
+= mmu.o cache.o mmu-early.o pbl-$(CONFIG_MMU) += mmu-early.o - ifeq ($(CONFIG_MMU),) obj-y += no-mmu.o endif @@ -27,6 +41,10 @@ obj-$(CONFIG_CPU_32v7) += cache-armv7.o AFLAGS_pbl-cache-armv7.o :=-Wa,-march=armv7-a pbl-$(CONFIG_CPU_32v7) += cache-armv7.o obj-$(CONFIG_CACHE_L2X0) += cache-l2x0.o +AFLAGS_cache-armv8.o :=-Wa,-march=armv8-a +obj-$(CONFIG_CPU_64v8) += cache-armv8.o +AFLAGS_pbl-cache-armv8.o :=-Wa,-march=armv8-a +pbl-$(CONFIG_CPU_64v8) += cache-armv8.o pbl-y += setupc.o entry.o pbl-$(CONFIG_PBL_SINGLE_IMAGE) += start-pbl.o @@ -34,5 +52,3 @@ pbl-$(CONFIG_PBL_MULTI_IMAGES) += uncompress.o obj-y += common.o cache.o pbl-y += common.o cache.o - -lwl-y += lowlevel.o diff --git a/arch/arm/cpu/cache-armv8.S b/arch/arm/cpu/cache-armv8.S new file mode 100644 index 0000000..82b2f81 --- /dev/null +++ b/arch/arm/cpu/cache-armv8.S @@ -0,0 +1,168 @@ +/* + * (C) Copyright 2013 + * David Feng <fenghua@phytium.com.cn> + * + * This file is based on sample code from ARMv8 ARM. + * + * SPDX-License-Identifier: GPL-2.0+ + */ + +#include <config.h> +#include <linux/linkage.h> +#include <init.h> + +/* + * void v8_flush_dcache_level(level) + * + * clean and invalidate one level cache. 
+ * + * x0: cache level + * x1: 0 flush & invalidate, 1 invalidate only + * x2~x9: clobbered + */ +.section .text.v8_flush_dcache_level +ENTRY(v8_flush_dcache_level) + lsl x12, x0, #1 + msr csselr_el1, x12 /* select cache level */ + isb /* sync change of cssidr_el1 */ + mrs x6, ccsidr_el1 /* read the new cssidr_el1 */ + and x2, x6, #7 /* x2 <- log2(cache line size)-4 */ + add x2, x2, #4 /* x2 <- log2(cache line size) */ + mov x3, #0x3ff + and x3, x3, x6, lsr #3 /* x3 <- max number of #ways */ + clz w5, w3 /* bit position of #ways */ + mov x4, #0x7fff + and x4, x4, x6, lsr #13 /* x4 <- max number of #sets */ + /* x12 <- cache level << 1 */ + /* x2 <- line length offset */ + /* x3 <- number of cache ways - 1 */ + /* x4 <- number of cache sets - 1 */ + /* x5 <- bit position of #ways */ + +loop_set: + mov x6, x3 /* x6 <- working copy of #ways */ +loop_way: + lsl x7, x6, x5 + orr x9, x12, x7 /* map way and level to cisw value */ + lsl x7, x4, x2 + orr x9, x9, x7 /* map set number to cisw value */ + tbz w1, #0, 1f + dc isw, x9 + b 2f +1: dc cisw, x9 /* clean & invalidate by set/way */ +2: subs x6, x6, #1 /* decrement the way */ + b.ge loop_way + subs x4, x4, #1 /* decrement the set */ + b.ge loop_set + + ret +ENDPROC(v8_flush_dcache_level) + +/* + * void v8_flush_dcache_all(int invalidate_only) + * + * x0: 0 flush & invalidate, 1 invalidate only + * + * clean and invalidate all data cache by SET/WAY. 
+ */ +.section .text.v8_dcache_all +ENTRY(v8_dcache_all) + mov x1, x0 + dsb sy + mrs x10, clidr_el1 /* read clidr_el1 */ + lsr x11, x10, #24 + and x11, x11, #0x7 /* x11 <- loc */ + cbz x11, finished /* if loc is 0, exit */ + mov x15, x30 + mov x0, #0 /* start flush at cache level 0 */ + /* x0 <- cache level */ + /* x10 <- clidr_el1 */ + /* x11 <- loc */ + /* x15 <- return address */ + +loop_level: + lsl x12, x0, #1 + add x12, x12, x0 /* x0 <- tripled cache level */ + lsr x12, x10, x12 + and x12, x12, #7 /* x12 <- cache type */ + cmp x12, #2 + b.lt skip /* skip if no cache or icache */ + bl v8_flush_dcache_level /* x1 = 0 flush, 1 invalidate */ +skip: + add x0, x0, #1 /* increment cache level */ + cmp x11, x0 + b.gt loop_level + + mov x0, #0 + msr csselr_el1, x0 /* restore csselr_el1 */ + dsb sy + isb + mov x30, x15 + +finished: + ret +ENDPROC(v8_dcache_all) + +.section .text.v8_flush_dcache_all +ENTRY(v8_flush_dcache_all) + mov x16, x30 + mov x0, #0 + bl v8_dcache_all + mov x30, x16 + ret +ENDPROC(v8_flush_dcache_all) + +.section .text.v8_invalidate_dcache_all +ENTRY(v8_invalidate_dcache_all) + mov x16, x30 + mov x0, #0x1 + bl v8_dcache_all + mov x30, x16 + ret +ENDPROC(v8_invalidate_dcache_all) + +/* + * void v8_flush_dcache_range(start, end) + * + * clean & invalidate data cache in the range + * + * x0: start address + * x1: end address + */ +.section .text.v8_flush_dcache_range +ENTRY(v8_flush_dcache_range) + mrs x3, ctr_el0 + lsr x3, x3, #16 + and x3, x3, #0xf + mov x2, #4 + lsl x2, x2, x3 /* cache line size */ + + /* x2 <- minimal cache line size in cache system */ + sub x3, x2, #1 + bic x0, x0, x3 +1: dc civac, x0 /* clean & invalidate data or unified cache */ + add x0, x0, x2 + cmp x0, x1 + b.lo 1b + dsb sy + ret +ENDPROC(v8_flush_dcache_range) + +/* + * void v8_invalidate_icache_all(void) + * + * invalidate all tlb entries. 
+ */ +.section .text.v8_invalidate_icache_all +ENTRY(v8_invalidate_icache_all) + ic ialluis + isb sy + ret +ENDPROC(v8_invalidate_icache_all) + +.section .text.v8_flush_l3_cache +ENTRY(v8_flush_l3_cache) + mov x0, #0 /* return status as success */ + ret +ENDPROC(v8_flush_l3_cache) + .weak v8_flush_l3_cache diff --git a/arch/arm/cpu/cache.c b/arch/arm/cpu/cache.c index 27ead1c..929c385 100644 --- a/arch/arm/cpu/cache.c +++ b/arch/arm/cpu/cache.c @@ -36,6 +36,7 @@ DEFINE_CPU_FNS(v4) DEFINE_CPU_FNS(v5) DEFINE_CPU_FNS(v6) DEFINE_CPU_FNS(v7) +DEFINE_CPU_FNS(v8) void __dma_clean_range(unsigned long start, unsigned long end) { @@ -101,6 +102,11 @@ int arm_set_cache_functions(void) cache_fns = &cache_fns_armv7; break; #endif +#ifdef CONFIG_CPU_64v8 + case CPU_ARCH_ARMv8: + cache_fns = &cache_fns_armv8; + break; +#endif default: while(1); } @@ -138,6 +144,11 @@ void arm_early_mmu_cache_flush(void) v7_mmu_cache_flush(); return; #endif +#ifdef CONFIG_CPU_64v8 + case CPU_ARCH_ARMv8: + v8_dcache_all(); + return; +#endif } } @@ -146,6 +157,7 @@ void v7_mmu_cache_invalidate(void); void arm_early_mmu_cache_invalidate(void) { switch (arm_early_get_cpu_architecture()) { +#if __LINUX_ARM_ARCH__ <= 7 case CPU_ARCH_ARMv4T: case CPU_ARCH_ARMv5: case CPU_ARCH_ARMv5T: @@ -159,5 +171,12 @@ void arm_early_mmu_cache_invalidate(void) v7_mmu_cache_invalidate(); return; #endif +#else +#ifdef CONFIG_CPU_64v8 + case CPU_ARCH_ARMv8: + v8_invalidate_icache_all(); + return; +#endif +#endif } } diff --git a/arch/arm/cpu/exceptions_64.S b/arch/arm/cpu/exceptions_64.S new file mode 100644 index 0000000..5812025 --- /dev/null +++ b/arch/arm/cpu/exceptions_64.S @@ -0,0 +1,127 @@ +/* + * (C) Copyright 2013 + * David Feng <fenghua@phytium.com.cn> + * + * SPDX-License-Identifier: GPL-2.0+ + */ + +#include <config.h> +#include <asm/ptrace.h> +#include <linux/linkage.h> + +/* + * Enter Exception. + * This will save the processor state that is ELR/X0~X30 + * to the stack frame. 
+ */ +.macro exception_entry + stp x29, x30, [sp, #-16]! + stp x27, x28, [sp, #-16]! + stp x25, x26, [sp, #-16]! + stp x23, x24, [sp, #-16]! + stp x21, x22, [sp, #-16]! + stp x19, x20, [sp, #-16]! + stp x17, x18, [sp, #-16]! + stp x15, x16, [sp, #-16]! + stp x13, x14, [sp, #-16]! + stp x11, x12, [sp, #-16]! + stp x9, x10, [sp, #-16]! + stp x7, x8, [sp, #-16]! + stp x5, x6, [sp, #-16]! + stp x3, x4, [sp, #-16]! + stp x1, x2, [sp, #-16]! + + /* Could be running at EL3/EL2/EL1 */ + mrs x11, CurrentEL + cmp x11, #0xC /* Check EL3 state */ + b.eq 1f + cmp x11, #0x8 /* Check EL2 state */ + b.eq 2f + cmp x11, #0x4 /* Check EL1 state */ + b.eq 3f +3: mrs x1, esr_el3 + mrs x2, elr_el3 + b 0f +2: mrs x1, esr_el2 + mrs x2, elr_el2 + b 0f +1: mrs x1, esr_el1 + mrs x2, elr_el1 +0: + stp x2, x0, [sp, #-16]! + mov x0, sp +.endm + +/* + * Exception vectors. + */ + .align 11 + .globl vectors +vectors: + .align 7 + b _do_bad_sync /* Current EL Synchronous Thread */ + + .align 7 + b _do_bad_irq /* Current EL IRQ Thread */ + + .align 7 + b _do_bad_fiq /* Current EL FIQ Thread */ + + .align 7 + b _do_bad_error /* Current EL Error Thread */ + + .align 7 + b _do_sync /* Current EL Synchronous Handler */ + + .align 7 + b _do_irq /* Current EL IRQ Handler */ + + .align 7 + b _do_fiq /* Current EL FIQ Handler */ + + .align 7 + b _do_error /* Current EL Error Handler */ + + +_do_bad_sync: + exception_entry + bl do_bad_sync + +_do_bad_irq: + exception_entry + bl do_bad_irq + +_do_bad_fiq: + exception_entry + bl do_bad_fiq + +_do_bad_error: + exception_entry + bl do_bad_error + +_do_sync: + exception_entry + bl do_sync + +_do_irq: + exception_entry + bl do_irq + +_do_fiq: + exception_entry + bl do_fiq + +_do_error: + exception_entry + bl do_error + +.section .data +.align 4 +.global arm_ignore_data_abort +arm_ignore_data_abort: +.word 0 /* When != 0 data aborts are ignored */ +.global arm_data_abort_occurred +arm_data_abort_occurred: +.word 0 /* set != 0 by the data abort handler */ 
+abort_stack: +.space 8 diff --git a/arch/arm/cpu/interrupts.c b/arch/arm/cpu/interrupts.c index fb4bb78..c34108a 100644 --- a/arch/arm/cpu/interrupts.c +++ b/arch/arm/cpu/interrupts.c @@ -27,6 +27,8 @@ #include <asm/ptrace.h> #include <asm/unwind.h> + +#if __LINUX_ARM_ARCH__ <= 7 /** * Display current register set content * @param[in] regs Guess what @@ -70,10 +72,13 @@ void show_regs (struct pt_regs *regs) unwind_backtrace(regs); #endif } +#endif static void __noreturn do_exception(struct pt_regs *pt_regs) { +#if __LINUX_ARM_ARCH__ <= 7 show_regs(pt_regs); +#endif panic(""); } @@ -121,6 +126,8 @@ void do_prefetch_abort (struct pt_regs *pt_regs) */ void do_data_abort (struct pt_regs *pt_regs) { + +#if __LINUX_ARM_ARCH__ <= 7 u32 far; asm volatile ("mrc p15, 0, %0, c6, c0, 0" : "=r" (far) : : "cc"); @@ -128,6 +135,7 @@ void do_data_abort (struct pt_regs *pt_regs) printf("unable to handle %s at address 0x%08x\n", far < PAGE_SIZE ? "NULL pointer dereference" : "paging request", far); +#endif do_exception(pt_regs); } @@ -156,6 +164,45 @@ void do_irq (struct pt_regs *pt_regs) do_exception(pt_regs); } +#ifdef CONFIG_CPU_64v8 +void do_bad_sync(struct pt_regs *pt_regs) +{ + printf("bad sync\n"); + do_exception(pt_regs); +} + +void do_bad_irq(struct pt_regs *pt_regs) +{ + printf("bad irq\n"); + do_exception(pt_regs); +} + +void do_bad_fiq(struct pt_regs *pt_regs) +{ + printf("bad fiq\n"); + do_exception(pt_regs); +} + +void do_bad_error(struct pt_regs *pt_regs) +{ + printf("bad error\n"); + do_exception(pt_regs); +} + +void do_sync(struct pt_regs *pt_regs) +{ + printf("sync exception\n"); + do_exception(pt_regs); +} + + +void do_error(struct pt_regs *pt_regs) +{ + printf("error exception\n"); + do_exception(pt_regs); +} +#endif + extern volatile int arm_ignore_data_abort; extern volatile int arm_data_abort_occurred; diff --git a/arch/arm/cpu/lowlevel_64.S b/arch/arm/cpu/lowlevel_64.S new file mode 100644 index 0000000..4850895 --- /dev/null +++ b/arch/arm/cpu/lowlevel_64.S 
@@ -0,0 +1,40 @@ +#include <linux/linkage.h> +#include <init.h> +#include <asm/system.h> + +.section ".text_bare_init_","ax" +ENTRY(arm_cpu_lowlevel_init) + adr x0, vectors + mrs x1, CurrentEL + cmp x1, #0xC /* Check EL3 state */ + b.eq 1f + cmp x1, #0x8 /* Check EL2 state */ + b.eq 2f + cmp x1, #0x4 /* Check EL1 state */ + b.eq 3f + +1: + msr vbar_el3, x0 + mov x0, #1 /* Non-Secure EL0/1 */ + orr x0, x0, #(1 << 10) /* 64-bit EL2 */ + msr scr_el3, x0 + msr cptr_el3, xzr + b done + +2: + msr vbar_el2, x0 + mov x0, #0x33ff /* Enable FP/SIMD */ + msr cptr_el2, x0 + b done + + +3: + msr vbar_el1, x0 + mov x0, #(3 << 20) /* Enable FP/SIMD */ + msr cpacr_el1, x0 + b done + +done: + ret + +ENDPROC(arm_cpu_lowlevel_init) diff --git a/arch/arm/include/asm/cache.h b/arch/arm/include/asm/cache.h index 2f6eab0..8fcdb64 100644 --- a/arch/arm/include/asm/cache.h +++ b/arch/arm/include/asm/cache.h @@ -1,9 +1,18 @@ #ifndef __ASM_CACHE_H #define __ASM_CACHE_H +#ifdef CONFIG_CPU_64v8 +extern void v8_invalidate_icache_all(void); +extern void v8_dcache_all(void); +#endif + static inline void flush_icache(void) { +#if __LINUX_ARM_ARCH__ <= 7 asm volatile("mcr p15, 0, %0, c7, c5, 0" : : "r" (0)); +#else + v8_invalidate_icache_all(); +#endif } int arm_set_cache_functions(void); -- 2.1.0 _______________________________________________ barebox mailing list barebox@lists.infradead.org http://lists.infradead.org/mailman/listinfo/barebox ^ permalink raw reply [flat|nested] 19+ messages in thread
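The set/way loops in cache-armv8.S above assemble the operand for the `dc cisw`/`dc isw` instructions from the cache level, way, and set. As a cross-check, here is a C model of that computation (a sketch following the set/way operand format in the ARMv8 ARM; `cisw_operand` is an illustrative name, not a function from the patch). `way_shift` corresponds to the `clz w5, w3` result in the assembly (bit position of the ways field, i.e. clz of ways-1 on 32 bits) and `line_shift` to log2 of the cache line size derived from CCSIDR:

```c
#include <stdint.h>

/*
 * Illustrative model of the value cache-armv8.S ORs together before
 * issuing "dc cisw": cache level in bits [3:1], way left-justified at
 * way_shift, set placed just above the cache-line offset bits.
 */
static uint64_t cisw_operand(unsigned int level, unsigned int way,
			     unsigned int set, unsigned int way_shift,
			     unsigned int line_shift)
{
	return ((uint64_t)level << 1) |		/* cache level, bits [3:1] */
	       ((uint64_t)way << way_shift) |	/* way, left-justified */
	       ((uint64_t)set << line_shift);	/* set, above line offset */
}
```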
* [PATCH v2 05/12] arm: include: system: add arm64 helper functions 2016-06-14 7:06 Raphael Poggi ` (3 preceding siblings ...) 2016-06-14 7:06 ` [PATCH v2 04/12] arm: cpu: add arm64 specific code Raphael Poggi @ 2016-06-14 7:06 ` Raphael Poggi 2016-06-14 7:06 ` [PATCH v2 06/12] arm: cpu: start: arm64 does not support relocation Raphael Poggi ` (7 subsequent siblings) 12 siblings, 0 replies; 19+ messages in thread From: Raphael Poggi @ 2016-06-14 7:06 UTC (permalink / raw) To: barebox; +Cc: Raphael Poggi Signed-off-by: Raphael Poggi <poggi.raph@gmail.com> --- arch/arm/include/asm/system.h | 46 ++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 45 insertions(+), 1 deletion(-) diff --git a/arch/arm/include/asm/system.h b/arch/arm/include/asm/system.h index b118a42..57c7618 100644 --- a/arch/arm/include/asm/system.h +++ b/arch/arm/include/asm/system.h @@ -3,7 +3,11 @@ #if __LINUX_ARM_ARCH__ >= 7 #define isb() __asm__ __volatile__ ("isb" : : : "memory") +#ifdef CONFIG_CPU_64v8 +#define dsb() __asm__ __volatile__ ("dsb sy" : : : "memory") +#else #define dsb() __asm__ __volatile__ ("dsb" : : : "memory") +#endif #define dmb() __asm__ __volatile__ ("dmb" : : : "memory") #elif defined(CONFIG_CPU_XSC3) || __LINUX_ARM_ARCH__ == 6 #define isb() __asm__ __volatile__ ("mcr p15, 0, %0, c7, c5, 4" \ @@ -57,17 +61,58 @@ #define CR_TE (1 << 30) /* Thumb exception enable */ #ifndef __ASSEMBLY__ +#if __LINUX_ARM_ARCH__ >= 7 +static inline unsigned int current_el(void) +{ + unsigned int el; + asm volatile("mrs %0, CurrentEL" : "=r" (el) : : "cc"); + return el >> 2; +} + +static inline unsigned long read_mpidr(void) +{ + unsigned long val; + + asm volatile("mrs %0, mpidr_el1" : "=r" (val)); + + return val; +} +#endif static inline unsigned int get_cr(void) { unsigned int val; + +#ifdef CONFIG_CPU_64v8 + unsigned int el = current_el(); + if (el == 1) + asm volatile("mrs %0, sctlr_el1" : "=r" (val) : : "cc"); + else if (el == 2) + asm volatile("mrs %0, sctlr_el2" : "=r" (val) : : 
"cc"); + else + asm volatile("mrs %0, sctlr_el3" : "=r" (val) : : "cc"); +#else asm volatile ("mrc p15, 0, %0, c1, c0, 0 @ get CR" : "=r" (val) : : "cc"); +#endif + return val; } static inline void set_cr(unsigned int val) { +#ifdef CONFIG_CPU_64v8 + unsigned int el; + + el = current_el(); + if (el == 1) + asm volatile("msr sctlr_el1, %0" : : "r" (val) : "cc"); + else if (el == 2) + asm volatile("msr sctlr_el2, %0" : : "r" (val) : "cc"); + else + asm volatile("msr sctlr_el3, %0" : : "r" (val) : "cc"); +#else asm volatile("mcr p15, 0, %0, c1, c0, 0 @ set CR" : : "r" (val) : "cc"); +#endif isb(); } @@ -90,7 +135,6 @@ static inline void set_vbar(unsigned int vbar) static inline unsigned int get_vbar(void) { return 0; } static inline void set_vbar(unsigned int vbar) {} #endif - #endif #endif /* __ASM_ARM_SYSTEM_H */ -- 2.1.0 _______________________________________________ barebox mailing list barebox@lists.infradead.org http://lists.infradead.org/mailman/listinfo/barebox ^ permalink raw reply [flat|nested] 19+ messages in thread
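The helpers in this patch read CurrentEL and shift the raw value right by two because the register keeps the exception level in bits [3:2]; that is also why the assembly elsewhere in the series compares the raw value against 0x4/0x8/0xC. A tiny model of that decoding, for illustration only:

```c
#include <stdint.h>

/*
 * CurrentEL keeps the exception level in bits [3:2]; shifting right by
 * two yields the EL number (0x4 -> EL1, 0x8 -> EL2, 0xC -> EL3).
 * Illustrative model of what current_el() computes from the register.
 */
static inline unsigned int decode_current_el(uint32_t raw)
{
	return (raw >> 2) & 0x3;
}
```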
* [PATCH v2 06/12] arm: cpu: start: arm64 does not support relocation 2016-06-14 7:06 Raphael Poggi ` (4 preceding siblings ...) 2016-06-14 7:06 ` [PATCH v2 05/12] arm: include: system: add arm64 helper functions Raphael Poggi @ 2016-06-14 7:06 ` Raphael Poggi 2016-06-14 7:06 ` [PATCH v2 07/12] arm: include: bitops: arm64 use generic __fls Raphael Poggi ` (6 subsequent siblings) 12 siblings, 0 replies; 19+ messages in thread From: Raphael Poggi @ 2016-06-14 7:06 UTC (permalink / raw) To: barebox; +Cc: Raphael Poggi For now, relocation is not supported on arm64, so the call to "setup_c" is enclosed in an #if directive. Signed-off-by: Raphael Poggi <poggi.raph@gmail.com> --- arch/arm/cpu/start.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/arch/arm/cpu/start.c b/arch/arm/cpu/start.c index d03d1ed..25ba5fc 100644 --- a/arch/arm/cpu/start.c +++ b/arch/arm/cpu/start.c @@ -151,7 +151,9 @@ __noreturn void barebox_non_pbl_start(unsigned long membase, relocate_to_adr(barebox_base); } +#if __LINUX_ARM_ARCH__ <= 7 setup_c(); +#endif barrier(); -- 2.1.0 _______________________________________________ barebox mailing list barebox@lists.infradead.org http://lists.infradead.org/mailman/listinfo/barebox ^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v2 07/12] arm: include: bitops: arm64 use generic __fls 2016-06-14 7:06 Raphael Poggi ` (5 preceding siblings ...) 2016-06-14 7:06 ` [PATCH v2 06/12] arm: cpu: start: arm64 does not support relocation Raphael Poggi @ 2016-06-14 7:06 ` Raphael Poggi 2016-06-14 7:06 ` [PATCH v2 08/12] arm: include: system_info: add armv8 identification Raphael Poggi ` (5 subsequent siblings) 12 siblings, 0 replies; 19+ messages in thread From: Raphael Poggi @ 2016-06-14 7:06 UTC (permalink / raw) To: barebox; +Cc: Raphael Poggi Signed-off-by: Raphael Poggi <poggi.raph@gmail.com> --- arch/arm/include/asm/bitops.h | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/arch/arm/include/asm/bitops.h b/arch/arm/include/asm/bitops.h index 138ebe2..b51225e 100644 --- a/arch/arm/include/asm/bitops.h +++ b/arch/arm/include/asm/bitops.h @@ -177,6 +177,11 @@ static inline unsigned long ffz(unsigned long word) #include <asm-generic/bitops/ffs.h> #include <asm-generic/bitops/fls.h> #endif /* __ARM__USE_GENERIC_FF */ + +#if __LINUX_ARM_ARCH__ == 8 +#include <asm-generic/bitops/__fls.h> +#endif + #include <asm-generic/bitops/fls64.h> #include <asm-generic/bitops/hweight.h> -- 2.1.0 _______________________________________________ barebox mailing list barebox@lists.infradead.org http://lists.infradead.org/mailman/listinfo/barebox ^ permalink raw reply [flat|nested] 19+ messages in thread
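The generic `__fls` this patch pulls in for arm64 returns the bit index of the most significant set bit of a word (undefined for zero). Its behaviour can be modeled with a GCC builtin, purely for illustration:

```c
/*
 * Behaviour of the asm-generic __fls() this patch selects for arm64:
 * index of the most significant set bit; undefined for word == 0.
 * Modeled here with __builtin_clzl for illustration only.
 */
static unsigned long model_fls(unsigned long word)
{
	return (sizeof(unsigned long) * 8 - 1) - __builtin_clzl(word);
}
```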
* [PATCH v2 08/12] arm: include: system_info: add armv8 identification 2016-06-14 7:06 Raphael Poggi ` (6 preceding siblings ...) 2016-06-14 7:06 ` [PATCH v2 07/12] arm: include: bitops: arm64 use generic __fls Raphael Poggi @ 2016-06-14 7:06 ` Raphael Poggi 2016-06-14 7:06 ` [PATCH v2 09/12] arm: cpu: cpuinfo: add armv8 support Raphael Poggi ` (4 subsequent siblings) 12 siblings, 0 replies; 19+ messages in thread From: Raphael Poggi @ 2016-06-14 7:06 UTC (permalink / raw) To: barebox; +Cc: Raphael Poggi Signed-off-by: Raphael Poggi <poggi.raph@gmail.com> --- arch/arm/include/asm/system_info.h | 38 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 38 insertions(+) diff --git a/arch/arm/include/asm/system_info.h b/arch/arm/include/asm/system_info.h index 0761848..25fffd2 100644 --- a/arch/arm/include/asm/system_info.h +++ b/arch/arm/include/asm/system_info.h @@ -13,6 +13,7 @@ #define CPU_ARCH_ARMv5TEJ 7 #define CPU_ARCH_ARMv6 8 #define CPU_ARCH_ARMv7 9 +#define CPU_ARCH_ARMv8 10 #define CPU_IS_ARM720 0x41007200 #define CPU_IS_ARM720_MASK 0xff00fff0 @@ -41,6 +42,12 @@ #define CPU_IS_CORTEX_A15 0x410fc0f0 #define CPU_IS_CORTEX_A15_MASK 0xff0ffff0 +#define CPU_IS_CORTEX_A53 0x410fd034 +#define CPU_IS_CORTEX_A53_MASK 0xff0ffff0 + +#define CPU_IS_CORTEX_A57 0x411fd070 +#define CPU_IS_CORTEX_A57_MASK 0xff0ffff0 + #define CPU_IS_PXA250 0x69052100 #define CPU_IS_PXA250_MASK 0xfffff7f0 @@ -112,6 +119,19 @@ #define cpu_is_cortex_a15() (0) #endif +#ifdef CONFIG_CPU_64v8 +#ifdef ARM_ARCH +#define ARM_MULTIARCH +#else +#define ARM_ARCH CPU_ARCH_ARMv8 +#endif +#define cpu_is_cortex_a53() cpu_is_arm(CORTEX_A53) +#define cpu_is_cortex_a57() cpu_is_arm(CORTEX_A57) +#else +#define cpu_is_cortex_a53() (0) +#define cpu_is_cortex_a57() (0) +#endif + #ifndef __ASSEMBLY__ #ifdef ARM_MULTIARCH @@ -133,6 +153,23 @@ static inline int arm_early_get_cpu_architecture(void) if (cpu_arch) cpu_arch += CPU_ARCH_ARMv3; } else if ((read_cpuid_id() & 0x000f0000) == 0x000f0000) { +#ifdef 
CONFIG_CPU_64v8 + unsigned int isar2; + + __asm__ __volatile__( + "mrs %0, id_isar2_el1\n" + : "=r" (isar2) + : + : "memory"); + + + /* Check Load/Store acquire to check if ARMv8 or not */ + + if (isar2 & 0x2) + cpu_arch = CPU_ARCH_ARMv8; + else + cpu_arch = CPU_ARCH_UNKNOWN; +#else unsigned int mmfr0; /* Revised CPUID format. Read the Memory Model Feature @@ -149,6 +186,7 @@ static inline int arm_early_get_cpu_architecture(void) cpu_arch = CPU_ARCH_UNKNOWN; } else cpu_arch = CPU_ARCH_UNKNOWN; +#endif return cpu_arch; } -- 2.1.0 _______________________________________________ barebox mailing list barebox@lists.infradead.org http://lists.infradead.org/mailman/listinfo/barebox ^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v2 09/12] arm: cpu: cpuinfo: add armv8 support 2016-06-14 7:06 Raphael Poggi ` (7 preceding siblings ...) 2016-06-14 7:06 ` [PATCH v2 08/12] arm: include: system_info: add armv8 identification Raphael Poggi @ 2016-06-14 7:06 ` Raphael Poggi 2016-06-14 7:06 ` [PATCH v2 10/12] arm: cpu: disable code portion in armv8 case Raphael Poggi ` (3 subsequent siblings) 12 siblings, 0 replies; 19+ messages in thread From: Raphael Poggi @ 2016-06-14 7:06 UTC (permalink / raw) To: barebox; +Cc: Raphael Poggi Signed-off-by: Raphael Poggi <poggi.raph@gmail.com> --- arch/arm/cpu/cpuinfo.c | 58 ++++++++++++++++++++++++++++++++++++++++++++++---- 1 file changed, 54 insertions(+), 4 deletions(-) diff --git a/arch/arm/cpu/cpuinfo.c b/arch/arm/cpu/cpuinfo.c index 8b22e9b..86e19d9 100644 --- a/arch/arm/cpu/cpuinfo.c +++ b/arch/arm/cpu/cpuinfo.c @@ -30,12 +30,15 @@ #define CPU_ARCH_ARMv5TEJ 7 #define CPU_ARCH_ARMv6 8 #define CPU_ARCH_ARMv7 9 +#define CPU_ARCH_ARMv8 10 #define ARM_CPU_PART_CORTEX_A5 0xC050 #define ARM_CPU_PART_CORTEX_A7 0xC070 #define ARM_CPU_PART_CORTEX_A8 0xC080 #define ARM_CPU_PART_CORTEX_A9 0xC090 #define ARM_CPU_PART_CORTEX_A15 0xC0F0 +#define ARM_CPU_PART_CORTEX_A53 0xD030 +#define ARM_CPU_PART_CORTEX_A57 0xD070 static void decode_cache(unsigned long size) { @@ -60,6 +63,25 @@ static int do_cpuinfo(int argc, char *argv[]) int i; int cpu_arch; +#ifdef CONFIG_CPU_64v8 + __asm__ __volatile__( + "mrs %0, midr_el1\n" + : "=r" (mainid) + : + : "memory"); + + __asm__ __volatile__( + "mrs %0, ctr_el0\n" + : "=r" (cache) + : + : "memory"); + + __asm__ __volatile__( + "mrs %0, sctlr_el1\n" + : "=r" (cr) + : + : "memory"); +#else __asm__ __volatile__( "mrc p15, 0, %0, c0, c0, 0 @ read control reg\n" : "=r" (mainid) @@ -74,9 +96,10 @@ static int do_cpuinfo(int argc, char *argv[]) __asm__ __volatile__( "mrc p15, 0, %0, c1, c0, 0 @ read control reg\n" - : "=r" (cr) - : - : "memory"); + : "=r" (cr) + : + : "memory"); +#endif switch (mainid >> 24) { case 0x41: @@ -107,6 
+130,23 @@ static int do_cpuinfo(int argc, char *argv[]) if (cpu_arch) cpu_arch += CPU_ARCH_ARMv3; } else if ((mainid & 0x000f0000) == 0x000f0000) { +#ifdef CONFIG_CPU_64v8 + unsigned int isar2; + + __asm__ __volatile__( + "mrs %0, id_isar2_el1\n" + : "=r" (isar2) + : + : "memory"); + + + /* Check Load/Store acquire to check if ARMv8 or not */ + + if (isar2 & 0x2) + cpu_arch = CPU_ARCH_ARMv8; + else + cpu_arch = CPU_ARCH_UNKNOWN; +#else unsigned int mmfr0; /* Revised CPUID format. Read the Memory Model Feature @@ -121,6 +161,7 @@ static int do_cpuinfo(int argc, char *argv[]) cpu_arch = CPU_ARCH_ARMv6; else cpu_arch = CPU_ARCH_UNKNOWN; +#endif } else cpu_arch = CPU_ARCH_UNKNOWN; @@ -152,6 +193,9 @@ static int do_cpuinfo(int argc, char *argv[]) case CPU_ARCH_ARMv7: architecture = "v7"; break; + case CPU_ARCH_ARMv8: + architecture = "v8"; + break; case CPU_ARCH_UNKNOWN: default: architecture = "Unknown"; @@ -160,7 +204,7 @@ static int do_cpuinfo(int argc, char *argv[]) printf("implementer: %s\narchitecture: %s\n", implementer, architecture); - if (cpu_arch == CPU_ARCH_ARMv7) { + if (cpu_arch >= CPU_ARCH_ARMv7) { unsigned int major, minor; char *part; major = (mainid >> 20) & 0xf; @@ -181,6 +225,12 @@ static int do_cpuinfo(int argc, char *argv[]) case ARM_CPU_PART_CORTEX_A15: part = "Cortex-A15"; break; + case ARM_CPU_PART_CORTEX_A53: + part = "Cortex-A53"; + break; + case ARM_CPU_PART_CORTEX_A57: + part = "Cortex-A57"; + break; default: part = "unknown"; } -- 2.1.0 _______________________________________________ barebox mailing list barebox@lists.infradead.org http://lists.infradead.org/mailman/listinfo/barebox ^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v2 10/12] arm: cpu: disable code portion in armv8 case 2016-06-14 7:06 Raphael Poggi ` (8 preceding siblings ...) 2016-06-14 7:06 ` [PATCH v2 09/12] arm: cpu: cpuinfo: add armv8 support Raphael Poggi @ 2016-06-14 7:06 ` Raphael Poggi 2016-06-14 7:06 ` [PATCH v2 11/12] arm: cpu: add basic arm64 mmu support Raphael Poggi ` (2 subsequent siblings) 12 siblings, 0 replies; 19+ messages in thread From: Raphael Poggi @ 2016-06-14 7:06 UTC (permalink / raw) To: barebox; +Cc: Raphael Poggi OMAP-specific code and mmu_disable() are enclosed in #if directives (the ARMv8 code will implement these elsewhere). Signed-off-by: Raphael Poggi <poggi.raph@gmail.com> --- arch/arm/cpu/cpu.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/arch/arm/cpu/cpu.c b/arch/arm/cpu/cpu.c index eb12166..cc54324 100644 --- a/arch/arm/cpu/cpu.c +++ b/arch/arm/cpu/cpu.c @@ -68,6 +68,7 @@ int icache_status(void) return (get_cr () & CR_I) != 0; } +#if __LINUX_ARM_ARCH__ <= 7 /* * SoC like the ux500 have the l2x0 always enable * with or without MMU enable @@ -86,6 +87,7 @@ void mmu_disable(void) } __mmu_cache_off(); } +#endif /** * Disable MMU and D-cache, flush caches @@ -100,6 +102,8 @@ static void arch_shutdown(void) mmu_disable(); flush_icache(); + +#if __LINUX_ARM_ARCH__ <= 7 /* * barebox normally does not use interrupts, but some functionalities * (eg. OMAP4_USBBOOT) require them enabled. So be sure interrupts are @@ -108,6 +112,7 @@ static void arch_shutdown(void) __asm__ __volatile__("mrs %0, cpsr" : "=r"(r)); r |= PSR_I_BIT; __asm__ __volatile__("msr cpsr, %0" : : "r"(r)); +#endif } archshutdown_exitcall(arch_shutdown); -- 2.1.0 _______________________________________________ barebox mailing list barebox@lists.infradead.org http://lists.infradead.org/mailman/listinfo/barebox ^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v2 11/12] arm: cpu: add basic arm64 mmu support 2016-06-14 7:06 Raphael Poggi ` (9 preceding siblings ...) 2016-06-14 7:06 ` [PATCH v2 10/12] arm: cpu: disable code portion in armv8 case Raphael Poggi @ 2016-06-14 7:06 ` Raphael Poggi 2016-06-14 7:06 ` [PATCH v2 12/12] arm: boards: add mach-qemu and virt64 board Raphael Poggi 2016-06-24 8:17 ` Raphaël Poggi 12 siblings, 0 replies; 19+ messages in thread From: Raphael Poggi @ 2016-06-14 7:06 UTC (permalink / raw) To: barebox; +Cc: Raphael Poggi This commit adds basic MMU support; DMA cache handling and memory region remapping are not yet supported. The current MMU configuration is: - 4KB granularity - 3-level lookup (skipping L0) - 33 bits per VA This is based on the coreboot and U-Boot MMU configurations. Signed-off-by: Raphael Poggi <poggi.raph@gmail.com> --- arch/arm/cpu/Makefile | 4 +- arch/arm/cpu/mmu.h | 54 +++++++ arch/arm/cpu/mmu_64.c | 333 +++++++++++++++++++++++++++++++++++++++ arch/arm/include/asm/mmu.h | 14 +- arch/arm/include/asm/pgtable64.h | 140 ++++++++++++++++ 5 files changed, 540 insertions(+), 5 deletions(-) create mode 100644 arch/arm/cpu/mmu_64.c create mode 100644 arch/arm/include/asm/pgtable64.h diff --git a/arch/arm/cpu/Makefile b/arch/arm/cpu/Makefile index 86a4a90..7cf5da7 100644 --- a/arch/arm/cpu/Makefile +++ b/arch/arm/cpu/Makefile @@ -24,8 +24,8 @@ endif obj-$(CONFIG_CMD_ARM_CPUINFO) += cpuinfo.o obj-$(CONFIG_CMD_ARM_MMUINFO) += mmuinfo.o obj-$(CONFIG_OFDEVICE) += dtb.o -obj-$(CONFIG_MMU) += mmu.o cache.o mmu-early.o -pbl-$(CONFIG_MMU) += mmu-early.o +obj-$(CONFIG_MMU) += cache.o + ifeq ($(CONFIG_MMU),) obj-y += no-mmu.o endif diff --git a/arch/arm/cpu/mmu.h b/arch/arm/cpu/mmu.h index 79ebc80..186d408 100644 --- a/arch/arm/cpu/mmu.h +++ b/arch/arm/cpu/mmu.h @@ -1,6 +1,60 @@ #ifndef __ARM_MMU_H #define __ARM_MMU_H +#ifdef CONFIG_CPU_64v8 + +#define TCR_FLAGS (TCR_TG0_4K | \ + TCR_SHARED_OUTER | \ + TCR_SHARED_INNER | \ + TCR_IRGN_WBWA | \ + TCR_ORGN_WBWA | \ + TCR_T0SZ(BITS_PER_VA)) +
+#ifndef __ASSEMBLY__ + +static inline void set_ttbr_tcr_mair(int el, uint64_t table, uint64_t tcr, uint64_t attr) +{ + asm volatile("dsb sy"); + if (el == 1) { + asm volatile("msr ttbr0_el1, %0" : : "r" (table) : "memory"); + asm volatile("msr tcr_el1, %0" : : "r" (tcr) : "memory"); + asm volatile("msr mair_el1, %0" : : "r" (attr) : "memory"); + } else if (el == 2) { + asm volatile("msr ttbr0_el2, %0" : : "r" (table) : "memory"); + asm volatile("msr tcr_el2, %0" : : "r" (tcr) : "memory"); + asm volatile("msr mair_el2, %0" : : "r" (attr) : "memory"); + } else if (el == 3) { + asm volatile("msr ttbr0_el3, %0" : : "r" (table) : "memory"); + asm volatile("msr tcr_el3, %0" : : "r" (tcr) : "memory"); + asm volatile("msr mair_el3, %0" : : "r" (attr) : "memory"); + } else { + hang(); + } + asm volatile("isb"); +} + +static inline uint64_t get_ttbr(int el) +{ + uint64_t val; + if (el == 1) { + asm volatile("mrs %0, ttbr0_el1" : "=r" (val)); + } else if (el == 2) { + asm volatile("mrs %0, ttbr0_el2" : "=r" (val)); + } else if (el == 3) { + asm volatile("mrs %0, ttbr0_el3" : "=r" (val)); + } else { + hang(); + } + + return val; +} + +void mmu_early_enable(uint64_t membase, uint64_t memsize, uint64_t _ttb); + +#endif + +#endif /* CONFIG_CPU_64v8 */ + #ifdef CONFIG_MMU void __mmu_cache_on(void); void __mmu_cache_off(void); diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c new file mode 100644 index 0000000..256aa7e --- /dev/null +++ b/arch/arm/cpu/mmu_64.c @@ -0,0 +1,333 @@ +/* + * Copyright (c) 2009-2013 Sascha Hauer <s.hauer@pengutronix.de>, Pengutronix + * Copyright (c) 2016 Raphaël Poggi <poggi.raph@gmail.com> + * + * See file CREDITS for list of people who contributed to this + * project. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 + * as published by the Free Software Foundation. 
+ * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + */ + +#define pr_fmt(fmt) "mmu: " fmt + +#include <common.h> +#include <dma-dir.h> +#include <init.h> +#include <mmu.h> +#include <errno.h> +#include <linux/sizes.h> +#include <asm/memory.h> +#include <asm/barebox-arm.h> +#include <asm/system.h> +#include <asm/cache.h> +#include <memory.h> +#include <asm/system_info.h> + +#include "mmu.h" + +static uint64_t *ttb; +static int free_idx; + +static void arm_mmu_not_initialized_error(void) +{ + /* + * This means: + * - one of the MMU functions like dma_alloc_coherent + * or remap_range is called too early, before the MMU is initialized + * - Or the MMU initialization has failed earlier + */ + panic("MMU not initialized\n"); +} + + +/* + * Do it the simple way for now and invalidate the entire + * tlb + */ +static inline void tlb_invalidate(void) +{ + unsigned int el = current_el(); + + dsb(); + + if (el == 1) + __asm__ __volatile__("tlbi alle1\n\t" : : : "memory"); + else if (el == 2) + __asm__ __volatile__("tlbi alle2\n\t" : : : "memory"); + else if (el == 3) + __asm__ __volatile__("tlbi alle3\n\t" : : : "memory"); + + dsb(); + isb(); +} + +static int level2shift(int level) +{ + /* Page is 12 bits wide, every level translates 9 bits */ + return (12 + 9 * (3 - level)); +} + +static uint64_t level2mask(int level) +{ + uint64_t mask = -EINVAL; + + if (level == 1) + mask = L1_ADDR_MASK; + else if (level == 2) + mask = L2_ADDR_MASK; + else if (level == 3) + mask = L3_ADDR_MASK; + + return mask; +} + +static int pte_type(uint64_t *pte) +{ + return *pte & PMD_TYPE_MASK; +} + +static void set_table(uint64_t *pt, uint64_t *table_addr) +{ + uint64_t val; + + val = PMD_TYPE_TABLE | (uint64_t)table_addr; + *pt = val; +} + +static uint64_t *create_table(void) +{ + uint64_t 
*new_table = (uint64_t *)((void *)ttb + free_idx * GRANULE_SIZE);
+
+	/* Mark all entries as invalid */
+	memset(new_table, 0, GRANULE_SIZE);
+
+	free_idx++;
+
+	return new_table;
+}
+
+static uint64_t *get_level_table(uint64_t *pte)
+{
+	uint64_t *table = (uint64_t *)(*pte & XLAT_ADDR_MASK);
+
+	if (pte_type(pte) != PMD_TYPE_TABLE) {
+		table = create_table();
+		set_table(pte, table);
+	}
+
+	return table;
+}
+
+static uint64_t *find_pte(uint64_t addr)
+{
+	uint64_t *pte;
+	uint64_t block_shift;
+	uint64_t idx;
+	int i;
+
+	pte = ttb;
+
+	for (i = 1; i < 4; i++) {
+		block_shift = level2shift(i);
+		idx = (addr & level2mask(i)) >> block_shift;
+		pte += idx;
+
+		if ((pte_type(pte) != PMD_TYPE_TABLE) || (block_shift <= GRANULE_SIZE_SHIFT))
+			break;
+		else
+			pte = (uint64_t *)(*pte & XLAT_ADDR_MASK);
+	}
+
+	return pte;
+}
+
+static void map_region(uint64_t virt, uint64_t phys, uint64_t size, uint64_t attr)
+{
+	uint64_t block_size;
+	uint64_t block_shift;
+	uint64_t *pte;
+	uint64_t idx;
+	uint64_t addr;
+	uint64_t *table;
+	int level;
+
+	if (!ttb)
+		arm_mmu_not_initialized_error();
+
+	addr = virt;
+
+	attr &= ~(PMD_TYPE_SECT);
+
+	while (size) {
+		table = ttb;
+		for (level = 1; level < 4; level++) {
+			block_shift = level2shift(level);
+			idx = (addr & level2mask(level)) >> block_shift;
+			block_size = (1 << block_shift);
+
+			pte = table + idx;
+
+			if (level == 3)
+				attr |= PTE_TYPE_PAGE;
+			else
+				attr |= PMD_TYPE_SECT;
+
+			if (size >= block_size && IS_ALIGNED(addr, block_size)) {
+				*pte = phys | attr;
+				addr += block_size;
+				phys += block_size;
+				size -= block_size;
+				break;
+			}
+
+			table = get_level_table(pte);
+		}
+	}
+}
+
+static void create_sections(uint64_t virt, uint64_t phys, uint64_t size, uint64_t flags)
+{
+	map_region(virt, phys, size, flags);
+}
+
+void *map_io_sections(unsigned long phys, void *_start, size_t size)
+{
+	map_region((uint64_t)_start, phys, (uint64_t)size, UNCACHED_MEM);
+
+	tlb_invalidate();
+	return _start;
+}
+
+int arch_remap_range(void *_start,
size_t size, unsigned flags)
+{
+	map_region((uint64_t)_start, (uint64_t)_start, (uint64_t)size, flags);
+
+	return 0;
+}
+
+/*
+ * Prepare the MMU for usage and enable it.
+ */
+static int mmu_init(void)
+{
+	struct memory_bank *bank;
+
+	if (list_empty(&memory_banks))
+		/*
+		 * If you see this it means you have no memory registered.
+		 * This can be done either with arm_add_mem_device() in an
+		 * initcall prior to mmu_initcall or via devicetree in the
+		 * memory node.
+		 */
+		panic("MMU: No memory bank found! Cannot continue\n");
+
+	if (get_cr() & CR_M) {
+		ttb = (uint64_t *)get_ttbr(current_el());
+		if (!request_sdram_region("ttb", (unsigned long)ttb, SZ_16K))
+			/*
+			 * This can mean that:
+			 * - the early MMU code has put the ttb into a place
+			 *   which we don't have inside our available memory
+			 * - Somebody else has occupied the ttb region which means
+			 *   the ttb will get corrupted.
+			 */
+			pr_crit("Critical Error: Can't request SDRAM region for ttb at %p\n",
+					ttb);
+	} else {
+		ttb = memalign(0x1000, SZ_16K);
+		free_idx = 1;
+
+		memset(ttb, 0, GRANULE_SIZE);
+
+		set_ttbr_tcr_mair(current_el(), (uint64_t)ttb, TCR_FLAGS, UNCACHED_MEM);
+	}
+
+	pr_debug("ttb: 0x%p\n", ttb);
+
+	/* create a flat uncached mapping of the first granule */
+	create_sections(0, 0, GRANULE_SIZE, UNCACHED_MEM);
+
+	/*
+	 * First remap sdram cached using sections.
+ * This is to speed up the generation of 2nd level page tables + * below + */ + for_each_memory_bank(bank) + create_sections(bank->start, bank->start, bank->size, CACHED_MEM); + + return 0; +} +mmu_initcall(mmu_init); + +void mmu_enable(void) +{ + if (!ttb) + arm_mmu_not_initialized_error(); + + if (!(get_cr() & CR_M)) { + + isb(); + set_cr(get_cr() | CR_M | CR_C | CR_I); + } +} + +void mmu_disable(void) +{ + unsigned int cr; + + if (!ttb) + arm_mmu_not_initialized_error(); + + cr = get_cr(); + cr &= ~(CR_M | CR_C | CR_I); + + tlb_invalidate(); + + dsb(); + isb(); + + set_cr(cr); + + dsb(); + isb(); +} + +void mmu_early_enable(uint64_t membase, uint64_t memsize, uint64_t _ttb) +{ + ttb = (uint64_t *)_ttb; + + memset(ttb, 0, GRANULE_SIZE); + free_idx = 1; + + set_ttbr_tcr_mair(current_el(), (uint64_t)ttb, TCR_FLAGS, UNCACHED_MEM); + + create_sections(0, 0, 4096, UNCACHED_MEM); + + create_sections(membase, membase, memsize, CACHED_MEM); + + isb(); + set_cr(get_cr() | CR_M); +} + +unsigned long virt_to_phys(volatile void *virt) +{ + return (unsigned long)virt; +} + +void *phys_to_virt(unsigned long phys) +{ + return (void *)phys; +} diff --git a/arch/arm/include/asm/mmu.h b/arch/arm/include/asm/mmu.h index 8de6544..f68ab37 100644 --- a/arch/arm/include/asm/mmu.h +++ b/arch/arm/include/asm/mmu.h @@ -6,16 +6,24 @@ #include <malloc.h> #include <xfuncs.h> +#ifdef CONFIG_CPU_64v8 +#include <asm/pgtable64.h> + +#define DEV_MEM (PMD_ATTRINDX(MT_DEVICE_nGnRnE) | PMD_SECT_AF | PMD_TYPE_SECT) +#define CACHED_MEM (PMD_ATTRINDX(MT_NORMAL) | PMD_SECT_S | PMD_SECT_AF | PMD_TYPE_SECT) +#define UNCACHED_MEM (PMD_ATTRINDX(MT_NORMAL_NC) | PMD_SECT_S | PMD_SECT_AF | PMD_TYPE_SECT) +#else #include <asm/pgtable.h> #define PMD_SECT_DEF_UNCACHED (PMD_SECT_AP_WRITE | PMD_SECT_AP_READ | PMD_TYPE_SECT) #define PMD_SECT_DEF_CACHED (PMD_SECT_WB | PMD_SECT_DEF_UNCACHED) +#endif + + struct arm_memory; -static inline void mmu_enable(void) -{ -} +void mmu_enable(void); void mmu_disable(void); 
static inline void arm_create_section(unsigned long virt, unsigned long phys, int size_m, unsigned int flags) diff --git a/arch/arm/include/asm/pgtable64.h b/arch/arm/include/asm/pgtable64.h new file mode 100644 index 0000000..20bea5b --- /dev/null +++ b/arch/arm/include/asm/pgtable64.h @@ -0,0 +1,140 @@ +/* + * Copyright (C) 2012 ARM Ltd. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see <http://www.gnu.org/licenses/>. + */ +#ifndef __ASM_PGTABLE64_H +#define __ASM_PGTABLE64_H + +#define UL(x) _AC(x, UL) + +#define UNUSED_DESC 0x6EbAAD0BBADbA6E0 + +#define VA_START 0x0 +#define BITS_PER_VA 33 + +/* Granule size of 4KB is being used */ +#define GRANULE_SIZE_SHIFT 12 +#define GRANULE_SIZE (1 << GRANULE_SIZE_SHIFT) +#define XLAT_ADDR_MASK ((1UL << BITS_PER_VA) - GRANULE_SIZE) +#define GRANULE_SIZE_MASK ((1 << GRANULE_SIZE_SHIFT) - 1) + +#define BITS_RESOLVED_PER_LVL (GRANULE_SIZE_SHIFT - 3) +#define L1_ADDR_SHIFT (GRANULE_SIZE_SHIFT + BITS_RESOLVED_PER_LVL * 2) +#define L2_ADDR_SHIFT (GRANULE_SIZE_SHIFT + BITS_RESOLVED_PER_LVL * 1) +#define L3_ADDR_SHIFT (GRANULE_SIZE_SHIFT + BITS_RESOLVED_PER_LVL * 0) + + +#define L1_ADDR_MASK (((1UL << BITS_RESOLVED_PER_LVL) - 1) << L1_ADDR_SHIFT) +#define L2_ADDR_MASK (((1UL << BITS_RESOLVED_PER_LVL) - 1) << L2_ADDR_SHIFT) +#define L3_ADDR_MASK (((1UL << BITS_RESOLVED_PER_LVL) - 1) << L3_ADDR_SHIFT) + +/* These macros give the size of the region addressed by each entry of a xlat + table at any given level */ +#define 
L3_XLAT_SIZE		(1UL << L3_ADDR_SHIFT)
+#define L2_XLAT_SIZE		(1UL << L2_ADDR_SHIFT)
+#define L1_XLAT_SIZE		(1UL << L1_ADDR_SHIFT)
+
+#define GRANULE_MASK	GRANULE_SIZE
+
+/*
+ * Level 2 descriptor (PMD).
+ */
+#define PMD_TYPE_MASK		(3 << 0)
+#define PMD_TYPE_FAULT		(0 << 0)
+#define PMD_TYPE_TABLE		(3 << 0)
+#define PMD_TYPE_SECT		(1 << 0)
+#define PMD_TABLE_BIT		(1 << 1)
+
+/*
+ * Section
+ */
+#define PMD_SECT_VALID		(1 << 0)
+#define PMD_SECT_USER		(1 << 6)	/* AP[1] */
+#define PMD_SECT_RDONLY		(1 << 7)	/* AP[2] */
+#define PMD_SECT_S		(3 << 8)
+#define PMD_SECT_AF		(1 << 10)
+#define PMD_SECT_NG		(1 << 11)
+#define PMD_SECT_CONT		(1UL << 52)
+#define PMD_SECT_PXN		(1UL << 53)
+#define PMD_SECT_UXN		(1UL << 54)
+
+/*
+ * AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
+ */
+#define PMD_ATTRINDX(t)		((t) << 2)
+#define PMD_ATTRINDX_MASK	(7 << 2)
+
+/*
+ * Level 3 descriptor (PTE).
+ */
+#define PTE_TYPE_MASK		(3 << 0)
+#define PTE_TYPE_FAULT		(0 << 0)
+#define PTE_TYPE_PAGE		(3 << 0)
+#define PTE_TABLE_BIT		(1 << 1)
+#define PTE_USER		(1 << 6)	/* AP[1] */
+#define PTE_RDONLY		(1 << 7)	/* AP[2] */
+#define PTE_SHARED		(3 << 8)	/* SH[1:0], inner shareable */
+#define PTE_AF			(1 << 10)	/* Access Flag */
+#define PTE_NG			(1 << 11)	/* nG */
+#define PTE_DBM			(1UL << 51)	/* Dirty Bit Management */
+#define PTE_CONT		(1UL << 52)	/* Contiguous range */
+#define PTE_PXN			(1UL << 53)	/* Privileged XN */
+#define PTE_UXN			(1UL << 54)	/* User XN */
+
+/*
+ * AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
+ */
+#define PTE_ATTRINDX(t)		((t) << 2)
+#define PTE_ATTRINDX_MASK	(7 << 2)
+
+/*
+ * Memory types available.
+ */
+#define MT_DEVICE_nGnRnE	0
+#define MT_DEVICE_nGnRE		1
+#define MT_DEVICE_GRE		2
+#define MT_NORMAL_NC		3
+#define MT_NORMAL		4
+#define MT_NORMAL_WT		5
+
+/*
+ * TCR flags.
+ */ +#define TCR_T0SZ(x) ((64 - (x)) << 0) +#define TCR_IRGN_NC (0 << 8) +#define TCR_IRGN_WBWA (1 << 8) +#define TCR_IRGN_WT (2 << 8) +#define TCR_IRGN_WBNWA (3 << 8) +#define TCR_IRGN_MASK (3 << 8) +#define TCR_ORGN_NC (0 << 10) +#define TCR_ORGN_WBWA (1 << 10) +#define TCR_ORGN_WT (2 << 10) +#define TCR_ORGN_WBNWA (3 << 10) +#define TCR_ORGN_MASK (3 << 10) +#define TCR_SHARED_NON (0 << 12) +#define TCR_SHARED_OUTER (2 << 12) +#define TCR_SHARED_INNER (3 << 12) +#define TCR_TG0_4K (0 << 14) +#define TCR_TG0_64K (1 << 14) +#define TCR_TG0_16K (2 << 14) +#define TCR_EL1_IPS_BITS (UL(3) << 32) /* 42 bits physical address */ +#define TCR_EL2_IPS_BITS (3 << 16) /* 42 bits physical address */ +#define TCR_EL3_IPS_BITS (3 << 16) /* 42 bits physical address */ + +#define TCR_EL1_RSVD (1 << 31) +#define TCR_EL2_RSVD (1 << 31 | 1 << 23) +#define TCR_EL3_RSVD (1 << 31 | 1 << 23) + +#endif -- 2.1.0 _______________________________________________ barebox mailing list barebox@lists.infradead.org http://lists.infradead.org/mailman/listinfo/barebox ^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v2 12/12] arm: boards: add mach-qemu and virt64 board 2016-06-14 7:06 Raphael Poggi ` (10 preceding siblings ...) 2016-06-14 7:06 ` [PATCH v2 11/12] arm: cpu: add basic arm64 mmu support Raphael Poggi @ 2016-06-14 7:06 ` Raphael Poggi 2016-06-24 8:17 ` Raphaël Poggi 12 siblings, 0 replies; 19+ messages in thread From: Raphael Poggi @ 2016-06-14 7:06 UTC (permalink / raw) To: barebox; +Cc: Raphael Poggi Introduce mach-qemu and add qemu virt64 board which emulates arm64 board. Signed-off-by: Raphael Poggi <poggi.raph@gmail.com> --- arch/arm/Kconfig | 5 +++ arch/arm/Makefile | 1 + arch/arm/boards/Makefile | 1 + arch/arm/boards/qemu-virt64/Kconfig | 8 ++++ arch/arm/boards/qemu-virt64/Makefile | 1 + arch/arm/boards/qemu-virt64/init.c | 67 ++++++++++++++++++++++++++++++ arch/arm/configs/qemu_virt64_defconfig | 55 ++++++++++++++++++++++++ arch/arm/mach-qemu/Kconfig | 18 ++++++++ arch/arm/mach-qemu/Makefile | 2 + arch/arm/mach-qemu/include/mach/debug_ll.h | 24 +++++++++++ arch/arm/mach-qemu/include/mach/devices.h | 13 ++++++ arch/arm/mach-qemu/virt_devices.c | 30 +++++++++++++ arch/arm/mach-qemu/virt_lowlevel.c | 19 +++++++++ 13 files changed, 244 insertions(+) create mode 100644 arch/arm/boards/qemu-virt64/Kconfig create mode 100644 arch/arm/boards/qemu-virt64/Makefile create mode 100644 arch/arm/boards/qemu-virt64/init.c create mode 100644 arch/arm/configs/qemu_virt64_defconfig create mode 100644 arch/arm/mach-qemu/Kconfig create mode 100644 arch/arm/mach-qemu/Makefile create mode 100644 arch/arm/mach-qemu/include/mach/debug_ll.h create mode 100644 arch/arm/mach-qemu/include/mach/devices.h create mode 100644 arch/arm/mach-qemu/virt_devices.c create mode 100644 arch/arm/mach-qemu/virt_lowlevel.c diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 986fdaa..f904579 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -255,6 +255,10 @@ config ARCH_ZYNQ bool "Xilinx Zynq-based boards" select HAS_DEBUG_LL +config ARCH_QEMU + bool "ARM QEMU boards" + select 
HAS_DEBUG_LL + endchoice source arch/arm/cpu/Kconfig @@ -280,6 +284,7 @@ source arch/arm/mach-vexpress/Kconfig source arch/arm/mach-tegra/Kconfig source arch/arm/mach-uemd/Kconfig source arch/arm/mach-zynq/Kconfig +source arch/arm/mach-qemu/Kconfig config ARM_ASM_UNIFIED bool diff --git a/arch/arm/Makefile b/arch/arm/Makefile index 2743d96..fd3453d 100644 --- a/arch/arm/Makefile +++ b/arch/arm/Makefile @@ -94,6 +94,7 @@ machine-$(CONFIG_ARCH_VEXPRESS) := vexpress machine-$(CONFIG_ARCH_TEGRA) := tegra machine-$(CONFIG_ARCH_UEMD) := uemd machine-$(CONFIG_ARCH_ZYNQ) := zynq +machine-$(CONFIG_ARCH_QEMU) := qemu # Board directory name. This list is sorted alphanumerically diff --git a/arch/arm/boards/Makefile b/arch/arm/boards/Makefile index 5a755c9..f12c074 100644 --- a/arch/arm/boards/Makefile +++ b/arch/arm/boards/Makefile @@ -134,3 +134,4 @@ obj-$(CONFIG_MACH_VIRT2REAL) += virt2real/ obj-$(CONFIG_MACH_ZEDBOARD) += avnet-zedboard/ obj-$(CONFIG_MACH_ZYLONITE) += zylonite/ obj-$(CONFIG_MACH_VARISCITE_MX6) += variscite-mx6/ +obj-$(CONFIG_MACH_QEMU_VIRT64) += qemu-virt64/ diff --git a/arch/arm/boards/qemu-virt64/Kconfig b/arch/arm/boards/qemu-virt64/Kconfig new file mode 100644 index 0000000..b7bee3a --- /dev/null +++ b/arch/arm/boards/qemu-virt64/Kconfig @@ -0,0 +1,8 @@ + +if MACH_QEMU + +config ARCH_TEXT_BASE + hex + default 0x40000000 + +endif diff --git a/arch/arm/boards/qemu-virt64/Makefile b/arch/arm/boards/qemu-virt64/Makefile new file mode 100644 index 0000000..eb072c0 --- /dev/null +++ b/arch/arm/boards/qemu-virt64/Makefile @@ -0,0 +1 @@ +obj-y += init.o diff --git a/arch/arm/boards/qemu-virt64/init.c b/arch/arm/boards/qemu-virt64/init.c new file mode 100644 index 0000000..312cc7f --- /dev/null +++ b/arch/arm/boards/qemu-virt64/init.c @@ -0,0 +1,67 @@ +/* + * Copyright (C) 2016 Raphaël Poggi <poggi.raph@gmail.com> + * + * GPLv2 only + */ + +#include <common.h> +#include <init.h> +#include <asm/armlinux.h> +#include <asm/system_info.h> +#include <mach/devices.h> 
+#include <environment.h> +#include <linux/sizes.h> +#include <io.h> +#include <globalvar.h> +#include <asm/mmu.h> + +static int virt_mem_init(void) +{ + virt_add_ddram(SZ_512M); + + add_cfi_flash_device(0, 0x00000000, SZ_4M, 0); + + devfs_add_partition("nor0", 0x00000, 0x40000, DEVFS_PARTITION_FIXED, "self0"); + devfs_add_partition("nor0", 0x40000, 0x20000, DEVFS_PARTITION_FIXED, "env0"); + + return 0; +} +mem_initcall(virt_mem_init); + +static int virt_console_init(void) +{ + virt_register_uart(0); + + return 0; +} +console_initcall(virt_console_init); + +static int virt_core_init(void) +{ + char *hostname = "virt"; + + if (cpu_is_cortex_a53()) + hostname = "virt-a53"; + else if (cpu_is_cortex_a57()) + hostname = "virt-a57"; + + barebox_set_model("ARM QEMU virt"); + barebox_set_hostname(hostname); + + return 0; +} +postcore_initcall(virt_core_init); + +static int virt_mmu_enable(void) +{ + /* Mapping all periph range */ + arch_remap_range(0x09000000, 0x01000000, DEV_MEM); + + /* Mapping all flash range */ + arch_remap_range(0x00000000, 0x08000000, DEV_MEM); + + mmu_enable(); + + return 0; +} +postmmu_initcall(virt_mmu_enable); diff --git a/arch/arm/configs/qemu_virt64_defconfig b/arch/arm/configs/qemu_virt64_defconfig new file mode 100644 index 0000000..cdce253 --- /dev/null +++ b/arch/arm/configs/qemu_virt64_defconfig @@ -0,0 +1,55 @@ +CONFIG_ARCH_QEMU=y +CONFIG_BAREBOX_MAX_IMAGE_SIZE=0x05000000 +CONFIG_AEABI=y +CONFIG_ARM_OPTIMZED_STRING_FUNCTIONS=y +CONFIG_MMU=y +# CONFIG_MMU_EARLY is not set +CONFIG_BAREBOX_MAX_BARE_INIT_SIZE=0x01000000 +CONFIG_MEMORY_LAYOUT_FIXED=y +CONFIG_STACK_BASE=0x60000000 +CONFIG_MALLOC_BASE=0x50000000 +CONFIG_PROMPT="virt64: " +CONFIG_HUSH_FANCY_PROMPT=y +CONFIG_CMDLINE_EDITING=y +CONFIG_AUTO_COMPLETE=y +CONFIG_MENU=y +CONFIG_PASSWORD=y +CONFIG_PARTITION=y +CONFIG_DEFAULT_ENVIRONMENT_GENERIC_NEW=y +CONFIG_DEFAULT_ENVIRONMENT_PATH="arch/arm/boards/virt/env" +CONFIG_DEBUG_INFO=y +CONFIG_LONGHELP=y +# CONFIG_CMD_BOOTM is not set +# 
CONFIG_CMD_BOOTU is not set +# CONFIG_CMD_MOUNT is not set +# CONFIG_CMD_UMOUNT is not set +# CONFIG_CMD_CAT is not set +# CONFIG_CMD_CD is not set +# CONFIG_CMD_CP is not set +# CONFIG_CMD_LS is not set +# CONFIG_CMD_MKDIR is not set +# CONFIG_CMD_PWD is not set +# CONFIG_CMD_RM is not set +# CONFIG_CMD_RMDIR is not set +# CONFIG_CMD_FALSE is not set +# CONFIG_CMD_TEST is not set +# CONFIG_CMD_TRUE is not set +# CONFIG_CMD_CLEAR is not set +# CONFIG_CMD_ECHO is not set +CONFIG_CMD_CRC=y +CONFIG_CMD_CRC_CMP=y +# CONFIG_CMD_MD is not set +# CONFIG_CMD_MEMCMP is not set +# CONFIG_CMD_MEMCPY is not set +# CONFIG_CMD_MEMSET is not set +# CONFIG_CMD_MW is not set +CONFIG_SERIAL_AMBA_PL011=y +# CONFIG_SPI is not set +CONFIG_MTD=y +CONFIG_DRIVER_CFI=y +CONFIG_DRIVER_CFI_BANK_WIDTH_8=y +CONFIG_CFI_BUFFER_WRITE=y +CONFIG_NAND=y +# CONFIG_FS_RAMFS is not set +CONFIG_DIGEST_SHA1_GENERIC=y +CONFIG_DIGEST_SHA256_GENERIC=y diff --git a/arch/arm/mach-qemu/Kconfig b/arch/arm/mach-qemu/Kconfig new file mode 100644 index 0000000..d30bae4 --- /dev/null +++ b/arch/arm/mach-qemu/Kconfig @@ -0,0 +1,18 @@ +if ARCH_QEMU + +config ARCH_TEXT_BASE + hex + default 0x40000000 + +choice + prompt "ARM Board type" + +config MACH_QEMU_VIRT64 + bool "QEMU arm64 virt machine" + select CPU_V8 + select SYS_SUPPORTS_64BIT_KERNEL + select ARM_AMBA + select HAVE_CONFIGURABLE_MEMORY_LAYOUT + +endchoice +endif diff --git a/arch/arm/mach-qemu/Makefile b/arch/arm/mach-qemu/Makefile new file mode 100644 index 0000000..29e5f35 --- /dev/null +++ b/arch/arm/mach-qemu/Makefile @@ -0,0 +1,2 @@ +obj-$(CONFIG_MACH_QEMU_VIRT64) += virt_devices.o +lwl-$(CONFIG_MACH_QEMU_VIRT64) += virt_lowlevel.o diff --git a/arch/arm/mach-qemu/include/mach/debug_ll.h b/arch/arm/mach-qemu/include/mach/debug_ll.h new file mode 100644 index 0000000..89b0692 --- /dev/null +++ b/arch/arm/mach-qemu/include/mach/debug_ll.h @@ -0,0 +1,24 @@ +/* + * Copyright 2013 Jean-Christophe PLAGNIOL-VILLARD <plagniol@jcrosoft.com> + * + * GPLv2 only + 
*/ + +#ifndef __MACH_DEBUG_LL_H__ +#define __MACH_DEBUG_LL_H__ + +#include <linux/amba/serial.h> +#include <io.h> + +#define DEBUG_LL_PHYS_BASE 0x10000000 +#define DEBUG_LL_PHYS_BASE_RS1 0x1c000000 + +#ifdef MP +#define DEBUG_LL_UART_ADDR DEBUG_LL_PHYS_BASE +#else +#define DEBUG_LL_UART_ADDR DEBUG_LL_PHYS_BASE_RS1 +#endif + +#include <asm/debug_ll_pl011.h> + +#endif diff --git a/arch/arm/mach-qemu/include/mach/devices.h b/arch/arm/mach-qemu/include/mach/devices.h new file mode 100644 index 0000000..9872c61 --- /dev/null +++ b/arch/arm/mach-qemu/include/mach/devices.h @@ -0,0 +1,13 @@ +/* + * Copyright (C) 2016 Raphaël Poggi <poggi.raph@gmail.com> + * + * GPLv2 only + */ + +#ifndef __ASM_ARCH_DEVICES_H__ +#define __ASM_ARCH_DEVICES_H__ + +void virt_add_ddram(u32 size); +void virt_register_uart(unsigned id); + +#endif /* __ASM_ARCH_DEVICES_H__ */ diff --git a/arch/arm/mach-qemu/virt_devices.c b/arch/arm/mach-qemu/virt_devices.c new file mode 100644 index 0000000..999f463 --- /dev/null +++ b/arch/arm/mach-qemu/virt_devices.c @@ -0,0 +1,30 @@ +/* + * Copyright (C) 2016 Raphaël Poggi <poggi.raph@gmail.com> + * + * GPLv2 only + */ + +#include <common.h> +#include <linux/amba/bus.h> +#include <asm/memory.h> +#include <mach/devices.h> +#include <linux/ioport.h> + +void virt_add_ddram(u32 size) +{ + arm_add_mem_device("ram0", 0x40000000, size); +} + +void virt_register_uart(unsigned id) +{ + resource_size_t start; + + switch (id) { + case 0: + start = 0x09000000; + break; + default: + return; + } + amba_apb_device_add(NULL, "uart-pl011", id, start, 4096, NULL, 0); +} diff --git a/arch/arm/mach-qemu/virt_lowlevel.c b/arch/arm/mach-qemu/virt_lowlevel.c new file mode 100644 index 0000000..6f695a5 --- /dev/null +++ b/arch/arm/mach-qemu/virt_lowlevel.c @@ -0,0 +1,19 @@ +/* + * Copyright (C) 2013 Jean-Christophe PLAGNIOL-VILLARD <plagnio@jcrosoft.com> + * + * GPLv2 only + */ + +#include <common.h> +#include <linux/sizes.h> +#include <asm/barebox-arm-head.h> +#include 
<asm/barebox-arm.h> +#include <asm/system_info.h> + +void barebox_arm_reset_vector(void) +{ + arm_cpu_lowlevel_init(); + arm_setup_stack(STACK_BASE); + + barebox_arm_entry(0x40000000, SZ_512M, NULL); +} -- 2.1.0 _______________________________________________ barebox mailing list barebox@lists.infradead.org http://lists.infradead.org/mailman/listinfo/barebox ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: 2016-06-14 7:06 Raphael Poggi ` (11 preceding siblings ...) 2016-06-14 7:06 ` [PATCH v2 12/12] arm: boards: add mach-qemu and virt64 board Raphael Poggi @ 2016-06-24 8:17 ` Raphaël Poggi 2016-06-24 11:49 ` Re: Sascha Hauer 12 siblings, 1 reply; 19+ messages in thread From: Raphaël Poggi @ 2016-06-24 8:17 UTC (permalink / raw) To: barebox, Sascha Hauer Hi Sascha, Besides the comments on [PATCH 01/12] and [PATCH 03/12], do you have any comments about the series? I have a v3 series ready to be sent (with your recent suggestions). Thanks, Raphaël 2016-06-14 9:06 GMT+02:00 Raphael Poggi <poggi.raph@gmail.com>: > Change since v1: > PATCH 2/12: remove hunk which belongs to patch adding mach-qemu > > PATCH 3/12: remove unused files > > PATCH 4/12: create lowlevel64 > > PATCH 11/12: create pgtables64 (nothing in common with the arm32 version) > > PATCH 12/12: rename "mach-virt" => "mach-qemu" > rename board "qemu_virt64" > remove board env files > > > Hello, > > This patch series introduces a basic support for arm64. > > The arm64 code is merged in the current arch/arm directory. > I try to be iterative in the merge process, and find correct solutions > to handle both architecture at some places. > > I test the patch series by compiling arm64 virt machine and arm32 vexpress-a9 and test it > in qemu, everything seems to work.
> > Thanks, > Raphaël > > arch/arm/Kconfig | 28 ++ > arch/arm/Makefile | 30 +- > arch/arm/boards/Makefile | 1 + > arch/arm/boards/qemu-virt64/Kconfig | 8 + > arch/arm/boards/qemu-virt64/Makefile | 1 + > arch/arm/boards/qemu-virt64/init.c | 67 ++++ > arch/arm/configs/qemu_virt64_defconfig | 55 +++ > arch/arm/cpu/Kconfig | 29 +- > arch/arm/cpu/Makefile | 26 +- > arch/arm/cpu/cache-armv8.S | 168 +++++++++ > arch/arm/cpu/cache.c | 19 + > arch/arm/cpu/cpu.c | 5 + > arch/arm/cpu/cpuinfo.c | 58 ++- > arch/arm/cpu/exceptions_64.S | 127 +++++++ > arch/arm/cpu/interrupts.c | 47 +++ > arch/arm/cpu/lowlevel_64.S | 40 ++ > arch/arm/cpu/mmu.h | 54 +++ > arch/arm/cpu/mmu_64.c | 333 +++++++++++++++++ > arch/arm/cpu/start.c | 2 + > arch/arm/include/asm/bitops.h | 5 + > arch/arm/include/asm/cache.h | 9 + > arch/arm/include/asm/mmu.h | 14 +- > arch/arm/include/asm/pgtable64.h | 140 +++++++ > arch/arm/include/asm/system.h | 46 ++- > arch/arm/include/asm/system_info.h | 38 ++ > arch/arm/lib64/Makefile | 10 + > arch/arm/lib64/armlinux.c | 275 ++++++++++++++ > arch/arm/lib64/asm-offsets.c | 16 + > arch/arm/lib64/barebox.lds.S | 125 +++++++ > arch/arm/lib64/bootm.c | 572 +++++++++++++++++++++++++++++ > arch/arm/lib64/copy_template.S | 192 ++++++++++ > arch/arm/lib64/div0.c | 27 ++ > arch/arm/lib64/memcpy.S | 74 ++++ > arch/arm/lib64/memset.S | 215 +++++++++++ > arch/arm/lib64/module.c | 98 +++++ > arch/arm/mach-qemu/Kconfig | 18 + > arch/arm/mach-qemu/Makefile | 2 + > arch/arm/mach-qemu/include/mach/debug_ll.h | 24 ++ > arch/arm/mach-qemu/include/mach/devices.h | 13 + > arch/arm/mach-qemu/virt_devices.c | 30 ++ > arch/arm/mach-qemu/virt_lowlevel.c | 19 + > 41 files changed, 3044 insertions(+), 16 deletions(-) > > > _______________________________________________ > barebox mailing list > barebox@lists.infradead.org > http://lists.infradead.org/mailman/listinfo/barebox _______________________________________________ barebox mailing list barebox@lists.infradead.org 
http://lists.infradead.org/mailman/listinfo/barebox ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: 2016-06-24 8:17 ` Raphaël Poggi @ 2016-06-24 11:49 ` Sascha Hauer 0 siblings, 0 replies; 19+ messages in thread From: Sascha Hauer @ 2016-06-24 11:49 UTC (permalink / raw) To: Raphaël Poggi; +Cc: barebox Hi Raphaël, On Fri, Jun 24, 2016 at 10:17:45AM +0200, Raphaël Poggi wrote: > Hi Sascha, > > Beside the comments on [PATCH 01/12] and [PATCH 03/12], do you have > any comments about the series ? I have a v3 series ready to be sent > (with your recent suggestions). No more comments for now, go ahead with the new series. Sascha -- Pengutronix e.K. | | Industrial Linux Solutions | http://www.pengutronix.de/ | Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0 | Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 | _______________________________________________ barebox mailing list barebox@lists.infradead.org http://lists.infradead.org/mailman/listinfo/barebox ^ permalink raw reply [flat|nested] 19+ messages in thread