mail archive of the barebox mailing list
From: Ahmad Fatoum <a.fatoum@pengutronix.de>
To: Renaud Barbier <Renaud.Barbier@ametek.com>,
	Barebox List <barebox@lists.infradead.org>
Cc: Lucas Stach <lst@pengutronix.de>
Subject: Re: PCIE on LS1021A
Date: Tue, 20 Jan 2026 14:41:06 +0100	[thread overview]
Message-ID: <b4339f2e-e306-4f84-a43b-751ef7b4ed55@pengutronix.de> (raw)
In-Reply-To: <SJ0PR07MB9766897F421E8809B42266ABEC8EA@SJ0PR07MB9766.namprd07.prod.outlook.com>

Hello Renaud,

On 1/13/26 7:26 PM, Renaud Barbier wrote:
> Changing the NVMe to the PCIe2 bus and fixing a few things in the MMU support, I am now able to detect the NVMe:
> 
> nvme pci-126f:2263.0: serial: A012410180629000000
> nvme pci-126f:2263.0: model: SM681GEF AGS
> nvme pci-126f:2263.0: firmware: TFX7GB
> 
> barebox:/ ls /dev/nvme0n1
> barebox:/ ls /dev/nvme0n1*
> /dev/nvme0n1                        /dev/nvme0n1.0
> /dev/nvme0n1.1                      /dev/nvme0n1.2
> /dev/nvme0n1.3                      /dev/nvme0n1.4
> ...
> 
> Thanks to the following remapping:
> /* PCIe1 config and memory area remapping */
> map_io_sections(0x4000000000ULL, IOMEM(0x24000000), 192 << 20); /* PCIe1 config space */
> //map_io_sections(0x4040000000ULL, IOMEM(0x40000000), 128 << 20); /* PCIe1 mem space */
> 
> /* PCIe2 config and memory area remapping */
> map_io_sections(0x4800000000ULL, IOMEM(0x34000000), 192 << 20); /* PCIe2 config space */
> map_io_sections(0x4840000000ULL, IOMEM(0x50000000), 128 << 20); /* PCIe2 mem space */
> 
> For some reason, I had to comment out the remapping of the PCIe1 MEM space as the system hangs just after detecting the NVME device.
> The PCIe1 device node is not even enabled. 
> If you have a clue, let me know.

I don't have an idea off the top of my head, sorry.
If you have something roughly working, it would be good if you could
check that it works with qemu-system-arm -M virt,highmem=on and send an
initial patch series?
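
For reference, a qemu invocation along these lines should exercise the
highmem PCI window with an emulated NVMe device attached (untested here;
the image and disk file names are placeholders, adjust to your build):

```shell
# Boot a barebox image on the ARM virt machine with highmem PCI
# enabled and an emulated NVMe drive on the PCIe bus.
# Paths and names below are placeholders.
qemu-system-arm -M virt,highmem=on -cpu cortex-a15 -m 1G \
    -kernel images/barebox-dt-2nd.img \
    -drive if=none,id=disk0,file=nvme.img,format=raw \
    -device nvme,drive=disk0,serial=deadbeef \
    -nographic
```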

Cheers,
Ahmad

> 
> Cheers,
> Renaud
> 
> 
> 
> 
>> -----Original Message-----
>> From: barebox <barebox-bounces@lists.infradead.org> On Behalf Of Renaud
>> Barbier
>> Sent: 07 January 2026 09:44
>> To: Ahmad Fatoum <a.fatoum@pengutronix.de>; Barebox List
>> <barebox@lists.infradead.org>
>> Cc: Lucas Stach <lst@pengutronix.de>
>> Subject: RE: PCIE on LS1021A
>>
>>
>> Based on your information and U-Boot, I have started to work on the LPAE
>> support. So far it is full of debugging and hacks.
>>
>> It is based on the mmu_32.c file. As I have failed to use the 3 MMU tables,
>> at present I am using only 2, as in U-Boot.
>> The 64-bit PCI space is remapped with:
>> map_io_sections(0x4000000000ULL, IOMEM(0x24000000UL), 192 << 20);
>>
>> To detect the NVME device, the virtual address 0x24000000 is hard-coded
>> into the functions dw_pcie_[wr|rd]_other_conf of
>> drivers/pci/pcie-designware-host.c as follows:
>> if (bus->primary == pp->root_bus_nr) {
>>                   type = PCIE_ATU_TYPE_CFG0;
>>                   cpu_addr = pp->cfg0_base;
>>                   cfg_size = pp->cfg0_size;
>>                   pp->va_cfg0_base = IOMEM(0x24000000); /* XXX */
>>                   va_cfg_base = pp->va_cfg0_base;
>>
>> What is the method to pass the address to the driver?
>>
>> And I get the following:
>> layerscape-pcie 3400000.pcie@3400000.of: host bridge /soc/pcie@3400000
>> ranges:
>> layerscape-pcie 3400000.pcie@3400000.of: Parsing ranges property...
>> layerscape-pcie 3400000.pcie@3400000.of:       IO
>> 0x4000010000..0x400001ffff -> 0x0000000000
>> layerscape-pcie 3400000.pcie@3400000.of:      MEM
>> 0x4040000000..0x407fffffff -> 0x0040000000
>>
>> ERROR: io_bus_addr = 0x0, io_base = 0x4000010000
>> ERROR: mem_bus_addr = 0x4040000000 -> Based on Linux output, the
>> mem_bus_addr should be as above 0x4000.0000 to be programmed in the
>> ATU target register.
>> ERROR: mem_base = 0x4040000000, offset = 0x0
>>
>> ERROR: layerscape-pcie 3400000.pcie@3400000.of: iATU unroll: disabled
>>
>> pci: pci_scan_bus for bus 0
>> pci:  last_io = 0x00010000, last_mem = 0x40000000, last_mem_pref =
>> 0x00000000
>> pci: class = 00000604, hdr_type = 00000001
>> pci: 00:00 [1957:0e0a]
>> pci: pci_scan_bus for bus 1
>> pci:  last_io = 0x00010000, last_mem = 0x40000000, last_mem_pref =
>> 0x00000000
>>
>> pci: class = 00000108, hdr_type = 00000000
>> pci: 01:00 [126f:2263] -> NVME device found
>> pci: pbar0: mask=ffffc004 NP-MEM 16384 bytes
>> ERROR: pci: &&&  sub = 0x2263, 0x126f kind = NP-MEM&&&
>> ERROR: pci: &&& write BAR 0x10 = 0x40000000 &&& ...
>> pci: pci_scan_bus returning with max=02
>> pci: bridge NP limit at 0x40100000
>> pci: bridge IO limit at 0x00010000
>> pci: pbar0: mask=ff000000 NP-MEM 16777216 bytes
>> pci: pbar1: mask=fc000000 NP-MEM 67108864 bytes
>> pci: pci_scan_bus returning with max=02
>> ERROR: nvme pci-126f:2263.0: enabling bus mastering
>>
>> Then, the system hangs on the instruction 3 lines below:
>> ERROR: nvme_pci_enable : 0x4000001c -> Fails to access the NVME CSTS
>> register. It does not matter if mem_bus_addr is set to 0x4000.0000 to
>> program the ATU to translate the address 0x40.4000.0000 to 0x4000.0000.
>> if (readl(dev->bar + NVME_REG_CSTS) == -1)
>>
>> 0x4000.0000 is also the quadSPI memory area. So I guess I should remap the
>> access too.
>>
>> Unfortunately, my work is now at a stop, as there is a hardware failure
>> on my system.
>>
>> Note: the MMU may not be set up properly, as out-of-band access fails
>> with a TX timeout. I can reach the prompt after the NVMe probing has failed.
>>
>>
>>
> 

-- 
Pengutronix e.K.                  |                             |
Steuerwalder Str. 21              | http://www.pengutronix.de/  |
31137 Hildesheim, Germany         | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686  | Fax:   +49-5121-206917-5555 |





Thread overview: 13+ messages
2022-12-09 17:31 Renaud Barbier
2022-12-09 18:01 ` Renaud Barbier
2022-12-09 18:37 ` Ahmad Fatoum
2022-12-09 18:58   ` Renaud Barbier
2022-12-09 19:01     ` Ahmad Fatoum
2022-12-13  9:40       ` Renaud Barbier
2022-12-09 19:18   ` Ahmad Fatoum
2025-11-18 17:42     ` Renaud Barbier
2025-11-28 10:14       ` Ahmad Fatoum
     [not found]         ` <BN8PR07MB6993379BECE786EB6B07106FECA7A@BN8PR07MB6993.namprd07.prod.outlook.com>
2025-12-11 10:46           ` Ahmad Fatoum
2026-01-07  9:43         ` Renaud Barbier
2026-01-13 18:26           ` Renaud Barbier
2026-01-20 13:41             ` Ahmad Fatoum [this message]
