From: Ahmad Fatoum <a.fatoum@pengutronix.de>
To: Renaud Barbier <Renaud.Barbier@ametek.com>,
Barebox List <barebox@lists.infradead.org>
Cc: Lucas Stach <lst@pengutronix.de>
Subject: Re: PCIE on LS1021A
Date: Tue, 3 Feb 2026 14:12:15 +0100 [thread overview]
Message-ID: <6d4ec426-170d-4a14-a936-d821229fd9e7@pengutronix.de> (raw)
In-Reply-To: <SJ0PR07MB976646426776C78BAF43934AEC9BA@SJ0PR07MB9766.namprd07.prod.outlook.com>
Hello Renaud,
On 2/3/26 2:06 PM, Renaud Barbier wrote:
> I have a patch ready.
> As I was not familiar with the difficulty of supporting/debugging the MMU and needed a quick workaround, the file mmu_32.c has been duplicated into mmu_lpae.c.
>
> With so many similarities between the two files, I am pretty sure you were expecting an update of mmu_32.c with LPAE support.
>
> Please advise whether you would like to receive the patch now or wait a bit longer.
I expect LPAE would only be enabled for hardware that really needs it,
so having a separate mmu_32lpae.c sounds better to me than complicating
the logic inside mmu_32.c.
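
For illustration, such gating could look roughly like this; symbol, prompt and file names here are assumptions, not barebox's actual Kconfig/Makefile layout:

```
# Sketch only -- assumed Kconfig entry for the new option
config ARM_LPAE
	bool "Use LPAE page tables (40-bit physical addresses)"
	depends on CPU_V7
	help
	  Needed on SoCs like the LS1021A whose PCIe windows live
	  above the 32-bit physical address range.

# Sketch only -- assumed Makefile rule selecting the separate file
obj-$(CONFIG_ARM_LPAE) += mmu_32lpae.o
```

With a layout like this, the LPAE code is only compiled in when the option is set, and the existing mmu_32.c path stays untouched otherwise.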
I can't say for sure without seeing it first, though, so I suggest you
just send out what you have, and then we can see where to go from there.
Cheers,
Ahmad
>
> Cheers,
> Renaud
>
>> -----Original Message-----
>> From: Ahmad Fatoum <a.fatoum@pengutronix.de>
>> Sent: 02 February 2026 10:14
>> To: Renaud Barbier <Renaud.Barbier@ametek.com>; Barebox List
>> <barebox@lists.infradead.org>
>> Cc: Lucas Stach <lst@pengutronix.de>
>> Subject: Re: PCIE on LS1021A
>>
>> ***NOTICE*** This came from an external source. Use caution when
>> replying, clicking links, or opening attachments.
>>
>> Hello Renaud,
>>
>> On 2/2/26 10:57 AM, Renaud Barbier wrote:
>>> From the head of next, I got the MMU with LPAE support to work.
>>> I can prepare a patch for the MMU LPAE support and later a patch for
>>> the LS1021A PCIE support.
>>> I have not tested the code on QEMU yet.
>>>
>>> Do you require the code to be tested in QEMU before I send it?
>>
>> We'll want to test the LPAE case in CI, so it doesn't bitrot over time.
>>
>> I can help with the QEMU integration. For v1, just make sure that a single
>> user-visible CONFIG_ARM_LPAE option enables it and that, if it's disabled,
>> behavior is unmodified.
>>
>> Cheers,
>> Ahmad
>>
>>>
>>>> -----Original Message-----
>>>> From: barebox <barebox-bounces@lists.infradead.org> On Behalf Of
>>>> Renaud Barbier
>>>> Sent: 28 January 2026 16:40
>>>> To: Ahmad Fatoum <a.fatoum@pengutronix.de>; Barebox List
>>>> <barebox@lists.infradead.org>
>>>> Cc: Lucas Stach <lst@pengutronix.de>
>>>> Subject: RE: PCIE on LS1021A
>>>>
>>>>
>>>> Just to let you know, I was developing from barebox 2024.09, as this
>>>> was a requirement for our product.
>>>> I have started to move the LPAE support over and to follow the next branch.
>>>> Barebox is booting but currently failing to probe the PCIe NVME device.
>>>>
>>>> A bit more debugging and hopefully, I can get something soon.
>>>>
>>>>> -----Original Message-----
>>>>> From: Ahmad Fatoum <a.fatoum@pengutronix.de>
>>>>> Sent: 20 January 2026 13:41
>>>>> To: Renaud Barbier <Renaud.Barbier@ametek.com>; Barebox List
>>>>> <barebox@lists.infradead.org>
>>>>> Cc: Lucas Stach <lst@pengutronix.de>
>>>>> Subject: Re: PCIE on LS1021A
>>>>>
>>>>>
>>>>> Hello Renaud,
>>>>>
>>>>> On 1/13/26 7:26 PM, Renaud Barbier wrote:
>>>>>> Changing the NVME to the PCIe2 bus and fixing a few things in the
>>>>>> MMU support, I am now able to detect the NVME:
>>>>>>
>>>>>> nvme pci-126f:2263.0: serial: A012410180629000000
>>>>>> nvme pci-126f:2263.0: model: SM681GEF AGS
>>>>>> nvme pci-126f:2263.0: firmware: TFX7GB
>>>>>>
>>>>>> barebox:/ ls /dev/nvme0n1
>>>>>> barebox:/ ls /dev/nvme0n1*
>>>>>> /dev/nvme0n1 /dev/nvme0n1.0
>>>>>> /dev/nvme0n1.1 /dev/nvme0n1.2
>>>>>> /dev/nvme0n1.3 /dev/nvme0n1.4
>>>>>> ...
>>>>>>
>>>>>> Thanks to the following remapping:
>>>>>> /* PCIe1 Config and memory area remapping */
>>>>>> map_io_sections(0x4000000000ULL, IOMEM(0x24000000), 192 << 20); /* PCIE1 conf space */
>>>>>> //map_io_sections(0x4040000000ULL, IOMEM(0x40000000), 128 << 20); /* PCIE1 mem space */
>>>>>>
>>>>>> /* PCIe2 Config and memory area remapping */
>>>>>> map_io_sections(0x4800000000ULL, IOMEM(0x34000000), 192 << 20); /* PCIe2 config space */
>>>>>> map_io_sections(0x4840000000ULL, IOMEM(0x50000000), 128 << 20); /* PCIE2 mem space */
>>>>>>
>>>>>> For some reason, I had to comment out the remapping of the PCIe1 MEM
>>>>>> space, as the system hangs just after detecting the NVME device.
>>>>>> The PCIe1 device node is not even enabled.
>>>>>> If you have a clue, let me know.
>>>>>
>>>>> I don't have an idea off the top of my head, sorry.
>>>>> If you have something roughly working, it would be good if you could
>>>>> check it works with qemu-system-arm -M virt,highmem=on and send an
>>>>> initial patch series?
>>>>>
>>>>> Cheers,
>>>>> Ahmad
>>>>>
>>>>>>
>>>>>> Cheers,
>>>>>> Renaud
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: barebox <barebox-bounces@lists.infradead.org> On Behalf Of
>>>>>>> Renaud Barbier
>>>>>>> Sent: 07 January 2026 09:44
>>>>>>> To: Ahmad Fatoum <a.fatoum@pengutronix.de>; Barebox List
>>>>>>> <barebox@lists.infradead.org>
>>>>>>> Cc: Lucas Stach <lst@pengutronix.de>
>>>>>>> Subject: RE: PCIE on LS1021A
>>>>>>>
>>>>>>>
>>>>>>> Based on your information and U-Boot, I have started to work on
>>>>>>> the LPAE support. So far it is full of debugging and hacks.
>>>>>>>
>>>>>>> It is based on the mmu_32.c file. As I have failed to use the three
>>>>>>> MMU tables, at present I am using only two, as in U-Boot.
>>>>>>> The 64-bit PCI space is remapped with:
>>>>>>> map_io_sections(0x4000000000ULL, IOMEM(0x24000000UL), 192 << 20);
>>>>>>>
>>>>>>> To detect the NVME device, the virtual address 0x24000000 is
>>>>>>> hard-coded into the functions dw_pcie_[wr|rd]_other_conf of
>>>>>>> drivers/pci/pcie-designware-host.c as follows:
>>>>>>> if (bus->primary == pp->root_bus_nr) {
>>>>>>> type = PCIE_ATU_TYPE_CFG0;
>>>>>>> cpu_addr = pp->cfg0_base;
>>>>>>> cfg_size = pp->cfg0_size;
>>>>>>> pp->va_cfg0_base = IOMEM(0x24000000); /* XXX */
>>>>>>> va_cfg_base = pp->va_cfg0_base;
>>>>>>>
>>>>>>> What is the method to pass the address to the driver?
>>>>>>>
>>>>>>> And I get the following:
>>>>>>> layerscape-pcie 3400000.pcie@3400000.of: host bridge /soc/pcie@3400000 ranges:
>>>>>>> layerscape-pcie 3400000.pcie@3400000.of: Parsing ranges property...
>>>>>>> layerscape-pcie 3400000.pcie@3400000.of: IO 0x4000010000..0x400001ffff -> 0x0000000000
>>>>>>> layerscape-pcie 3400000.pcie@3400000.of: MEM 0x4040000000..0x407fffffff -> 0x0040000000
>>>>>>>
>>>>>>> ERROR: io_bus_addr = 0x0, io_base = 0x4000010000
>>>>>>> ERROR: mem_bus_addr = 0x4040000000 -> Based on Linux output, the
>>>>>>> mem_bus_addr should be 0x4000.0000, to be programmed in the ATU
>>>>>>> target register.
>>>>>>> ERROR: mem_base = 0x4040000000, offset = 0x0
>>>>>>>
>>>>>>> ERROR: layerscape-pcie 3400000.pcie@3400000.of: iATU unroll: disabled
>>>>>>>
>>>>>>> pci: pci_scan_bus for bus 0
>>>>>>> pci: last_io = 0x00010000, last_mem = 0x40000000, last_mem_pref = 0x00000000
>>>>>>> pci: class = 00000604, hdr_type = 00000001
>>>>>>> pci: 00:00 [1957:0e0a]
>>>>>>> pci: pci_scan_bus for bus 1
>>>>>>> pci: last_io = 0x00010000, last_mem = 0x40000000, last_mem_pref = 0x00000000
>>>>>>>
>>>>>>> pci: class = 00000108, hdr_type = 00000000
>>>>>>> pci: 01:00 [126f:2263] -> NVME device found
>>>>>>> pci: pbar0: mask=ffffc004 NP-MEM 16384 bytes
>>>>>>> ERROR: pci: &&& sub = 0x2263, 0x126f kind = NP-MEM&&&
>>>>>>> ERROR: pci: &&& write BAR 0x10 = 0x40000000 &&& ...
>>>>>>> pci: pci_scan_bus returning with max=02
>>>>>>> pci: bridge NP limit at 0x40100000
>>>>>>> pci: bridge IO limit at 0x00010000
>>>>>>> pci: pbar0: mask=ff000000 NP-MEM 16777216 bytes
>>>>>>> pci: pbar1: mask=fc000000 NP-MEM 67108864 bytes
>>>>>>> pci: pci_scan_bus returning with max=02
>>>>>>> ERROR: nvme pci-126f:2263.0: enabling bus mastering
>>>>>>>
>>>>>>> Then, the system hangs on the instruction 3 lines below:
>>>>>>> ERROR: nvme_pci_enable : 0x4000001c -> Fails to access the NVME
>>>>>>> CSTS register. It does not matter if mem_bus_addr is set to
>>>>>>> 0x4000.0000 to program the ATU to translate the address
>>>>>>> 0x40.4000.0000 to 0x4000.0000.
>>>>>>> if (readl(dev->bar + NVME_REG_CSTS) == -1)
>>>>>>>
>>>>>>> 0x4000.0000 is also the quadSPI memory area. So I guess I should
>>>>>>> remap the access too.
>>>>>>>
>>>>>>> Unfortunately, my work is now at a stop, as there is a hardware
>>>>>>> failure on my system.
>>>>>>>
>>>>>>> Note: the MMU may not be set up properly, as the out-of-band fails
>>>>>>> with a TX timeout. I can reach the prompt after the NVME probing fails.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>> --
>>>>> Pengutronix e.K. | |
>>>>> Steuerwalder Str. 21 |
>>>>>
>>>>
>> https://urldefense.com/v3/__http://www.pengutronix.de/__;!!HKOSU0g!Dl
>>>> Z
>>>>>
>>>>
>> b2oy6FdvgOu3JutuBMr0zf4ib6x_vlFyfBU3Fgcpgud4iuzA7FLewuR6dBQULYVe
>>>>> xgDvoQqAlgtgyY1fAds9Tovg$ |
>>>>> 31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
>>>>> Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
>>>
>>
>> --
>> Pengutronix e.K. | |
>> Steuerwalder Str. 21 | http://www.pengutronix.de/ |
>> 31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
>> Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
>
--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
Thread overview: 18+ messages (latest: 2026-02-03 13:12 UTC)
2022-12-09 17:31 Renaud Barbier
2022-12-09 18:01 ` Renaud Barbier
2022-12-09 18:37 ` Ahmad Fatoum
2022-12-09 18:58 ` Renaud Barbier
2022-12-09 19:01 ` Ahmad Fatoum
2022-12-13 9:40 ` Renaud Barbier
2022-12-09 19:18 ` Ahmad Fatoum
2025-11-18 17:42 ` Renaud Barbier
2025-11-28 10:14 ` Ahmad Fatoum
[not found] ` <BN8PR07MB6993379BECE786EB6B07106FECA7A@BN8PR07MB6993.namprd07.prod.outlook.com>
2025-12-11 10:46 ` Ahmad Fatoum
2026-01-07 9:43 ` Renaud Barbier
2026-01-13 18:26 ` Renaud Barbier
2026-01-20 13:41 ` Ahmad Fatoum
2026-01-28 16:39 ` Renaud Barbier
2026-02-02 9:57 ` Renaud Barbier
2026-02-02 10:13 ` Ahmad Fatoum
2026-02-03 13:06 ` Renaud Barbier
2026-02-03 13:12 ` Ahmad Fatoum [this message]