mail archive of the barebox mailing list
* PCI memory mapping
@ 2025-04-07 14:55 Renaud Barbier
  2025-04-08  9:54 ` Lucas Stach
  0 siblings, 1 reply; 3+ messages in thread
From: Renaud Barbier @ 2025-04-07 14:55 UTC (permalink / raw)
  To: Barebox List

Hello,
Barebox version: 2024-09

I am porting the Linux PCIe driver for a Broadcom Cortex-A9 (ARMv7) chip.
So far I am able to detect the bridge and the NVMe device attached to it:

pci: pci_scan_bus for bus 0
pci:  last_io = 0x00000000, last_mem = 0x20000000, last_mem_pref = 0x00000000
pci: class = 00000604, hdr_type = 00000001
pci: 00:00 [14e4:b170]

pci: pci_scan_bus for bus 1
pci:  last_io = 0x00000000, last_mem = 0x20000000, last_mem_pref = 0x00000000
pci: class = 00000108, hdr_type = 00000000
pci: 01:00 [126f:2263]
ERROR: pci: last_mem = 0x20000000, 16384
pci: pbar0: mask=ffffc004 NP-MEM 16384 bytes
...
pci: class = 00000108, hdr_type = 00000000
pci: 01:f8 [126f:2263]
ERROR: pci: last_mem = 0x2007c000, 16384
pci: pbar0: mask=ffffc004 NP-MEM 16384 bytes
pci: pci_scan_bus returning with max=02
pci: bridge NP limit at 0x20100000

The PCI memory address is assigned to the device and mapped (pci_iomap from nvme_dev_map), but any access to this PCI space crashes the system:

pci: pci_scan_bus returning with max=02
ERROR: nvme_dev_map: bar = 0x20000000
nvme pci-126f:2263.0: enabling bus mastering
ERROR: pci:  __pci_set_master
ERROR: nvme_pci_enable: address: 0x2000001c
unable to handle paging request at address 0x2000001c
pc : [<9fe3d6e4>]    lr : [<9fe3d6d8>]
sp : 9fff7f38  ip : 00000002  fp : 00000000
r10: 0006b52c  r9 : 40000000  r8 : 9fea9b88
r7 : 7fe18500  r6 : 7fe295a8  r5 : 9fe8e5b8  r4 : 7fe29538
r3 : 20000000  r2 : 00000000  r1 : 0000000a  r0 : 00000027
Flags: nZCv  IRQs off  FIQs off  Mode SVC_32
[<9fe3d6e4>] (nvme_probe+0xd8/0x4c4) from [<9fe14650>] (device_probe+0x64/0x15c)
[<9fe14650>] (device_probe+0x64/0x15c) from [<9fe14778>] (match+0x30/0x68)
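
For reference, the faulting address 0x2000001c is BAR0 + 0x1c, i.e. the
read of the NVMe controller status register (CSTS, offset 0x1c) that
nvme_pci_enable() performs right after the mapping. Roughly, the path
looks like this (a sketch from memory of the barebox NVMe code; exact
names in the 2024-09 tree may differ):

	/* nvme_dev_map(): hand BAR0 to the CPU as an MMIO pointer */
	dev->bar = pci_iomap(pdev, 0);

	/* nvme_pci_enable(): the first MMIO access to the endpoint;
	 * this readl is what faults at 0x2000001c */
	if (readl(dev->bar + NVME_REG_CSTS) == -1)
		return -ENODEV;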

Tracing the problem, I found that the crash goes away if I limit the
scanning in pci_scan_bus to a single device (8 functions):

-       for (devfn = 0; devfn < 0xff; ++devfn) {
+       for (devfn = 0; devfn < 0x8; ++devfn) {

(With a limit of 0x10 I see the crash again.)
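
For context, devfn packs the device number in the upper five bits and
the function number in the lower three, so a limit of 0x8 covers only
device 0, while 0x10 probes device 1 as well. In terms of the standard
PCI_SLOT/PCI_FUNC macros:

	slot = PCI_SLOT(devfn);	/* device number: devfn >> 3 */
	func = PCI_FUNC(devfn);	/* function number: devfn & 0x07 */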

With that limit in place, the NVMe device is detected:
nvme pci-126f:2263.0: serial: A012410180620000000
nvme pci-126f:2263.0: model: SM681GEF AGS
nvme pci-126f:2263.0: firmware: TFX7GB

Has anybody seen this problem before? (I have not seen it on the LS1046A.)
I do not see much difference from the master branch in drivers/pci and drivers/nvme.

Cheers,
Renaud


* Re: PCI memory mapping
  2025-04-07 14:55 PCI memory mapping Renaud Barbier
@ 2025-04-08  9:54 ` Lucas Stach
  2025-04-09 10:00   ` Renaud Barbier
  0 siblings, 1 reply; 3+ messages in thread
From: Lucas Stach @ 2025-04-08  9:54 UTC (permalink / raw)
  To: Renaud Barbier, Barebox List

Hi Renaud,

On Monday, 2025-04-07 at 14:55 +0000, Renaud Barbier wrote:
> Hello,
> Barebox version: 2024-09
> 
> I am porting the Linux PCIe driver for a Broadcom Cortex-A9 (ARMv7) chip.
> So far I am able to detect the bridge and the NVMe device attached to it:
> 
> pci: pci_scan_bus for bus 0
> pci:  last_io = 0x00000000, last_mem = 0x20000000, last_mem_pref = 0x00000000
> pci: class = 00000604, hdr_type = 00000001
> pci: 00:00 [14e4:b170]
> 
> pci: pci_scan_bus for bus 1
> pci:  last_io = 0x00000000, last_mem = 0x20000000, last_mem_pref = 0x00000000
> pci: class = 00000108, hdr_type = 00000000
> pci: 01:00 [126f:2263]
> ERROR: pci: last_mem = 0x20000000, 16384
> pci: pbar0: mask=ffffc004 NP-MEM 16384 bytes
> ...
> pci: class = 00000108, hdr_type = 00000000
> pci: 01:f8 [126f:2263]
> ERROR: pci: last_mem = 0x2007c000, 16384
> pci: pbar0: mask=ffffc004 NP-MEM 16384 bytes
> pci: pci_scan_bus returning with max=02
> pci: bridge NP limit at 0x20100000
> 
I highly doubt that your NVMe device actually occupies all those BDF
addresses.

Either your host driver isn't properly reporting bus timeouts on the
PCI_VENDOR_ID config space access, making it appear to the topology
walk that there are multiple devices on the bus, or, more likely given
the symptoms you report, your host driver doesn't properly set up the
DF part of the BDF in the config space requests. In that case the
first device on the bus may correctly answer all the config space
requests, which again makes it look like there are multiple devices,
but the endpoint will actually end up configured with the BAR setup
from the last "device" on the bus. If you then try to access the MMIO
space of the first device, there is no EP configured to handle the
request, causing a bus abort.
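
For illustration, a typical ECAM-style host driver encodes the device
and function numbers into the config space address roughly like this
(a generic sketch, not the Broadcom driver; the shift values vary
between controllers):

	#define ECAM_BUS_SHIFT		20
	#define ECAM_DEVFN_SHIFT	12

	static void __iomem *cfg_map_bus(void __iomem *cfg_base,
					 unsigned int bus,
					 unsigned int devfn, int where)
	{
		/* If the devfn bits never reach the link, every devfn
		 * aliases device 0, which is the failure mode above. */
		return cfg_base + (bus << ECAM_BUS_SHIFT) +
		       (devfn << ECAM_DEVFN_SHIFT) + where;
	}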

Regards,
Lucas




* RE: PCI memory mapping
  2025-04-08  9:54 ` Lucas Stach
@ 2025-04-09 10:00   ` Renaud Barbier
  0 siblings, 0 replies; 3+ messages in thread
From: Renaud Barbier @ 2025-04-09 10:00 UTC (permalink / raw)
  To: Lucas Stach, Barebox List

For information, I had a look at the Linux (6.11.3) PCI driver. PCI bus probing (PCIe in my case) in drivers/pci/probe.c is limited to Device 0 by only_one_child(), per the PCIe specification (my debug print shown as a diff):
static int only_one_child(struct pci_bus *bus)
{
	struct pci_dev *bridge = bus->self;

	/*
	 * Systems with unusual topologies set PCI_SCAN_ALL_PCIE_DEVS so
	 * we scan for all possible devices, not just Device 0.
	 */
	if (pci_has_flag(PCI_SCAN_ALL_PCIE_DEVS))
		return 0;

	/*
	 * A PCIe Downstream Port normally leads to a Link with only Device
	 * 0 on it (PCIe spec r3.1, sec 7.3.1).  As an optimization, scan
	 * only for Device 0 in that situation.
	 */
	if (bridge && pci_is_pcie(bridge) && pcie_downstream_port(bridge))
		return 1;

	return 0;
}

int pci_scan_slot(struct pci_bus *bus, int devfn)
{
	struct pci_dev *dev;
	int fn = 0, nr = 0;

-	if (only_one_child(bus) && (devfn > 0))
+	if (only_one_child(bus) && (devfn > 0)) {
+		pr_err("XXX: %s one child devfn = %d\n", __func__, devfn);
 		return 0; /* Already scanned the entire slot */
+	}
...
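
A similar guard could presumably be added to the pci_scan_bus loop in
barebox. A minimal sketch (how to tell a root bus from a bus behind a
PCIe downstream port in the barebox structures is an assumption on my
part):

	for (devfn = 0; devfn < 0xff; ++devfn) {
		/* A PCIe link below a downstream port normally has
		 * only Device 0 on it (PCIe spec r3.1, sec 7.3.1),
		 * so skip the other device numbers. */
		if (bus->parent_bus && PCI_SLOT(devfn) > 0)
			continue;
		...
	}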


> -----Original Message-----
> From: Lucas Stach <l.stach@pengutronix.de>
> Sent: 08 April 2025 10:54
> To: Renaud Barbier <Renaud.Barbier@ametek.com>; Barebox List
> <barebox@lists.infradead.org>
> Subject: Re: PCI memory mapping
> 
> Hi Renaud,
> 
> On Monday, 2025-04-07 at 14:55 +0000, Renaud Barbier wrote:
> > Hello,
> > Barebox version: 2024-09
> >
> > I am porting the Linux PCIe driver for a Broadcom Cortex-A9 (ARMv7) chip.
> > So far I am able to detect the bridge and the NVMe device attached to it:
> >
> > pci: pci_scan_bus for bus 0
> > pci:  last_io = 0x00000000, last_mem = 0x20000000, last_mem_pref =
> > 0x00000000
> > pci: class = 00000604, hdr_type = 00000001
> > pci: 00:00 [14e4:b170]
> >
> > pci: pci_scan_bus for bus 1
> > pci:  last_io = 0x00000000, last_mem = 0x20000000, last_mem_pref =
> > 0x00000000
> > pci: class = 00000108, hdr_type = 00000000
> > pci: 01:00 [126f:2263]
> > ERROR: pci: last_mem = 0x20000000, 16384
> > pci: pbar0: mask=ffffc004 NP-MEM 16384 bytes
> > ...
> > pci: class = 00000108, hdr_type = 00000000
> > pci: 01:f8 [126f:2263]
> > ERROR: pci: last_mem = 0x2007c000, 16384
> > pci: pbar0: mask=ffffc004 NP-MEM 16384 bytes
> > pci: pci_scan_bus returning with max=02
> > pci: bridge NP limit at 0x20100000
> >
> I highly doubt that your NVMe device actually occupies all those BDF
> addresses.
> 
> Either your host driver isn't properly reporting bus timeouts on the
> PCI_VENDOR_ID config space access, making it appear to the topology
> walk that there are multiple devices on the bus, or, more likely given
> the symptoms you report, your host driver doesn't properly set up the
> DF part of the BDF in the config space requests. In that case the
> first device on the bus may correctly answer all the config space
> requests, which again makes it look like there are multiple devices,
> but the endpoint will actually end up configured with the BAR setup
> from the last "device" on the bus. If you then try to access the MMIO
> space of the first device, there is no EP configured to handle the
> request, causing a bus abort.
> 
> Regards,
> Lucas

