
Test SATA adapter (I/O Crest 4 port Marvell 9215) #1

Closed · geerlingguy opened this issue Oct 22, 2020 · 80 comments

geerlingguy commented Oct 22, 2020

6by9 on the Pi Forums mentioned:

For those wanting to know about PCI-e compatibility, I have one here with a Pericom PI7C9X 1 to 3 way PCI-e bridge, and Marvell 9215 4 port SATA card connected to that. (My VL805 USB3 card is still to be delivered). With a couple of extra kernel modules enabled (mainly CONFIG_ATA, and CONFIG_SATA_AHCI) it's the basis of my next NAS.

<24W with a pair of 8TB SATA drives spinning and a 240GB SSD. <10W with the spinning rust in standby.

I bought this I/O Crest 4 Port SATA III PCIe card and would like to see if I can get a 4-drive RAID array going:

[Image: DSC_2840]

Relevant Links:

@geerlingguy

I'm going to try out the IO Crest 4-port SATA adapter.

@geerlingguy

It has arrived!

@geerlingguy

And... I just realized I have no SATA power supply cable, just the data cable. So I'll have to wait for one of those to come in before I can actually test one of my SATA drives.

@geerlingguy

geerlingguy commented Oct 24, 2020

First light is good:

$ lspci

01:00.0 SATA controller: Marvell Technology Group Ltd. Device 9215 (rev 11) (prog-if 01 [AHCI 1.0])
	Subsystem: Marvell Technology Group Ltd. Device 9215
	Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Interrupt: pin A routed to IRQ 0
	Region 0: I/O ports at 0000
	Region 1: I/O ports at 0000
	Region 2: I/O ports at 0000
	Region 3: I/O ports at 0000
	Region 4: I/O ports at 0000
	Region 5: Memory at 600040000 (32-bit, non-prefetchable) [size=2K]
	Expansion ROM at 600000000 [size=256K]
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit-
		Address: 00000000  Data: 0000
	Capabilities: [70] Express (v2) Legacy Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
		DevCtl:	Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
			RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop-
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <512ns, L1 <64us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp-
		LnkCtl:	ASPM Disabled; RCB 64 bytes Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis+, LTR-, OBFF Not Supported
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
		LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete-, EqualizationPhase1-
			 EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
	Capabilities: [e0] SATA HBA v0.0 BAR4 Offset=00000004
	Capabilities: [100 v1] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
		AERCap:	First Error Pointer: 00, GenCap- CGenEn- ChkCap- ChkEn-

@geerlingguy

Though dmesg shows that it's hitting BAR default address space limits again:

[    0.925795] brcm-pcie fd500000.pcie: host bridge /scb/pcie@7d500000 ranges:
[    0.925818] brcm-pcie fd500000.pcie:   No bus range found for /scb/pcie@7d500000, using [bus 00-ff]
[    0.925884] brcm-pcie fd500000.pcie:      MEM 0x0600000000..0x0603ffffff -> 0x00f8000000
[    0.925948] brcm-pcie fd500000.pcie:   IB MEM 0x0000000000..0x00ffffffff -> 0x0100000000
[    0.953526] brcm-pcie fd500000.pcie: link up, 5 GT/s x1 (SSC)
[    0.953827] brcm-pcie fd500000.pcie: PCI host bridge to bus 0000:00
[    0.953844] pci_bus 0000:00: root bus resource [bus 00-ff]
[    0.953866] pci_bus 0000:00: root bus resource [mem 0x600000000-0x603ffffff] (bus address [0xf8000000-0xfbffffff])
[    0.953933] pci 0000:00:00.0: [14e4:2711] type 01 class 0x060400
[    0.954172] pci 0000:00:00.0: PME# supported from D0 D3hot
[    0.957560] PCI: bus0: Fast back to back transfers disabled
[    0.957582] pci 0000:00:00.0: bridge configuration invalid ([bus ff-ff]), reconfiguring
[    0.957802] pci 0000:01:00.0: [1b4b:9215] type 00 class 0x010601
[    0.957874] pci 0000:01:00.0: reg 0x10: [io  0x8000-0x8007]
[    0.957911] pci 0000:01:00.0: reg 0x14: [io  0x8040-0x8043]
[    0.957947] pci 0000:01:00.0: reg 0x18: [io  0x8100-0x8107]
[    0.957984] pci 0000:01:00.0: reg 0x1c: [io  0x8140-0x8143]
[    0.958021] pci 0000:01:00.0: reg 0x20: [io  0x800000-0x80001f]
[    0.958058] pci 0000:01:00.0: reg 0x24: [mem 0x00900000-0x009007ff]
[    0.958095] pci 0000:01:00.0: reg 0x30: [mem 0x00000000-0x0003ffff pref]
[    0.958262] pci 0000:01:00.0: PME# supported from D3hot
[    0.961586] PCI: bus1: Fast back to back transfers disabled
[    0.961605] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[    0.961674] pci 0000:00:00.0: BAR 8: assigned [mem 0x600000000-0x6000fffff]
[    0.961698] pci 0000:01:00.0: BAR 6: assigned [mem 0x600000000-0x60003ffff pref]
[    0.961722] pci 0000:01:00.0: BAR 5: assigned [mem 0x600040000-0x6000407ff]
[    0.961744] pci 0000:01:00.0: BAR 4: no space for [io  size 0x0020]
[    0.961759] pci 0000:01:00.0: BAR 4: failed to assign [io  size 0x0020]
[    0.961774] pci 0000:01:00.0: BAR 0: no space for [io  size 0x0008]
[    0.961788] pci 0000:01:00.0: BAR 0: failed to assign [io  size 0x0008]
[    0.961803] pci 0000:01:00.0: BAR 2: no space for [io  size 0x0008]
[    0.961817] pci 0000:01:00.0: BAR 2: failed to assign [io  size 0x0008]
[    0.961831] pci 0000:01:00.0: BAR 1: no space for [io  size 0x0004]
[    0.961845] pci 0000:01:00.0: BAR 1: failed to assign [io  size 0x0004]
[    0.961860] pci 0000:01:00.0: BAR 3: no space for [io  size 0x0004]
[    0.961873] pci 0000:01:00.0: BAR 3: failed to assign [io  size 0x0004]
[    0.961891] pci 0000:00:00.0: PCI bridge to [bus 01]
[    0.961914] pci 0000:00:00.0:   bridge window [mem 0x600000000-0x6000fffff]
[    0.962217] pcieport 0000:00:00.0: enabling device (0140 -> 0142)
[    0.962439] pcieport 0000:00:00.0: PME: Signaling with IRQ 55
[    0.962813] pcieport 0000:00:00.0: AER: enabled with IRQ 55

@geerlingguy

geerlingguy commented Oct 24, 2020

I just increased the BAR allocation following the directions in this Gist, but when I rebooted (without the card in), I got:

[    0.926161] brcm-pcie fd500000.pcie: host bridge /scb/pcie@7d500000 ranges:
[    0.926184] brcm-pcie fd500000.pcie:   No bus range found for /scb/pcie@7d500000, using [bus 00-ff]
[    0.926247] brcm-pcie fd500000.pcie:      MEM 0x0600000000..0x063fffffff -> 0x00c0000000
[    0.926312] brcm-pcie fd500000.pcie:   IB MEM 0x0000000000..0x00ffffffff -> 0x0100000000
[    1.521386] brcm-pcie fd500000.pcie: link down

Powering off completely, then booting again, it works. So, note to self: if you get a link down, try a hard power cycle instead of a reboot.

@geerlingguy

Ah... looking closer, those 'failed to assign' errors are for I/O BARs, which are unsupported on the Pi.

So... I posted in the BAR space thread on the Pi Forums asking 6by9 whether they have seen the same logs and whether they can be safely ignored. Still waiting on a way to power my drive so I can do an end-to-end test :)

@kitlith

kitlith commented Oct 27, 2020

Something else that may be interesting is whether you can get a SAS adapter/RAID card working. I was looking into SBCs with PCIe a while back for the purpose of building a low-power/low-heat host for some SAS drives I have. (I ended up just throwing them in a computer and not running it 24/7.)

@geerlingguy

That would be an interesting thing to test, though it'll have to wait a bit as I'm trying to get through some other cards and might also test 2.5 Gbps or 5 Gbps networking if I am able to!

@geerlingguy geerlingguy changed the title Test SATA adapter(s) Test SATA adapter (I/O Crest 4 port Marvell 9215) Oct 27, 2020
@geerlingguy

Without the kernel modules enabled, lsblk shows no device:

$ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
mmcblk0     179:0    0 29.8G  0 disk 
├─mmcblk0p1 179:1    0  256M  0 part /boot
└─mmcblk0p2 179:2    0 29.6G  0 part /

Going to try adding those modules and see what happens!
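
A quick way to check whether the AHCI driver is already available on a given kernel, before going to the trouble of a rebuild (just a sketch; on the stock 32-bit Pi OS kernel at the time, it wasn't built):

lsmod | grep ahci      # is the driver currently loaded?
modinfo ahci           # is the module present on disk at all?
sudo modprobe ahci     # try loading it; this fails if the module was never built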

@geerlingguy

geerlingguy commented Oct 27, 2020

# Install dependencies
sudo apt install -y git bc bison flex libssl-dev make libncurses5-dev

# Clone source
git clone --depth=1 https://github.com/raspberrypi/linux

# Apply default configuration
cd linux
export KERNEL=kernel7l # use kernel8 for 64-bit, or kernel7l for 32-bit
make bcm2711_defconfig

# Customize the .config further with menuconfig
make menuconfig
# Enable the following:
# Device Drivers:
#   -> Serial ATA and Parallel ATA drivers (libata)
#     -> AHCI SATA support
#     -> Marvell SATA support
#
# Alternatively add the following in .config manually:
# CONFIG_ATA=m
# CONFIG_ATA_VERBOSE_ERROR=y
# CONFIG_SATA_PMP=y
# CONFIG_SATA_AHCI=m
# CONFIG_SATA_MOBILE_LPM_POLICY=0
# CONFIG_ATA_SFF=y
# CONFIG_ATA_BMDMA=y
# CONFIG_SATA_MV=m

nano .config
# (edit CONFIG_LOCALVERSION and add a suffix that helps you identify your build)

# Build the kernel and copy everything into place
make -j4 zImage modules dtbs # 'Image' on 64-bit
sudo make modules_install
sudo cp arch/arm/boot/dts/*.dtb /boot/
sudo cp arch/arm/boot/dts/overlays/*.dtb* /boot/overlays/
sudo cp arch/arm/boot/dts/overlays/README /boot/overlays/
sudo cp arch/arm/boot/zImage /boot/$KERNEL.img
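
After rebooting into the new kernel, a couple of quick sanity checks (a sketch; the module names assume the config options listed above):

uname -r                      # should show the CONFIG_LOCALVERSION suffix you set
lsmod | grep -E 'ahci|ata'    # confirm the SATA modules actually loaded
lsblk                         # any attached SATA drive should now show up as /dev/sda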

@geerlingguy

Yahoo, it worked!

$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    1 223.6G  0 disk 
├─sda1        8:1    1   256M  0 part /media/pi/boot
└─sda2        8:2    1 223.3G  0 part /media/pi/rootfs
mmcblk0     179:0    0  29.8G  0 disk 
├─mmcblk0p1 179:1    0   256M  0 part /boot
└─mmcblk0p2 179:2    0  29.6G  0 part /

@geerlingguy

geerlingguy commented Oct 28, 2020

Repartitioning the drive:

sudo fdisk /dev/sda
d 1    # delete partition 1
d 2    # delete partition 2
n    # create new partition
p    # primary (default)
1    # partition 1 (default)
2048    # First sector (default)
468862127    # Last sector (default)
w    # write new partition table

Got the following:

The partition table has been altered.
Failed to remove partition 1 from system: Device or resource busy
Failed to remove partition 2 from system: Device or resource busy
Failed to add partition 1 to system: Device or resource busy

The kernel still uses the old partitions. The new table will be used at the next reboot. 
Syncing disks.
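
As an aside, the same single-partition layout can be created non-interactively with sfdisk (a sketch, assuming the same /dev/sda target):

# Write a new DOS label with one Linux partition spanning the whole disk
printf 'label: dos\n,,L\n' | sudo sfdisk /dev/sda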

Rebooted the Pi, then:

$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    1 223.6G  0 disk 
└─sda1        8:1    1 223.6G  0 part 
mmcblk0     179:0    0  29.8G  0 disk 
├─mmcblk0p1 179:1    0   256M  0 part /boot
└─mmcblk0p2 179:2    0  29.6G  0 part /

To format the device, use mkfs:

$ sudo mkfs.ext4 /dev/sda1
mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks: done                            
Creating filesystem with 58607510 4k blocks and 14655488 inodes
Filesystem UUID: dd4fa95d-edbf-4696-a9e1-ddf1f17da580
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done 

Then mount it somewhere:

$ sudo mkdir /mnt/sata-sda
$ sudo mount /dev/sda1 /mnt/sata-sda
$ mount
...
/dev/sda1 on /mnt/sata-sda type ext4 (rw,relatime)

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
...
/dev/sda1       220G   61M  208G   1% /mnt/sata-sda
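
To make that mount persist across reboots, the UUID reported by mkfs above can be added to /etc/fstab (a sketch; this wasn't part of the original test):

echo 'UUID=dd4fa95d-edbf-4696-a9e1-ddf1f17da580 /mnt/sata-sda ext4 defaults,noatime 0 2' | sudo tee -a /etc/fstab
sudo mount -a    # verify the entry parses and mounts cleanly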

@geerlingguy

geerlingguy commented Oct 28, 2020

Performance testing of the Kingston SA400S37/240G drive:

| Test | Result |
| --- | --- |
| hdparm | 314.79 MB/s |
| dd | 189.00 MB/s |
| random 4K read | 22.98 MB/s |
| random 4K write | 55.02 MB/s |

Compare that to the same drive over USB 3.0 using a USB to SATA adapter:

| Test | Result |
| --- | --- |
| hdparm | 296.71 MB/s |
| dd | 149.00 MB/s |
| random 4K read | 20.59 MB/s |
| random 4K write | 28.54 MB/s |

So not a night-and-day difference like with the NVMe drives, but definitely and noticeably faster. I'm now waiting on another SSD and a power splitter to arrive so I can test multiple SATA SSDs on this card.

And someone just mentioned they have some RAID cards they'd be willing to send me. Might have to pony up for a bunch of hard drives and have my desk turn into some sort of frankemonster NAS-of-many-drives soon!
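
For anyone wanting to reproduce numbers like these, a rough sketch of the kinds of commands behind each row (the exact benchmark script differs; the paths, sizes, and iozone flags here are assumptions):

# hdparm: buffered sequential reads from the raw device
sudo hdparm -t /dev/sda

# dd: large sequential write to the mounted filesystem
sudo dd if=/dev/zero of=/mnt/sata-sda/test bs=1M count=1024 oflag=direct status=progress

# iozone: 4K random read/write
sudo apt install -y iozone3
iozone -e -I -a -s 100M -r 4k -i 0 -i 2 -f /mnt/sata-sda/iozone.tmp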

@mo-g

mo-g commented Oct 30, 2020

I'm curious about other OSes. Obviously, Raspbian is a good basis, but as I recall, Fedora's 64-bit Pi build uses its own custom kernel. I'd be interested in seeing what they've "left in" from the standard kernel config.

I'm looking forward to picking one of these up in a month or so when they become available to the public, then I'll give it a try!

Side note for your list page: could you include PCI IDs as well as just the brand names of the cards? It'll help avoid confusion where cards have multiple revisions, as well as help non-US users identify comparable cards in their own markets.

Great work in the meantime! 👍

@mi-hol

mi-hol commented Nov 2, 2020

> And someone just mentioned they have some RAID cards they'd be willing to send me. Might have to pony up for a bunch of hard drives and have my desk turn into some sort of frankemonster NAS-of-many-drives soon!

It would be great to test a RAID card based on the Marvell 88SE9128 chipset, because it is used by many suppliers.

@geerlingguy

Trying again today (but cross-compiling this time, since it's oh-so-much faster) now that I have two drives and the appropriate power adapters. I'm planning on just testing a file copy between the drives for now; I'll get into other tests later.

@geerlingguy

Hmm... putting this on pause. My cross compilation is not dropping in the AHCI module for some reason, probably a bad .config :/

@geerlingguy

Also, the adapter gets hot after prolonged use.

@geerlingguy

(For anyone interested in testing on an LSI/IBM SAS card, check out #18)

@geerlingguy

My desk is becoming a war zone:

[Image: IMG_2720]

Plan is to set up a RAID (probably either 0 if I feel more YOLO-y or 1/10 if I'm more stable-minded) with either 2 or 4 drives, using mdadm.

I was having trouble with the SAS card; not sure if the cards are bad or if they just don't work at all with the Pi :(

@geerlingguy

Testing also with an NVMe using the IO Crest PCIe switch:

$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    1 223.6G  0 disk 
sdb           8:16   1 223.6G  0 disk 
└─sdb1        8:17   1 223.6G  0 part 
mmcblk0     179:0    0  29.8G  0 disk 
├─mmcblk0p1 179:1    0   256M  0 part /boot
└─mmcblk0p2 179:2    0  29.6G  0 part /
nvme0n1     259:0    0 232.9G  0 disk

I'll post some benchmarks copying files between one of the SSDs and the NVMe; it will be interesting to see how many MB/sec they can pump through the switch.

@geerlingguy

For a direct file copy from one drive to another:

# fallocate -l 10G /mnt/nvme/test.img
# pv /mnt/nvme/test.img > /mnt/sata-sda/test.img

I got an average of 190 MiB/sec, or about 1.52 Gbps. So two-way, that's 3.04 Gbps (under the 3.2 Gbps I was hoping for, but that's maybe down to PCIe switching?).

It looks like the CPU goes to 99%, with sda taking more than 50% of it; see the atop results during a copy:

[Image: atop screenshot, 2020-11-10 9:57 AM]

@geerlingguy

geerlingguy commented Nov 10, 2020

Also comparing raw disk speeds through the PCIe switch:

Kingston SSD

| Test | Result |
| --- | --- |
| hdparm | 364.23 MB/s |
| dd | 148.00 MB/s |
| random 4K read | 28.89 MB/s |
| random 4K write | 58.01 MB/s |

Samsung EVO 970 NVMe

| Test | Result |
| --- | --- |
| hdparm | 363.81 MB/s |
| dd | 166.00 MB/s |
| random 4K read | 46.50 MB/s |
| random 4K write | 75.41 MB/s |

These were on 64-bit Pi OS... so the numbers are a little higher than the 32-bit Pi OS results from earlier in the thread. But the good news is that the PCIe switching doesn't seem to cause any major performance penalty.

@geerlingguy

geerlingguy commented Nov 10, 2020

Software RAID0 testing using mdadm:

# Install mdadm.
sudo apt install -y mdadm

# Create a RAID0 array using sda1 and sdb1.
sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sd[a-b]1

# Create a mount point for the new RAID device.
sudo mkdir /mnt/raid0

# Format the RAID device.
sudo mkfs.ext4 /dev/md0

# Mount the RAID device.
sudo mount /dev/md0 /mnt/raid0
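
Before benchmarking, a couple of standard checks to confirm the array came up as expected:

cat /proc/mdstat               # active arrays and any sync/rebuild progress
sudo mdadm --detail /dev/md0   # RAID level, member devices, and state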

Benchmarking the device:

| Test | Result |
| --- | --- |
| hdparm | 293.35 MB/s |
| dd | 168.00 MB/s |
| random 4K read | 24.96 MB/s |
| random 4K write | 52.26 MB/s |

And during the 4K tests in iozone, I can see the sda/sdb devices are basically getting the same bottlenecks, except with a tiny bit of extra overhead from software-based RAID control:

[Image: atop screenshot, 2020-11-10 10:18 AM]

Then to stop and remove the RAID0 array:

sudo umount /mnt/raid0
sudo mdadm --stop /dev/md0
sudo mdadm --zero-superblock /dev/sd[a-b]1
sudo mdadm --remove /dev/md0

@push-gh

push-gh commented Oct 19, 2021

> Since google might land you here, like it did me on a search for "cm4 ubuntu sata", the latest development version Ubuntu Impish Indri has SATA support. Simply "sudo apt install linux-modules-extra-raspi" and then "modprobe ahci" or reboot.

Thanks. I saw that the required kernel configs were enabled in the kernel config file, but didn't find the modules in the modules directory. I thought it was an issue with the distribution and was going to compile the kernel myself. I didn't know that the extra modules are delivered as a separate package; fortunately, I found your comment.
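
For anyone who wants the module loaded automatically on every boot instead of running modprobe by hand, a small sketch (this uses the standard systemd modules-load.d mechanism, not anything specific to this thread):

sudo apt install -y linux-modules-extra-raspi
echo ahci | sudo tee /etc/modules-load.d/ahci.conf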

@stamaali4

Thanks @BeauSlim for the inputs. Today I tested the card with Ubuntu Impish Indri and got all 4 drives detected; however, I see the repeated errors below and am troubleshooting them now:

[ 788.484701] sd 0:0:0:0: [sda] Synchronizing SCSI cache
[ 788.484925] sd 0:0:0:0: [sda] Synchronize Cache(10) failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 788.484954] sd 0:0:0:0: [sda] Stopping disk
[ 788.485013] sd 0:0:0:0: [sda] Start/Stop Unit failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 788.879084] ata2.15: SATA link down (SStatus 0 SControl 320)
[ 790.521329] ata2.15: failed to read PMP GSCR[0] (Emask=0x100)
[ 790.521374] ata2.15: PMP revalidation failed (errno=-5)

If you know anything about these errors and can suggest what to look at, it would be a great help...

@BeauSlim

> (quoting the ata2.15 PMP error log from the previous comment)

Yeah, this Pi SATA stuff is definitely a bit tricky. Googling errors will get you a lot of people saying "your drive is dead", but I bet if you plug that card into a PC, everything will work perfectly even under heavy load.

I don't have a 9215. I have a Marvell 9230 and a JMicron 585. The 9230 card runs well aside from a lack of any way to change the RAID config.

PMP seems to be referring to a port multiplier? Are your 4 drives in an external enclosure? If so, I'd try connecting drives directly. If not, definitely try different cables.

If I push my JMB585 with 4 or 5 drives in a stripe or software RAID 10, it gives me a bunch of errors, but they are mostly "failed command: READ FPDMA QUEUED" which is different from yours. Adding "extraargs=libata.force=noncq" to my cmdline.txt solves that but hurts SATA performance. For other libata.force options to try (like using SATA I speeds), see https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html

You might also try adding "extraargs=pcie_aspm=off" to your cmdline.txt to turn off PCIe power management.

There is probably a firmware update for your card that you could try.
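
For reference, a few libata.force values that come up in this context, per the kernel parameters documentation linked above (they go on the kernel command line):

libata.force=noncq      # disable NCQ on all ports
libata.force=3.0Gbps    # cap every link at SATA II speed
libata.force=1.5Gbps    # cap every link at SATA I speed
libata.force=2:noncq    # apply a value to a single port (here ata2) only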

@stamaali4

Thanks @BeauSlim for your inputs; yes, I have 4 drives in an IOCrest external enclosure. The drives are definitely not dead, as I have tested them with the Radxa Quad SATA HAT and they work fine there. I will try turning off PCIe power management, try a different set of cables, and update with results.

@l0gical

l0gical commented Oct 22, 2021

"You might also try adding "extraargs=pcie_aspm=off" to your cmdline.txt to turn off PCIe power management."

I may also try this. My 9215 works absolutely fine with 3x 8TB SATA drives; the only issue I get occasionally is the drive power-down/up sound, and a couple of the disks changing from, say, sda/sdc to sdd/sde. It does, however, break OMV when that happens.

@mi-hol

mi-hol commented Dec 17, 2021

> The 9230 card runs well aside from a lack of any way to change the RAID config.

Does this mean, there is no CLI to enable/change hardware RAID modes?

@BeauSlim

> The 9230 card runs well aside from a lack of any way to change the RAID config.

> Does this mean, there is no CLI to enable/change hardware RAID modes?

That is correct. The Marvell hardware RAID config (MRU) is available only for x86/x64 processors.

You can put the card into a Windows or Linux PC, connect the disks you plan to use, configure RAID, and then move the card to your Pi setup. You might even be able to have a 9230-based card in the PC and just move the disks since the RAID config is stored on the drives themselves, not on the card.

This is probably fine if you just wanted to use striping or Hyperduo SSD caching, but if you want redundancy you will have no indication that a mirror has failed.

@mi-hol

mi-hol commented Dec 19, 2021

I'm testing an ASM1061R-based controller, basically identical to these. My setup is identical to your description below:

> put the card into a Windows or Linux PC, connect the disks you plan to use, configure RAID, and then move the card to your Pi setup. You might even be able to have a 9230-based card in the PC and just move the disks since the RAID config is stored on the drives themselves, not on the card.

Issue:

> if you want redundancy you will have no indication that a mirror has failed.

This is affecting my controller too, and was even confirmed by the distributor's technical support in this FAQ:
"Question:
I did not find a way to get a alert if a disk in a raid-1 set fails. the controller does not even stop the POST processes when a disk failed. I would expected that some RED blinking WARNING comes up or something and the PC only continues the POST if the degraded raid status gets committed. Documentation is very very poor..

Answer:
Hello Kalle,
thanks for your request. We're are sorry, that's the way this product works.
Kind regrads
InLine Support Team"

@geerlingguy, should such severe limitations not be documented in the "Raspberry Pi PCI Express device compatibility database"?

@geerlingguy

Testing on the Raspberry Pi 5:

pi@pi5:~ $ lspci
0000:00:00.0 PCI bridge: Broadcom Inc. and subsidiaries Device 2712 (rev 21)
0000:01:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9215 PCIe 2.0 x1 4-port SATA 6 Gb/s Controller (rev 11)
0001:00:00.0 PCI bridge: Broadcom Inc. and subsidiaries Device 2712 (rev 21)
0001:01:00.0 Ethernet controller: Device 1de4:0001

pi@pi5:~ $ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda           8:0    1 223.6G  0 disk 
├─sda1        8:1    1   200M  0 part 
└─sda2        8:2    1 223.4G  0 part 
mmcblk0     179:0    0 119.1G  0 disk 
├─mmcblk0p1 179:1    0   256M  0 part /boot
└─mmcblk0p2 179:2    0 118.8G  0 part /

At PCIe Gen 2.0, I'm getting some link errors—but otherwise the card seems to pass through clean-ish at least:

[   47.906098] ata1: softreset failed (device not ready)
[   48.382111] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[   48.387657] ata1.00: ATA-10: KINGSTON SA400S37240G, SBFKB1D2, max UDMA/133
[   48.390106] ata1.00: 468862128 sectors, multi 1: LBA48 NCQ (depth 32), AA
[   48.394334] ata1.00: configured for UDMA/133
[   48.394449] scsi 0:0:0:0: Direct-Access     ATA      KINGSTON SA400S3 B1D2 PQ: 0 ANSI: 5
[   48.394861] sd 0:0:0:0: [sda] 468862128 512-byte logical blocks: (240 GB/224 GiB)
[   48.394875] sd 0:0:0:0: [sda] Write Protect is off
[   48.394878] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[   48.394896] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[   48.394920] sd 0:0:0:0: [sda] Preferred minimum I/O size 512 bytes
[   48.425872] sd 0:0:0:0: Attached scsi generic sg0 type 0
[   48.458218] ata1.00: exception Emask 0x10 SAct 0x600000 SErr 0x380000 action 0x6 frozen
[   48.458227] ata1.00: irq_stat 0x08000000, interface fatal error
[   48.458229] ata1: SError: { 10B8B Dispar BadCRC }
[   48.458234] ata1.00: failed command: READ FPDMA QUEUED
[   48.458237] ata1.00: cmd 60/10:a8:00:00:00/00:00:00:00:00/40 tag 21 ncq dma 8192 in
                        res 40/00:b0:10:00:00/00:00:00:00:00/40 Emask 0x10 (ATA bus error)
[   48.458244] ata1.00: status: { DRDY }
[   48.458246] ata1.00: failed command: READ FPDMA QUEUED
[   48.458248] ata1.00: cmd 60/10:b0:10:00:00/00:00:00:00:00/40 tag 22 ncq dma 8192 in
                        res 40/00:b0:10:00:00/00:00:00:00:00/40 Emask 0x10 (ATA bus error)
[   48.458253] ata1.00: status: { DRDY }
[   48.458258] ata1: hard resetting link
[   48.934105] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[   53.982112] ata1.00: qc timeout after 5000 msecs (cmd 0xec)
[   53.982130] ata1.00: failed to IDENTIFY (I/O error, err_mask=0x4)
[   53.982135] ata1.00: revalidation failed (errno=-5)
[   53.982143] ata1: hard resetting link
[   54.458109] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[   54.459118] ata1.00: configured for UDMA/133
[   54.459157] ata1: EH complete
[   54.490108] ata1: limiting SATA link speed to 3.0 Gbps
[   54.490113] ata1.00: exception Emask 0x10 SAct 0x6000000 SErr 0x380000 action 0x6 frozen
[   54.490117] ata1.00: irq_stat 0x08000000, interface fatal error
[   54.490120] ata1: SError: { 10B8B Dispar BadCRC }
[   54.490126] ata1.00: failed command: READ FPDMA QUEUED
[   54.490130] ata1.00: cmd 60/10:c8:00:00:00/00:00:00:00:00/40 tag 25 ncq dma 8192 in
                        res 40/00:d0:10:00:00/00:00:00:00:00/40 Emask 0x10 (ATA bus error)
[   54.490139] ata1.00: status: { DRDY }
[   54.490143] ata1.00: failed command: READ FPDMA QUEUED
[   54.490146] ata1.00: cmd 60/10:d0:10:00:00/00:00:00:00:00/40 tag 26 ncq dma 8192 in
                        res 40/00:d0:10:00:00/00:00:00:00:00/40 Emask 0x10 (ATA bus error)
[   54.490154] ata1.00: status: { DRDY }
[   54.490160] ata1: hard resetting link
[   54.966104] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[   54.966147] ata1.00: failed to IDENTIFY (I/O error, err_mask=0x100)
[   54.966150] ata1.00: revalidation failed (errno=-5)
[   60.126109] ata1: hard resetting link
[   60.602105] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[   65.758102] ata1.00: qc timeout after 5000 msecs (cmd 0xec)
[   65.758117] ata1.00: failed to IDENTIFY (I/O error, err_mask=0x4)
[   65.758120] ata1.00: revalidation failed (errno=-5)
[   65.758128] ata1: limiting SATA link speed to 1.5 Gbps
[   65.758132] ata1: hard resetting link
[   66.234102] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
[   66.234338] ata1.00: configured for UDMA/133
[   66.234363] ata1: EH complete
[   66.278102] ata1.00: exception Emask 0x10 SAct 0x2 SErr 0x300000 action 0x6 frozen
[   66.278106] ata1.00: irq_stat 0x08000000, interface fatal error
[   66.278108] ata1: SError: { Dispar BadCRC }
[   66.278112] ata1.00: failed command: READ FPDMA QUEUED
[   66.278114] ata1.00: cmd 60/10:08:90:44:f2/00:00:1b:00:00/40 tag 1 ncq dma 8192 in
                        res 40/00:08:90:44:f2/00:00:1b:00:00/40 Emask 0x10 (ATA bus error)
[   66.278121] ata1.00: status: { DRDY }

I think the PCIe issues are down to the FFC cable and PCIe interference :(

@geerlingguy

I have some questions in to Raspberry Pi surrounding SATA support, and PCIe link quality. It seems like both cards I've tested run into some errors (more so than I get with NVMe...).

@SorX14

SorX14 commented Jan 4, 2024

Got my hands on a Pimoroni NVMe Base and used an M.2 to PCIe adapter.

Please excuse the sketchy setup - just wanted to quickly test.

[Image: test setup]

$ lspci
0000:00:00.0 PCI bridge: Broadcom Inc. and subsidiaries Device 2712 (rev 21)
0000:01:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9215 PCIe 2.0 x1 4-port SATA 6 Gb/s Controller (rev 11)
0001:00:00.0 PCI bridge: Broadcom Inc. and subsidiaries Device 2712 (rev 21)
0001:01:00.0 Ethernet controller: Device 1de4:0001

And connected 4 drives (these are old HDDs that have various partitions on them):

$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda           8:0    1 698.6G  0 disk
└─sda1        8:1    1 698.6G  0 part /mnt/sda
sdb           8:16   1 698.6G  0 disk
└─sdb1        8:17   1 698.6G  0 part /mnt/sdb
sdc           8:32   1 111.8G  0 disk
├─sdc1        8:33   1   512M  0 part
├─sdc2        8:34   1   513M  0 part
├─sdc3        8:35   1     1K  0 part
└─sdc5        8:37   1 110.8G  0 part /mnt/sdc
sdd           8:48   1 465.8G  0 disk
└─sdd1        8:49   1 465.8G  0 part
mmcblk0     179:0    0  58.9G  0 disk
├─mmcblk0p1 179:1    0   512M  0 part /boot/firmware
└─mmcblk0p2 179:2    0  58.4G  0 part /

And I successfully mounted all except sdd, which seems to be a dead HDD (although, looking at the pic in retrospect, the SATA cable doesn't look to be fully seated 🤷).

$ dmesg
...
[  778.260800] ata4.00: exception Emask 0x0 SAct 0x20 SErr 0x0 action 0x0
[  778.260807] ata4.00: irq_stat 0x40000008
[  778.260811] ata4.00: failed command: READ FPDMA QUEUED
[  778.260813] ata4.00: cmd 60/20:28:00:08:00/00:00:00:00:00/40 tag 5 ncq dma 16384 in
                        res 51/40:20:00:08:00/00:00:00:00:00/40 Emask 0x409 (media error) <F>
[  778.260822] ata4.00: status: { DRDY ERR }
[  778.260825] ata4.00: error: { UNC }
[  778.263264] ata4.00: configured for UDMA/133
[  778.263278] sd 3:0:0:0: [sdd] tag#5 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=DRIVER_OK cmd_age=3s
[  778.263283] sd 3:0:0:0: [sdd] tag#5 Sense Key : 0x3 [current]
[  778.263286] sd 3:0:0:0: [sdd] tag#5 ASC=0x11 ASCQ=0x4
[  778.263290] sd 3:0:0:0: [sdd] tag#5 CDB: opcode=0x28 28 00 00 00 08 00 00 00 20 00
[  778.263294] I/O error, dev sdd, sector 2048 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 2
[  778.263299] Buffer I/O error on dev sdd, logical block 128, async page read
[  778.263303] Buffer I/O error on dev sdd, logical block 129, async page read
[  778.263347] ata4: EH complete
[  781.644815] ata4.00: exception Emask 0x0 SAct 0x800 SErr 0x0 action 0x0
[  781.644822] ata4.00: irq_stat 0x40000008
...

Running pibenchmark yields the following in hardware identification:

...
Drives:
  Local Storage: total: 1.99 TiB used: 447.45 GiB (22.0%)
  ID-1: /dev/mmcblk0 model: USD00 size: 58.94 GiB
  ID-2: /dev/sda vendor: Western Digital model: WD7500BPKT-80PK4T0 size: 698.64 GiB
  ID-3: /dev/sdb vendor: Western Digital model: WD7500BPVT-22HXZT1 size: 698.64 GiB
  ID-4: /dev/sdc vendor: Samsung model: SSD 850 EVO 120GB size: 111.79 GiB
  ID-5: /dev/sdd vendor: Hitachi model: HTS725050A9A362 size: 465.76 GiB
  Message: No optical or floppy data found.
...

sda results: https://pibenchmarks.com/benchmark/76956/
sdb results: https://pibenchmarks.com/benchmark/76955/
sdc results: https://pibenchmarks.com/benchmark/76957/
sdd results: DNQ

I then ran two benchmarks in parallel on sda and sdc which both completed (didn't submit results).

All this to say that I think this card now works as expected. I was using the default PCIe link speed.

@jamesy0ung

For anyone looking to use this card on a CM4IO, I designed a bracket for it that keeps it in place, so it doesn't come loose and cause a system lockup.

https://www.printables.com/model/789947-cm4-si-pex40064-bracket/related

@zogthegreat

zogthegreat commented Sep 9, 2024

Hi everyone!

I just installed an ASMedia ASM1064 4 Port SATA III card onto a P02 PCIe adapter board, and it was detected immediately upon boot:

pi@NAS-Pi:~ $ lspci
0000:00:00.0 PCI bridge: Broadcom Inc. and subsidiaries BCM2712 PCIe Bridge (rev 21)
0000:01:00.0 SATA controller: ASMedia Technology Inc. ASM1064 Serial ATA Controller (rev 02)
0001:00:00.0 PCI bridge: Broadcom Inc. and subsidiaries BCM2712 PCIe Bridge (rev 21)
0001:01:00.0 Ethernet controller: Raspberry Pi Ltd RP1 PCIe 2.0 South Bridge

However, I can't seem to get the Pi to see the drive that I have plugged into it. Should I recompile the kernel or am I having a different problem?

[EDIT]

Checking my logs, I see messages about the AHCI controller being unavailable:

pi@NAS-Pi:~ $ journalctl -p 0..3 -r
Sep 09 18:07:45 NAS-Pi bluetoothd[811]: sap-server: Operation not permitted (1)
Sep 09 18:07:45 NAS-Pi bluetoothd[811]: profiles/sap/server.c:sap_server_register() Sap driver initialization failed.
Sep 09 18:07:45 NAS-Pi wpa_supplicant[819]: nl80211: kernel reports: Registration to specific type not supported
Sep 09 18:07:44 NAS-Pi bluetoothd[811]: src/plugin.c:plugin_init() Failed to init bap plugin
Sep 09 18:07:44 NAS-Pi bluetoothd[811]: src/plugin.c:plugin_init() Failed to init mcp plugin
Sep 09 18:07:44 NAS-Pi bluetoothd[811]: src/plugin.c:plugin_init() Failed to init vcp plugin
Sep 09 18:07:43 NAS-Pi kernel: ahci 0000:01:00.0: AHCI controller unavailable!
Sep 09 18:07:43 NAS-Pi kernel: ahci 0000:01:00.0: AHCI controller unavailable!
Sep 09 18:07:43 NAS-Pi kernel: ahci 0000:01:00.0: AHCI controller unavailable!
Sep 09 18:07:43 NAS-Pi kernel: ahci 0000:01:00.0: AHCI controller unavailable!
Sep 09 18:07:43 NAS-Pi kernel: ahci 0000:01:00.0: AHCI controller unavailable!
Sep 09 18:07:43 NAS-Pi kernel: ahci 0000:01:00.0: AHCI controller unavailable!

pi@NAS-Pi:~ $ dmesg | grep -iC 3 "sata"
[ 3.274371] ahci 0000:01:00.0: enabling device (0000 -> 0002)
[ 3.285683] ahci 0000:01:00.0: SSS flag set, parallel bus scan disabled
[ 3.296822] input: pwr_button as /devices/platform/pwr_button/input/input1
[ 3.305594] ahci 0000:01:00.0: AHCI 0001.0301 32 slots 24 ports 6 Gbps 0xffff0f impl SATA mode
[ 3.316329] ahci 0000:01:00.0: flags: 64bit ncq sntf stag pm led only pio slum part deso sadm sds apst
[ 3.333651] scsi host0: ahci
[ 3.336915] scsi host1: ahci
[ 3.439277] scsi host23: ahci
[ 3.439597] ata1: SATA max UDMA/133 abar m8192@0x1b00082000 port 0x1b00082100 irq 166
[ 3.481783] ata2: SATA max UDMA/133 abar m8192@0x1b00082000 port 0x1b00082180 irq 166
[ 3.489888] ata3: SATA max UDMA/133 abar m8192@0x1b00082000 port 0x1b00082200 irq 166
[ 3.497997] ata4: SATA max UDMA/133 abar m8192@0x1b00082000 port 0x1b00082280 irq 166
[ 3.506108] ata5: DUMMY
[ 3.508687] ata6: DUMMY
[ 3.511266] ata7: DUMMY
[ 3.513841] ata8: DUMMY
[ 3.516433] ata9: SATA max UDMA/133 abar m8192@0x1b00082000 port 0x1b00082500 irq 166
[ 3.524587] ata10: SATA max UDMA/133 abar m8192@0x1b00082000 port 0x1b00082580 irq 166
[ 3.532858] ata11: SATA max UDMA/133 abar m8192@0x1b00082000 port 0x1b00082600 irq 166

........... repeated entries cut ..................

[ 7.022355] ata1: found unknown device (class 0)
[ 7.027240] ata1: SATA link down (SStatus 0 SControl 300)
[ 7.344795] ata2: SATA link down (SStatus 0 SControl 300)
[ 8.048127] ata3: SATA link down (SStatus FFFFFFFF SControl FFFFFFFF)
[ 8.312022] ahci 0000:01:00.0: AHCI controller unavailable!
[ 8.607207] ahci 0000:01:00.0: AHCI controller unavailable!
[ 15.715690] ata4: failed to resume link (SControl FFFFFFFF)
[ 20.351100] ata4: SATA link down (SStatus FFFFFFFF SControl FFFFFFFF)
[ 22.093865] ahci 0000:01:00.0: AHCI controller unavailable!
[ 22.678491] ahci 0000:01:00.0: AHCI controller unavailable!
[ 30.655706] ata9: failed to resume link (SControl FFFFFFFF)
[ 35.291184] ata9: SATA link down (SStatus FFFFFFFF SControl FFFFFFFF)
[ 37.035697] ahci 0000:01:00.0: AHCI controller unavailable!
[ 37.620200] ahci 0000:01:00.0: AHCI controller unavailable!
[ 45.599690] ata10: failed to resume link (SControl FFFFFFFF)
[ 50.235158] ata10: SATA link down (SStatus FFFFFFFF SControl FFFFFFFF)
[ 51.977983] ahci 0000:01:00.0: AHCI controller unavailable!
[ 52.562453] ahci 0000:01:00.0: AHCI controller unavailable!
[ 60.539690] ata11: failed to resume link (SControl FFFFFFFF)
[ 65.175161] ata11: SATA link down (SStatus FFFFFFFF SControl FFFFFFFF)
[ 66.918030] ahci 0000:01:00.0: AHCI controller unavailable!
[ 67.502519] ahci 0000:01:00.0: AHCI controller unavailable!
[ 75.479691] ata12: failed to resume link (SControl FFFFFFFF)
[ 80.115143] ata12: SATA link down (SStatus FFFFFFFF SControl FFFFFFFF)
[ 81.857942] ahci 0000:01:00.0: AHCI controller unavailable!

........... repeated entries cut ..................

[ 146.630318] EXT4-fs (mmcblk0p2): mounted filesystem 56f80fa2-e005-4cca-86e6-19da1069914d ro with ordered data mode. Quota mode: none.
[ 147.524517] Segment Routing with IPv6
[ 147.528371] In-situ OAM (IOAM) with IPv6
[ 150.183706] ata17: failed to resume link (SControl FFFFFFFF)
[ 154.819196] ata17: SATA link down (SStatus FFFFFFFF SControl FFFFFFFF)
[ 156.562033] ahci 0000:01:00.0: AHCI controller unavailable!
[ 157.146544] ahci 0000:01:00.0: AHCI controller unavailable!
[ 165.123702] ata18: failed to resume link (SControl FFFFFFFF)
[ 169.759219] ata18: SATA link down (SStatus FFFFFFFF SControl FFFFFFFF)
[ 171.502060] ahci 0000:01:00.0: AHCI controller unavailable!
[ 172.086531] ahci 0000:01:00.0: AHCI controller unavailable!
[ 180.063689] ata19: failed to resume link (SControl FFFFFFFF)
[ 184.699163] ata19: SATA link down (SStatus FFFFFFFF SControl FFFFFFFF)
[ 186.441985] ahci 0000:01:00.0: AHCI controller unavailable!

........... repeated entries cut ..................

[ 261.141938] ahci 0000:01:00.0: AHCI controller unavailable!
[ 261.173976] systemd[1]: systemd 252.30-1~deb12u2 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
[ 261.207586] systemd[1]: Detected architecture arm64.

@PixlRainbow

@zogthegreat

> SATA link down

I wonder if that might be the drive having trouble powering on? Is it being powered from an external PSU?

@zogthegreat

Yes, but I used an SSD, so I can't really tell if the drive is powering up. I'll use a normal SATA drive tomorrow, and try again with a different PSU and buck converters. I'll post my results.

@zogthegreat

I retried today with an ATX PSU to power the drive. I used a normal SATA HDD so that I could feel if it was spinning. The drive was sort of seen:

[    3.864156] ata24: SATA max UDMA/133 abar m8192@0x1b00082000 port 0x1b00082c80 irq 168
[ 7.002270] ata1: found unknown device (class 0)
[ 7.007153] ata1: SATA link down (SStatus 0 SControl 300)

But I think that my problem is here:

pi@NAS-Pi:~ $ dmesg | grep -iC 3 "sata"
[ 3.584292] ahci 0000:01:00.0: version 3.0
[ 3.584315] ahci 0000:01:00.0: enabling device (0000 -> 0002)
[ 3.590658] ahci 0000:01:00.0: SSS flag set, parallel bus scan disabled
[ 3.597727] ahci 0000:01:00.0: AHCI 0001.0301 32 slots 24 ports 6 Gbps 0xffff0f impl SATA mode
[ 3.606772] ahci 0000:01:00.0: flags: 64bit ncq sntf stag pm led only pio slum part deso sadm sds

The card that I am using only has 4 ports, yet for some reason the Pi is seeing 24 ports. I'm going to see if there are any drivers or maybe source code available.

@zogthegreat

zogthegreat commented Sep 10, 2024

> # Build the kernel and copy everything into place
> make -j4 zImage modules dtbs # 'Image' on 64-bit
> sudo make modules_install
> sudo cp arch/arm/boot/dts/*.dtb /boot/
> sudo cp arch/arm/boot/dts/overlays/*.dtb* /boot/overlays/
> sudo cp arch/arm/boot/dts/overlays/README /boot/overlays/
> sudo cp arch/arm/boot/zImage /boot/$KERNEL.img

@geerlingguy I'm trying to compile the kernel as you described. However, when I get to "sudo cp arch/arm/boot/dts/*.dtb /boot/" I get an error:

pi@NAS-Pi:~/linux $ sudo cp arch/arm/boot/dts/*.dtb /boot/
cp: cannot stat 'arch/arm/boot/dts/*.dtb': No such file or directory

Any suggestions?

[EDIT]

Here is what's in my arch/arm/boot/dts directory after compiling:

pi@NAS-Pi:~/linux $ ls arch/arm/boot/dts
actions     arm           cirrus                     hisilicon  microchip  overlays  sigmastar  tps6507x.dtsi  xilinx
airoha      armv7-m.dtsi  cnxt                       hpe        moxa       qcom      socionext  tps65217.dtsi
allwinner   aspeed        cros-adc-thermistors.dtsi  intel      nspire     realtek   st         tps65910.dtsi
alphascale  axis          cros-ec-keyboard.dtsi      Makefile   nuvoton    renesas   sunplus    unisoc
amazon      broadcom      cros-ec-sbs.dtsi           marvell    nvidia     rockchip  synaptics  vt8500
amlogic     calxeda       gemini                     mediatek   nxp        samsung   ti         xen

@6by9

6by9 commented Sep 11, 2024

> # Build the kernel and copy everything into place
> make -j4 zImage modules dtbs # 'Image' on 64-bit
> sudo make modules_install
> sudo cp arch/arm/boot/dts/*.dtb /boot/
> sudo cp arch/arm/boot/dts/overlays/*.dtb* /boot/overlays/
> sudo cp arch/arm/boot/dts/overlays/README /boot/overlays/
> sudo cp arch/arm/boot/zImage /boot/$KERNEL.img

> @geerlingguy I'm trying to compile the kernel as you described. However, when I get to "sudo cp arch/arm/boot/dts/*.dtb /boot/" I get an error:
>
> pi@NAS-Pi:~/linux $ sudo cp arch/arm/boot/dts/*.dtb /boot/
> cp: cannot stat 'arch/arm/boot/dts/*.dtb': No such file or directory
>
> Any suggestions?

Docs from Raspberry Pi on building the kernel are at https://www.raspberrypi.com/documentation/computers/linux_kernel.html#native-build
Between 6.1 and 6.6 all the Broadcom DT files for arm32 moved into arch/arm/boot/dts/broadcom. The copy command has to be updated to match, and is in the Pi docs. (arm64 DT files have always been in arch/arm64/boot/dts/broadcom.)

I'm fairly certain that the relevant modules for the Marvell SATA cards are enabled in the default Pi kernels, so there is no need to rebuild the kernel yourself for that reason.
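
For what it's worth, a sketch of the updated copy step for a 32-bit build on those newer trees (destination assumed to be /boot/firmware/ on Bookworm-based Pi OS; the Pi docs linked above have the authoritative commands):

sudo cp arch/arm/boot/dts/broadcom/*.dtb /boot/firmware/
sudo cp arch/arm/boot/dts/overlays/*.dtb* /boot/firmware/overlays/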

@geerlingguy

Also debugging a similar error over here: #85

The answer according to raspberrypi/linux#6214 is to add the following lines to /boot/firmware/config.txt and reboot:

dtoverlay=pciex1-compat-pi5,no-mip
dtoverlay=pcie-32bit-dma-pi5
