On Saturday 25 March 2006 17:37, Duncan <1i5t5.duncan@...> wrote
about '[gentoo-amd64] Re: 2.6.16 and ndiswrapper':
> I only found another late in the cycle, when I upgraded to 8 gigs of
> memory from 1 gig. .15 and stable releases thereof work. .16 doesn't.
> I haven't had time to trace that one further, however, and it's possible
> I just don't quite have the kernel configured correctly and .15 just
> happens to work anyway. This issue has to do with SATA, actually, I
> think SCSI. With 8 gig of memory I'd normally configure IOMMU on but
> that doesn't work with .15 or .16. With it off, .15 works, but .16
> fails when it tries to load the (libata based therefore SCSI based) SATA
> RAID, saying the SCSI device nodes don't exist! My root is on RAID so
> this is early kernel, where it first tries to load
> SCSI-then-RAID-then-rootfs-read-only, so it's well before anything
> userspace side is running, so it /has/ to be a kernel issue. There's a
> changelog entry saying they eliminated bounce-buffers for SCSI that I
> think is the problem, since bounce-buffers are >4 gig memory related,
> but as I said, I haven't traced it yet, so I can't say for sure.
I had something similar happen on my 4GB system. With the IOMMU off, I'd
only have access to about 2GB of memory. With it on, I got varying
results, ranging from a kernel panic before the initrd loaded, to a small
amount of missing memory (maybe the size of my PCI IO window?), to a
completely working system with devices properly mapped to memory addresses
beyond my physical RAM. Oddly enough, these various behaviors were
controlled by my BIOS settings.
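For what it's worth, a quick way to see whether you're hitting the same
memory-visibility problem is to compare what the kernel reports against
your installed RAM, and to grep the boot log for IOMMU/aperture messages
(just a generic diagnostic sketch, not specific to either board):

```shell
#!/bin/sh
# How much RAM does the kernel actually see? Compare against installed RAM.
grep MemTotal /proc/meminfo

# Look for IOMMU/GART aperture messages from early boot.
# "|| true" keeps the script from failing if nothing matches.
dmesg | grep -i -e iommu -e aperture || true
```

On amd64 you can also experiment with the `iommu=` kernel boot parameter
(e.g. `iommu=soft` to force software bounce buffers, or `iommu=off`) to
narrow down whether the hardware GART IOMMU setup is what's misbehaving.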
I'm running a Tyan Dual-Opteron Dual-PCIe board... If you've got a similar
board, I can share my BIOS and kernel settings and maybe you can resolve
your IOMMU issues.
"If there's one thing we've established over the years,
it's that the vast majority of our users don't have the slightest
clue what's best for them in terms of package stability."
-- Gentoo Developer Ciaran McCreesh