public inbox for gentoo-user@lists.gentoo.org
 help / color / mirror / Atom feed
* [gentoo-user] Package compile failures with "internal compiler error: Segmentation fault".
@ 2024-09-03 23:28 Dale
  2024-09-04  0:12 ` [gentoo-user] " Grant Edwards
                   ` (3 more replies)
  0 siblings, 4 replies; 55+ messages in thread
From: Dale @ 2024-09-03 23:28 UTC (permalink / raw
  To: gentoo-user

Howdy,

I was trying to re-emerge some packages.  The ones I was working on
failed with "internal compiler error: Segmentation fault" or something
similar as the common reason.  I did get gcc to compile and install,
but other packages are still failing, while some compile just fine.
Here's at least a partial list:

net-libs/webkit-gtk
kde-plasma/kpipewire
sys-devel/clang
sys-devel/llvm


When I couldn't get a couple to complete, I just went to my chroot and
started an emerge -e world.  Then the packages above started failing in
the chroot as well.  This all started when gkrellm would not open due to
a missing module.  Some info on gcc:


root@Gentoo-1 / # gcc-config -l
 [1] x86_64-pc-linux-gnu-13 *
root@Gentoo-1 / #


Output of one failed package.


In file included from
/var/tmp/portage/net-libs/webkit-gtk-2.44.2/work/webkitgtk-2.44.2/Source/WebCore/platform/graphics/GraphicsLayer.h:46,
                 from
/var/tmp/portage/net-libs/webkit-gtk-2.44.2/work/webkitgtk-2.44.2/Source/WebCore/platform/graphics/GraphicsLayerContentsDisplayDelegate.h:28,
                 from
/var/tmp/portage/net-libs/webkit-gtk-2.44.2/work/webkitgtk-2.44.2/Source/WebCore/html/canvas/CanvasRenderingContext.h:29,
                 from
/var/tmp/portage/net-libs/webkit-gtk-2.44.2/work/webkitgtk-2.44.2/Source/WebCore/html/canvas/GPUBasedCanvasRenderingContext.h:29,
                 from
/var/tmp/portage/net-libs/webkit-gtk-2.44.2/work/webkitgtk-2.44.2/Source/WebCore/html/canvas/WebGLRenderingContextBase.h:33,
                 from
/var/tmp/portage/net-libs/webkit-gtk-2.44.2/work/webkitgtk-2.44.2/Source/WebCore/html/canvas/WebGLStencilTexturing.h:29,
                 from
/var/tmp/portage/net-libs/webkit-gtk-2.44.2/work/webkitgtk-2.44.2/Source/WebCore/html/canvas/WebGLStencilTexturing.cpp:29,
                 from
/var/tmp/portage/net-libs/webkit-gtk-2.44.2/work/webkitgtk-2.44.2_build/WebCore/DerivedSources/unified-sources/UnifiedSource-950a39b6-33.cpp:1:
/var/tmp/portage/net-libs/webkit-gtk-2.44.2/work/webkitgtk-2.44.2/Source/WebCore/platform/ScrollableArea.h:96:153:
internal compiler error: in layout_decl, at stor-layout.cc:642
   96 |     virtual bool requestScrollToPosition(const ScrollPosition&,
const ScrollPositionChangeOptions& =
ScrollPositionChangeOptions::createProgrammatic()) { return false; }
     
|                                                                                                                                                        
^
0x1d56132 internal_error(char const*, ...)
        ???:0
0x6dd3d1 fancy_abort(char const*, int, char const*)
        ???:0
0x769dc4 start_preparsed_function(tree_node*, tree_node*, int)
        ???:0
0x85cd68 c_parse_file()
        ???:0
0x955f41 c_common_parse_file()
        ???:0


And another package:


/usr/lib/gcc/x86_64-pc-linux-gnu/13/include/g++-v13/tuple: In
instantiation of ‘constexpr std::__tuple_element_t<__i,
std::tuple<_UTypes ...> >& std::get(const tuple<_UTypes ...>&) [with
long unsigned int __i = 0; _Elements =
{clang::CodeGen::CoverageMappingModuleGen*,
default_delete<clang::CodeGen::CoverageMappingModuleGen>};
__tuple_element_t<__i, tuple<_UTypes ...> > =
clang::CodeGen::CoverageMappingModuleGen*]’:
/usr/lib/gcc/x86_64-pc-linux-gnu/13/include/g++-v13/bits/unique_ptr.h:199:62:  
required from ‘std::__uniq_ptr_impl<_Tp, _Dp>::pointer
std::__uniq_ptr_impl<_Tp, _Dp>::_M_ptr() const [with _Tp =
clang::CodeGen::CoverageMappingModuleGen; _Dp =
std::default_delete<clang::CodeGen::CoverageMappingModuleGen>; pointer =
clang::CodeGen::CoverageMappingModuleGen*]’
/usr/lib/gcc/x86_64-pc-linux-gnu/13/include/g++-v13/bits/unique_ptr.h:470:27:  
required from ‘std::unique_ptr<_Tp, _Dp>::pointer std::unique_ptr<_Tp,
_Dp>::get() const [with _Tp = clang::CodeGen::CoverageMappingModuleGen;
_Dp = std::default_delete<clang::CodeGen::CoverageMappingModuleGen>;
pointer = clang::CodeGen::CoverageMappingModuleGen*]’
/var/tmp/portage/sys-devel/clang-16.0.6/work/clang/lib/CodeGen/CodeGenModule.h:668:31:  
required from here
/usr/lib/gcc/x86_64-pc-linux-gnu/13/include/g++-v13/tuple:1810:43:
internal compiler error: Segmentation fault
 1810 |     { return std::__get_helper<__i>(__t); }
      |                                           ^
0x1d56132 internal_error(char const*, ...)
        ???:0
0x9816d6 ggc_set_mark(void const*)
        ???:0
0x8cc377 gt_ggc_mx_lang_tree_node(void*)
        ???:0
0x8cccfc gt_ggc_mx_lang_tree_node(void*)
        ???:0
0x8ccddf gt_ggc_mx_lang_tree_node(void*)
        ???:0
0x8ccda1 gt_ggc_mx_lang_tree_node(void*)
        ???:0


As you can tell, an internal compiler error is the common theme.  All
the failures I looked at are very similar to these.  I suspect there is
a common cause, but I have no idea where to start.

Anyone have any ideas on what is causing this?  Searches turn up
everything from a bad kernel to bad gcc to bad hardware.  They may as
well throw in a bad mouse while at it.  LOL  A couple of people seemed
to solve this by upgrading to a newer version of gcc.  Thing is, this
is supposed to be a stable version of gcc.

Open to ideas.  I hope I don't have to move back to the old rig while
sorting this out.  O_O  Oh, I updated my old rig this past weekend.  Not
a single problem on it.  Everything updated just fine. 

Thanks.

Dale

:-)  :-) 


^ permalink raw reply	[flat|nested] 55+ messages in thread

* [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-03 23:28 [gentoo-user] Package compile failures with "internal compiler error: Segmentation fault" Dale
@ 2024-09-04  0:12 ` Grant Edwards
  2024-09-04  0:39   ` Dale
  2024-09-04  7:53   ` Raffaele Belardi
  2024-09-04  4:26 ` [gentoo-user] " Eli Schwartz
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 55+ messages in thread
From: Grant Edwards @ 2024-09-04  0:12 UTC (permalink / raw
  To: gentoo-user

On 2024-09-03, Dale <rdalek1967@gmail.com> wrote:

> I was trying to re-emerge some packages.  The ones I was working on
> failed with "internal compiler error: Segmentation fault" or similar
> being the common reason for failing.

In my experience, that usually means failing RAM.  I'd try running
memtest86 for a day or two.

--
Grant





^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-04  0:12 ` [gentoo-user] " Grant Edwards
@ 2024-09-04  0:39   ` Dale
  2024-09-04  4:16     ` corbin bird
                       ` (2 more replies)
  2024-09-04  7:53   ` Raffaele Belardi
  1 sibling, 3 replies; 55+ messages in thread
From: Dale @ 2024-09-04  0:39 UTC (permalink / raw
  To: gentoo-user

Grant Edwards wrote:
> On 2024-09-03, Dale <rdalek1967@gmail.com> wrote:
>
>> I was trying to re-emerge some packages.  The ones I was working on
>> failed with "internal compiler error: Segmentation fault" or similar
>> being the common reason for failing.
> In my experience, that usually means failing RAM.  I'd try running
> memtest86 for a day or two.
>
> --
> Grant

I've seen that before too.  I'm hoping not.  I may shut down my rig,
remove and reseat the memory, and then test it for a bit.  It may be a
bad connection.  It has worked well for the past couple of months tho.
Still, it could be either a bad connection or a stick that is going bad.

Dang, those memory sticks ain't cheap.  o_~

Thanks.  We'll see if anyone else has any other ideas.

Dale

:-)  :-) 


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-04  0:39   ` Dale
@ 2024-09-04  4:16     ` corbin bird
  2024-09-06 20:15     ` Dale
  2024-09-07 22:12     ` Wols Lists
  2 siblings, 0 replies; 55+ messages in thread
From: corbin bird @ 2024-09-04  4:16 UTC (permalink / raw
  To: gentoo-user


On 9/3/24 19:39, Dale wrote:
> Grant Edwards wrote:
>> On 2024-09-03, Dale <rdalek1967@gmail.com> wrote:
>>
>>> I was trying to re-emerge some packages.  The ones I was working on
>>> failed with "internal compiler error: Segmentation fault" or similar
>>> being the common reason for failing.
>> In my experience, that usually means failing RAM.  I'd try running
>> memtest86 for a day or two.
>>
>> --
>> Grant
> I've seen that before too.  I'm hoping not.  I may shutdown my rig,
> remove and reinstall the memory and then test it for a bit.  May be a
> bad connection.  It has worked well for the past couple months tho.
> Still, it is possible to either be a bad connection or just going bad.
>
> Dang those memory sticks ain't cheap.  o_~
>
> Thanks.  See if anyone else has any other ideas.
>
> Dale
>
> :-)  :-)
>
Please refresh my memory: what brand of CPU (Intel or AMD) is in your
new rig?

----

If AMD, binutils can build -broken- without failing the compile
process, and gcc then starts segfaulting constantly.

Workaround: set up a package.env for gcc and binutils.

gcc.conf contents:

CFLAGS="-march=generic -O2 -pipe"
CXXFLAGS="-march=generic -O2 -pipe"

Use the old version of gcc to rebuild both binutils and gcc (the new
version).  Leave this fix in place to prevent the error from recurring.
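[Editor's sketch of the package.env setup described above, using the
standard Portage locations.  One hedge: gcc accepts "generic" only as an
-mtune= value and rejects -march=generic, so the sketch pairs a baseline
-march=x86-64 with -mtune=generic; substitute whatever conservative flags
your gcc accepts.  The path is overridable so the sketch can be tried
without touching /etc/portage.]

```shell
# On a real system set CONFROOT=/etc/portage (and run as root); the
# default here is a scratch path so the sketch can be tried safely.
CONFROOT="${CONFROOT:-/tmp/portage-demo}"
mkdir -p "$CONFROOT/env"

# Conservative per-package flags.  Note: gcc accepts "generic" only for
# -mtune=, so pair it with a baseline -march instead of -march=generic.
cat > "$CONFROOT/env/gcc.conf" <<'EOF'
CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe"
CXXFLAGS="${CFLAGS}"
EOF

# Tell Portage to apply those flags only to gcc and binutils.
cat >> "$CONFROOT/package.env" <<'EOF'
sys-devel/gcc gcc.conf
sys-devel/binutils gcc.conf
EOF

echo "wrote $CONFROOT/env/gcc.conf and $CONFROOT/package.env"
```

After that, re-emerging gcc and binutils picks up the reduced flags
automatically; everything else keeps the flags from make.conf.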


Hope this helps,

Corbin



^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-03 23:28 [gentoo-user] Package compile failures with "internal compiler error: Segmentation fault" Dale
  2024-09-04  0:12 ` [gentoo-user] " Grant Edwards
@ 2024-09-04  4:26 ` Eli Schwartz
  2024-09-04 10:48 ` [gentoo-user] " Dale
  2024-09-25 20:41 ` Dale
  3 siblings, 0 replies; 55+ messages in thread
From: Eli Schwartz @ 2024-09-04  4:26 UTC (permalink / raw
  To: gentoo-user


[-- Attachment #1.1: Type: text/plain, Size: 2733 bytes --]

On 9/3/24 7:28 PM, Dale wrote:
> Howdy,
> 
> I was trying to re-emerge some packages.  The ones I was working on
> failed with "internal compiler error: Segmentation fault" or similar
> being the common reason for failing.  I did get gcc to compile and
> install.  But other packages are failing, but some are compiling just
> fine.  Here's a partial list at least. 
> 
> net-libs/webkit-gtk
> kde-plasma/kpipewire
> sys-devel/clang
> sys-devel/llvm




> /var/tmp/portage/net-libs/webkit-gtk-2.44.2/work/webkitgtk-2.44.2/Source/WebCore/platform/ScrollableArea.h:96:153:
> internal compiler error: in layout_decl, at stor-layout.cc:642
>    96 |     virtual bool requestScrollToPosition(const ScrollPosition&,
> const ScrollPositionChangeOptions& =
> ScrollPositionChangeOptions::createProgrammatic()) { return false; }
>      
> |                                                                                                                                                        
> ^
> 0x1d56132 internal_error(char const*, ...)
>         ???:0
> 0x6dd3d1 fancy_abort(char const*, int, char const*)
>         ???:0
> 0x769dc4 start_preparsed_function(tree_node*, tree_node*, int)
>         ???:0
> 0x85cd68 c_parse_file()
>         ???:0
> 0x955f41 c_common_parse_file()
>         ???:0




> /var/tmp/portage/sys-devel/clang-16.0.6/work/clang/lib/CodeGen/CodeGenModule.h:668:31:  
> required from here
> /usr/lib/gcc/x86_64-pc-linux-gnu/13/include/g++-v13/tuple:1810:43:
> internal compiler error: Segmentation fault
>  1810 |     { return std::__get_helper<__i>(__t); }
>       |                                           ^
> 0x1d56132 internal_error(char const*, ...)
>         ???:0
> 0x9816d6 ggc_set_mark(void const*)
>         ???:0
> 0x8cc377 gt_ggc_mx_lang_tree_node(void*)
>         ???:0
> 0x8cccfc gt_ggc_mx_lang_tree_node(void*)
>         ???:0
> 0x8ccddf gt_ggc_mx_lang_tree_node(void*)
>         ???:0
> 0x8ccda1 gt_ggc_mx_lang_tree_node(void*)
>         ???:0


Yeah, clang especially isn't a very rare package. And you're getting
errors that range from frontend crashes to GC crashes. This is very
unlikely to be a compiler bug on a specific file with unusual code.

An issue with your RAM (or some other hardware issue? overheating?
overclocking? it's possible...) is a very likely diagnosis, unfortunately.

Good luck.


-- 
Eli Schwartz


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 236 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-04  0:12 ` [gentoo-user] " Grant Edwards
  2024-09-04  0:39   ` Dale
@ 2024-09-04  7:53   ` Raffaele Belardi
  1 sibling, 0 replies; 55+ messages in thread
From: Raffaele Belardi @ 2024-09-04  7:53 UTC (permalink / raw
  To: gentoo-user



On 4 September 2024 02:12:51 CEST, Grant Edwards <grant.b.edwards@gmail.com> wrote:
>On 2024-09-03, Dale <rdalek1967@gmail.com> wrote:
>
>> I was trying to re-emerge some packages.  The ones I was working on
>> failed with "internal compiler error: Segmentation fault" or similar
>> being the common reason for failing.
>
>In my experience, that usually means failing RAM.  I'd try running
>memtest86 for a day or two.
>
>--
>Grant
>

Running out of memory can also cause a segmentation fault.  Dale, do you have a swap partition or file?  At least three of the packages you mention are quite memory-hungry, depending on your -j options in make.conf.  If this is the case you should see an OOM error in the syslog; is there any hint in it?
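[Editor's note: two quick checks for the out-of-memory theory, sketched
below.  Assumptions: a Linux kernel log readable via dmesg, and the common
rule of thumb of roughly 2 GiB of RAM per parallel compile job for large
C++ packages like webkit-gtk or llvm.]

```shell
# Did the OOM killer fire?  (Reading dmesg may need root.)
dmesg 2>/dev/null | grep -iE 'out of memory|oom.kill' || echo "no OOM lines found"

# Sanity-check MAKEOPTS against installed RAM: ~2 GiB per compile job
# is a conservative rule of thumb for the heavyweight C++ packages.
mem_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
jobs=$(( mem_kib / (2 * 1024 * 1024) ))
if [ "$jobs" -lt 1 ]; then jobs=1; fi
echo "conservative suggestion: MAKEOPTS=\"-j${jobs}\""
```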

Raf


^ permalink raw reply	[flat|nested] 55+ messages in thread

* [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-03 23:28 [gentoo-user] Package compile failures with "internal compiler error: Segmentation fault" Dale
  2024-09-04  0:12 ` [gentoo-user] " Grant Edwards
  2024-09-04  4:26 ` [gentoo-user] " Eli Schwartz
@ 2024-09-04 10:48 ` Dale
  2024-09-04 11:05   ` Frank Steinmetzger
  2024-09-04 11:37   ` Dale
  2024-09-25 20:41 ` Dale
  3 siblings, 2 replies; 55+ messages in thread
From: Dale @ 2024-09-04 10:48 UTC (permalink / raw
  To: gentoo-user

Dale wrote:
> Howdy,
>
> I was trying to re-emerge some packages.  The ones I was working on
> failed with "internal compiler error: Segmentation fault" or similar
> being the common reason for failing.  I did get gcc to compile and
> install.  But other packages are failing, but some are compiling just
> fine.  

<<<SNIP>>>

> Open to ideas.  I hope I don't have to move back to the old rig while
> sorting this out.  O_O  Oh, I updated my old rig this past weekend.  Not
> a single problem on it.  Everything updated just fine. 
>
> Thanks.
>
> Dale
>
> :-)  :-) 
>


Here's the update.  Grant and a couple of others were right.  So were
some of my search results, which varied widely.  Shortly after my reply
to Grant, I shut down the new rig.  I set my old rig back up so I can
watch TV.  Then I booted the ever-handy Ventoy USB stick and ran the
memtest included with the Gentoo Live image.  Within minutes, rut roh.
:-(  The middle section of the test screen went red and spit out a bunch
of "f" characters.  I took a pic but haven't downloaded it yet; I figure
the pattern has been seen before.  The pic is to help get a new memory
stick, since I figure I need to prove it is bad.

Anyway, I removed one stick of memory; it still failed.  Then I
switched sticks.  That one passed the test with no errors.  I put the
original stick back in, but in another slot.  Within minutes, it failed
the same as before.  So, I put the good stick in and booted into the
Gentoo Live image.  I mounted my OS, chrooted in, and did an emerge -ae
@system.  The bad stick is on my desk.  It looks sad.  Like me.

Once the emerge was done, I booted back into the OS itself.  Everything
seems to be working, but I'm back down to 32GBs of memory.  I'm trying
to start an emerge -ae world but have to start a new thread first.  It
claims some things that sound weird, but I don't think it is related to
the memory issue.

I wonder how much fun getting this memory replaced is going to be.  o_O 

Dale

:-)  :-) 


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-04 10:48 ` [gentoo-user] " Dale
@ 2024-09-04 11:05   ` Frank Steinmetzger
  2024-09-04 11:21     ` Dale
  2024-09-04 14:21     ` Grant Edwards
  2024-09-04 11:37   ` Dale
  1 sibling, 2 replies; 55+ messages in thread
From: Frank Steinmetzger @ 2024-09-04 11:05 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 909 bytes --]

Am Wed, Sep 04, 2024 at 05:48:29AM -0500 schrieb Dale:

> I wonder how much fun getting this memory replaced is going to be.  o_O 

I once had a bad stick of Crucial Ballistix DDR3. I think it also started 
with GCC segfaults. So I took a picture of the failing memtest, e-mailed 
it to Crucial and they sent me instructions on what to do.

I keep the packaging of all my tech stuff, so I put the sticks into their 
blister pack (I bought it as a kit, so I had to send in both sticks), put 
in a paper note saying which one was faulty and sent them off to Crucial 
in Ireland. After two weeks or so I got a new kit in the mail. Thankfully 
by that time I had two kits for the maximum of 4 × 8 GiB, so I was able to 
continue using my PC.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

An empty head is easier to nod with.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-04 11:05   ` Frank Steinmetzger
@ 2024-09-04 11:21     ` Dale
  2024-09-04 15:57       ` Peter Humphrey
  2024-09-04 19:09       ` Grant Edwards
  2024-09-04 14:21     ` Grant Edwards
  1 sibling, 2 replies; 55+ messages in thread
From: Dale @ 2024-09-04 11:21 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1382 bytes --]

Frank Steinmetzger wrote:
> Am Wed, Sep 04, 2024 at 05:48:29AM -0500 schrieb Dale:
>
>> I wonder how much fun getting this memory replaced is going to be.  o_O 
> I once had a bad stick of Crucial Ballistix DDR3. I think it also started 
> with GCC segfaults. So I took a picture of the failing memtest, e-mailed 
> that to crucial and they sent me instructions what to do.
>
> I keep the packaging of all my tech stuff, so I put the sticks into their 
> blister (I bought it as a kit, so I had to send in both sticks), put a paper 
> note in for which one was faulty and sent them off to Crucial in Ireland. 
> After two weeks or so I got a new kit in the mail. Thankfully by that time I 
> had two kits for the maximum of 4 × 8 GiB, so I was able to continue using 
> my PC.
>

I ordered another set of memory sticks.  I figure I will have to send
both of these back, which means no memory at all.  I wasn't planning to
go to 128GBs yet, but guess I am now.  Once the new ones come in, I'll
start working on getting them swapped.  I don't recall ever having a
memory stick go bad before.  I've had to reseat one, but never had one
just plain go bad.

I noticed that qtweb package now wants 32GBs of space to build.  Dang, I
feel for someone using a Raspberry Pi.  That thing is getting really
big.  I didn't set up swap, so I had to create a swapfile.
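[Editor's sketch of the swap-file setup for anyone in the same spot.  The
path and size are demo values so the sketch can be tried safely; on a real
system you would use something like SWAPFILE=/swapfile SIZE_MIB=16384 and
run as root.]

```shell
# Create a swap file.  dd rather than fallocate: swap files need their
# blocks really allocated on some filesystems (e.g. btrfs, older XFS).
SWAPFILE="${SWAPFILE:-/tmp/swapfile-demo}"
SIZE_MIB="${SIZE_MIB:-256}"
dd if=/dev/zero of="$SWAPFILE" bs=1M count="$SIZE_MIB" status=none
chmod 600 "$SWAPFILE"            # swap must not be world-readable
mkswap "$SWAPFILE" >/dev/null
# swapon "$SWAPFILE"             # root only; to persist, add to /etc/fstab:
# /swapfile none swap sw 0 0
echo "swap file ready: $SWAPFILE (${SIZE_MIB} MiB)"
```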

Dale

:-)  :-) 

[-- Attachment #2: Type: text/html, Size: 2144 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-04 10:48 ` [gentoo-user] " Dale
  2024-09-04 11:05   ` Frank Steinmetzger
@ 2024-09-04 11:37   ` Dale
  2024-09-04 14:23     ` Grant Edwards
  1 sibling, 1 reply; 55+ messages in thread
From: Dale @ 2024-09-04 11:37 UTC (permalink / raw
  To: gentoo-user

Dale wrote:
>
> Here's the update.  Grant and a couple others were right.  So was some
> of my search results, which varied widely.  Shortly after my reply to
> Grant, I shutdown the new rig.  I set my old rig back up so I can watch
> TV.  Then I booted the ever handy Ventoy USB stick and ran the memtest
> included with the Gentoo Live image.  Within minutes, rut roh.  :-(  The
> middle section of the test screen went red.  It spit out a bunch of "f"
> characters.  I took a pic but haven't downloaded it yet but I figure it
> has been seen before.  Pic is to get new memory stick.  Figure I need to
> prove it is bad.
>
> Anyway, I removed one stick of memory, it still failed.  Then I switched
> sticks.  That one passed the test with no errors.  I put the original
> stick back in but in another slot.  Within minutes, it failed the same
> as before.  So, I put the good stick in and booted into the Gentoo Live
> image.  I mounted my OS, chrooted in and did a emerge -ae @system.  The
> bad stick is on my desk.  It looks sad.  Like me. 
>
> Once the emerge was done, I booted back into the OS itself.  Everything
> seems to be working but I'm back down to 32GBs of memory.  I'm trying to
> start a emerge -ae world but got to start a new thread first.  It claims
> some weird things that sounds weird but I don't think it is related to
> the memory issue. 
>
> I wonder how much fun getting this memory replaced is going to be.  o_O 
>
> Dale
>
> :-)  :-) 
>


I forgot to ask: is there anything else that bad memory could affect?
I'm doing the emerge -e world to make sure no programs were affected,
but what about other stuff?  Could this affect hard drive data, for
example?  Just the things I created while the stick was bad, I'd hope.

Just wondering what I should look out for here. 

Dale

:-)  :-) 


^ permalink raw reply	[flat|nested] 55+ messages in thread

* [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-04 11:05   ` Frank Steinmetzger
  2024-09-04 11:21     ` Dale
@ 2024-09-04 14:21     ` Grant Edwards
  1 sibling, 0 replies; 55+ messages in thread
From: Grant Edwards @ 2024-09-04 14:21 UTC (permalink / raw
  To: gentoo-user

On 2024-09-04, Frank Steinmetzger <Warp_7@gmx.de> wrote:
> Am Wed, Sep 04, 2024 at 05:48:29AM -0500 schrieb Dale:
>
>> I wonder how much fun getting this memory replaced is going to be.  o_O 
>
> I once had a bad stick of Crucial Ballistix DDR3. I think it also started 
> with GCC segfaults. So I took a picture of the failing memtest, e-mailed 
> that to crucial and they sent me instructions what to do.

Yep, I got free replacement from Crucial once years ago also.  It was
pretty easy, but it took a week or two.

--
Grant




^ permalink raw reply	[flat|nested] 55+ messages in thread

* [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-04 11:37   ` Dale
@ 2024-09-04 14:23     ` Grant Edwards
  2024-09-04 15:58       ` Peter Humphrey
  0 siblings, 1 reply; 55+ messages in thread
From: Grant Edwards @ 2024-09-04 14:23 UTC (permalink / raw
  To: gentoo-user

On 2024-09-04, Dale <rdalek1967@gmail.com> wrote:

> I forgot to ask, is there anything else that bad memory could affect? 
> I'm doing the emerge -e world to make sure no programs were affected but
> what about other stuff?  Could this affect hard drive data for example?

Unfortunately, yes.  I have had some failing RAM that resulted in some
files getting corrupted.
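[Editor's note: two hedged ways to look for that kind of damage on
Gentoo.  For installed packages, qcheck from app-portage/portage-utils
compares files on disk against the checksums Portage recorded at install
time (e.g. `qcheck sys-devel/gcc`; check your version's flags).  For
personal data, compare checksums against a known-good copy.  The sketch
below shows the plain-checksum approach with scratch files, since it
works for any data.]

```shell
# Compare files against checksums taken from a known-good source.
# Demo uses scratch files; in practice you would run `md5sum` on the
# good copy (e.g. the old rig) and `md5sum -c` on the suspect copy.
workdir=$(mktemp -d)
cd "$workdir"
echo "known good data" > original.txt
cp original.txt copied.txt
md5sum original.txt copied.txt > sums.md5
md5sum -c sums.md5    # prints "...: OK" per file when nothing is corrupted
```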

--
Grant



^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-04 11:21     ` Dale
@ 2024-09-04 15:57       ` Peter Humphrey
  2024-09-04 19:09       ` Grant Edwards
  1 sibling, 0 replies; 55+ messages in thread
From: Peter Humphrey @ 2024-09-04 15:57 UTC (permalink / raw
  To: gentoo-user

On Wednesday 4 September 2024 12:21:19 BST Dale wrote:

> I wasn't planning to go to 128GBs yet but guess I am now.

I considered doubling up to 128GB a few months ago, but the technical help 
people at Armari (the workstation builder) told me that I'd need to jump 
through a few hoops. Not only would the chips need to have been selected for 
that purpose, but I'd have to do a fair amount of BIOS tuning while running 
performance tests.

I decided not to push my luck.

-- 
Regards,
Peter.





^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-04 14:23     ` Grant Edwards
@ 2024-09-04 15:58       ` Peter Humphrey
  2024-09-04 19:28         ` Dale
  0 siblings, 1 reply; 55+ messages in thread
From: Peter Humphrey @ 2024-09-04 15:58 UTC (permalink / raw
  To: gentoo-user

On Wednesday 4 September 2024 15:23:13 BST Grant Edwards wrote:
> On 2024-09-04, Dale <rdalek1967@gmail.com> wrote:
> > I forgot to ask, is there anything else that bad memory could affect? 

How long have you got?  ;-)

-- 
Regards,
Peter.





^ permalink raw reply	[flat|nested] 55+ messages in thread

* [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-04 11:21     ` Dale
  2024-09-04 15:57       ` Peter Humphrey
@ 2024-09-04 19:09       ` Grant Edwards
  2024-09-04 21:08         ` Frank Steinmetzger
  1 sibling, 1 reply; 55+ messages in thread
From: Grant Edwards @ 2024-09-04 19:09 UTC (permalink / raw
  To: gentoo-user

On 2024-09-04, Dale <rdalek1967@gmail.com> wrote:

> I ordered another set of memory sticks. I figure I will have to send
> them both back which means no memory at all. I wasn't planning to go to
> 128GBs yet but guess I am now. [...]

Good luck.

The last time I had one fail, I needed the machine for work and
couldn't wait for the replacement to ship. So, I went to either
MicroCenter or Best Buy and picked up another pair of SIMMs with the
exact same specs (different brand, of course).  A couple of weeks later,
my replacements arrived. "Yippee!" I say to myself, "twice as much RAM!"

I plugged them in alongside the recently purchased pair. Wouldn't
work. Either pair of SIMMs worked fine by themselves, but the only way
I could get both pairs to work together was to drop the clock speed
down to about a third the speed they were supposed to support.

I read and re-read the motherboard specs and manual. I spent hours
tweaking different memory settings in the BIOS, but no joy.  Now I've
got a backup pair of SIMMs sitting on the shelf.

--
Grant



^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-04 15:58       ` Peter Humphrey
@ 2024-09-04 19:28         ` Dale
  0 siblings, 0 replies; 55+ messages in thread
From: Dale @ 2024-09-04 19:28 UTC (permalink / raw
  To: gentoo-user

Peter Humphrey wrote:
> On Wednesday 4 September 2024 15:23:13 BST Grant Edwards wrote:
>> On 2024-09-04, Dale <rdalek1967@gmail.com> wrote:
>>> I forgot to ask, is there anything else that bad memory could affect? 
> How long have you got?  ;-)
>


Well, the new files I downloaded, I can let Qbittorrent check what it
downloaded.  If it passes the tests, then I can copy those over and
replace what I copied over in the last week or so.  I think this stick
of memory only went bad in the past couple days.  Shouldn't be many
files.  I'd hope reading wouldn't corrupt a file. 

Emerge -e world is still going strong.  Still wondering about problem in
other thread tho. 

Dale

:-)  :-) 


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-04 19:09       ` Grant Edwards
@ 2024-09-04 21:08         ` Frank Steinmetzger
  2024-09-04 21:22           ` Grant Edwards
  0 siblings, 1 reply; 55+ messages in thread
From: Frank Steinmetzger @ 2024-09-04 21:08 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1322 bytes --]

Am Wed, Sep 04, 2024 at 07:09:43PM -0000 schrieb Grant Edwards:

> On 2024-09-04, Dale <rdalek1967@gmail.com> wrote:
> 
> > I ordered another set of memory sticks. I figure I will have to send
> > them both back which means no memory at all. I wasn't planning to go to
> > 128GBs yet but guess I am now. [...]
> 
> Good luck.
> 
> […]
> I plugged them in alongside the recently purchased pair. Wouldn't
> work. Either pair of SIMMs worked fine by themselves, but the only way
> I could get both pairs to work together was to drop the clock speed
> down to about a third the speed they were supposed to support.

Indeed, that was my first thought when Dale mentioned getting another pair. 
I don’t know if it’s true for all Ryzen chips, but if you use four sticks, 
they may not run at the maximum speed advertised by AMD (not counting 
overclocking). If you kept the settings on Auto you shouldn’t get problems, 
but the RAM may run slower then. OTOH, since you don’t do hard-core gaming 
or scientific number-crunching, it is unlikely you will notice a difference 
in your every-day computing.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

How can I know what I’m thinking before I hear what I’m saying?

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-04 21:08         ` Frank Steinmetzger
@ 2024-09-04 21:22           ` Grant Edwards
  2024-09-04 21:53             ` Dale
  0 siblings, 1 reply; 55+ messages in thread
From: Grant Edwards @ 2024-09-04 21:22 UTC (permalink / raw
  To: gentoo-user

On 2024-09-04, Frank Steinmetzger <Warp_7@gmx.de> wrote:
> Am Wed, Sep 04, 2024 at 07:09:43PM -0000 schrieb Grant Edwards:
>> […]
>> I plugged them in alongside the recently purchased pair. Wouldn't
>> work. Either pair of SIMMs worked fine by themselves, but the only way
>> I could get both pairs to work together was to drop the clock speed
>> down to about a third the speed they were supposed to support.
>
> Indeed that was my first thought when Dale mentioned getting another
> pair. I don’t know if it’s true for all Ryzen chips, but if you use
> four sticks, they may not work at the maximum speed advertised by
> AMD (not counting in overlcocking). If you kept the settings to Auto
> you shouldn’t get problems, but RAM may work slower then.

Yeah, I thought auto should work, but it didn't. I had to manually
lower the RAM clock speed to get all four to work at the same
time. The BIOS screens were a bit mind-boggling (very high on
graphics, dazzle, and flash -- very low on usability). So it's
possible I didn't really have auto mode correctly enabled.

> OTOH, since you don’t do hard-core gaming or scientific
> number-crunching, it is unlikely you will notice a difference in
> your every-day computing.

In my case I compared an "emerge" that took several minutes, and it
took significantly longer with the lower RAM clock speed. I decided I
was better off with fewer GB of faster RAM.




^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-04 21:22           ` Grant Edwards
@ 2024-09-04 21:53             ` Dale
  2024-09-04 22:07               ` Grant Edwards
  0 siblings, 1 reply; 55+ messages in thread
From: Dale @ 2024-09-04 21:53 UTC (permalink / raw
  To: gentoo-user

Grant Edwards wrote:
> On 2024-09-04, Frank Steinmetzger <Warp_7@gmx.de> wrote:
>> Am Wed, Sep 04, 2024 at 07:09:43PM -0000 schrieb Grant Edwards:
>>> […]
>>> I plugged them in alongside the recently purchased pair. Wouldn't
>>> work. Either pair of SIMMs worked fine by themselves, but the only way
>>> I could get both pairs to work together was to drop the clock speed
>>> down to about a third the speed they were supposed to support.
>> Indeed that was my first thought when Dale mentioned getting another
>> pair. I don’t know if it’s true for all Ryzen chips, but if you use
>> four sticks, they may not work at the maximum speed advertised by
>> AMD (not counting overclocking). If you kept the settings to Auto
>> you shouldn’t get problems, but RAM may work slower then.
> Yea, I thought auto should work, but it didn't. I had to manually
> lower the RAM clock speed to get all four to work at the same
> time. The BIOS screens were a bit mind-boggling (very high on
> graphics, dazzle, and flash -- very low on usability). So it's
> possible I didn't really have auto mode correctly enabled.
>
>> OTOH, since you don’t do hard-core gaming or scientific
>> number-crunching, it is unlikely you will notice a difference in
>> your every-day computing.
> In my case I compared an "emerge" that took several minutes, and it
> took significantly longer with the lower RAM clock speed. I decided I
> was better off with fewer GB of faster RAM.
>
>

At one point, I looked for a set of four sticks of the memory.  I
couldn't find any.  They only come in sets of two.  I read somewhere
that the mobo expects each pair to be matched.  Thing is, things
change.  My mobo may work differently, or maybe they figured out some
better coding.  The downside: my mobo is somewhat older tech. 

Given the number of components on each stick, it's pretty amazing they
work at all.  I recall our conversation about the number of transistors
on a CPU.  I suspect memory is right up there with it.  32GBs on a stick
is a lot, and I think they have larger sticks too.  That means even more. 
Also, mine puts on a fancy light show.  It has LEDs that change colors. 
Annoying at times.  Needs an off switch.  LOL

My emerge -e world is still chugging along.  Not a single failure yet. 
I started qBittorrent and it complained about files.  I did run fsck on
the file system tho, and it did fix a few things, like it always does. 
It always finds something to improve on.  I told QB to force a recheck,
and within minutes it was off to the races again.  It appears to be
fixing any bad files.  Sort of neat that it can do that. 

Dale

:-)  :-) 

P. S.  With my TV not connected, my monitor situation went weird.  I
guess when I get the new rig back to normal, it will work like it did
before. 


^ permalink raw reply	[flat|nested] 55+ messages in thread

* [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-04 21:53             ` Dale
@ 2024-09-04 22:07               ` Grant Edwards
  2024-09-04 22:14                 ` Dale
  2024-09-04 22:38                 ` Michael
  0 siblings, 2 replies; 55+ messages in thread
From: Grant Edwards @ 2024-09-04 22:07 UTC (permalink / raw
  To: gentoo-user

On 2024-09-04, Dale <rdalek1967@gmail.com> wrote:

> At one point, I looked for a set of four sticks of the memory.  I
> couldn't find any.  They only come in sets of two.  I read somewhere
> that the mobo expects each pair to be matched.

Yep, that's definitely how it was supposed to work. I fully expected
my two (identically spec'ed) sets of two to work together. All the
documentation I could find said it should. It just didn't. :/

--
Grant



^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-04 22:07               ` Grant Edwards
@ 2024-09-04 22:14                 ` Dale
  2024-09-04 22:38                 ` Michael
  1 sibling, 0 replies; 55+ messages in thread
From: Dale @ 2024-09-04 22:14 UTC (permalink / raw
  To: gentoo-user

Grant Edwards wrote:
> On 2024-09-04, Dale <rdalek1967@gmail.com> wrote:
>
>> At one point, I looked for a set of four sticks of the memory.  I
>> couldn't find any.  They only come in sets of two.  I read somewhere
>> that the mobo expects each pair to be matched.
> Yep, that's definitely how it was supposed to work. I fully expected
> my two (identically spec'ed) sets of two work. All the documentation I
> could find said it should. It just didn't. :/
>
> --
> Grant
>

Well, when I get them all in, I'll post back how it went, unless I P. S.
it somewhere.  It could be a while.  They claim they will be here
Friday.  Who knows.  Then comes the testing part.  Of course, testing
the set I had didn't work out either.  o_O

QB is fixing quite a few files tho.  Hmmmm.  It could take years for me
to watch all those videos.  O_O

Dale

:-)  :-) 


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-04 22:07               ` Grant Edwards
  2024-09-04 22:14                 ` Dale
@ 2024-09-04 22:38                 ` Michael
  2024-09-05  0:11                   ` Dale
  2024-09-05  9:08                   ` Frank Steinmetzger
  1 sibling, 2 replies; 55+ messages in thread
From: Michael @ 2024-09-04 22:38 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1521 bytes --]

On Wednesday 4 September 2024 23:07:17 BST Grant Edwards wrote:
> On 2024-09-04, Dale <rdalek1967@gmail.com> wrote:
> > At one point, I looked for a set of four sticks of the memory.  I
> > couldn't find any.  They only come in sets of two.  I read somewhere
> > that the mobo expects each pair to be matched.
> 
> Yep, that's definitely how it was supposed to work. I fully expected
> my two (identically spec'ed) sets of two work. All the documentation I
> could find said it should. It just didn't. :/
> 
> --
> Grant

Often you have to dial down latency and/or increase voltage when you add more 
RAM modules.  It is a disappointment when faster memory has to be slowed down 
because those extra two sticks you bought on eBay at a good price are of a 
slightly lower spec.

Some MoBos are more tolerant than others.  I have had systems which failed to 
work when the additional RAM modules were not part of a matching kit.  I've 
had others which would work no matter what you threw at them.  High-performance 
MoBos with highly strung specs tend to require lowering the frequency or 
increasing latency when you add more RAM.

Regarding Dale's question, which has already been answered - yes, anything the 
bad memory has touched is suspected of corruption.  Without ECC RAM a dodgy 
module can cause a lot of damage before it is discovered.  This is why I 
*always* run memtest86+ overnight whenever I get a new system or add new RAM.  
I've only had one fail over the years, but better safe than sorry.  ;-)

[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-04 22:38                 ` Michael
@ 2024-09-05  0:11                   ` Dale
  2024-09-05  8:05                     ` Michael
  2024-09-05  9:08                   ` Frank Steinmetzger
  1 sibling, 1 reply; 55+ messages in thread
From: Dale @ 2024-09-05  0:11 UTC (permalink / raw
  To: gentoo-user

Michael wrote:
> On Wednesday 4 September 2024 23:07:17 BST Grant Edwards wrote:
>> On 2024-09-04, Dale <rdalek1967@gmail.com> wrote:
>>> At one point, I looked for a set of four sticks of the memory.  I
>>> couldn't find any.  They only come in sets of two.  I read somewhere
>>> that the mobo expects each pair to be matched.
>> Yep, that's definitely how it was supposed to work. I fully expected
>> my two (identically spec'ed) sets of two work. All the documentation I
>> could find said it should. It just didn't. :/
>>
>> --
>> Grant
> Often you have to dial down latency and/or increase voltage when you add more 
> RAM modules.  It is a disappointment when faster memory has to be slowed down 
> because those extra two sticks you bought on ebay at a good price, are of a 
> slightly lower spec.
>
> Some MoBos are more tolerant than others.  I have had systems which failed to 
> work when the additional RAM modules were not part of a matching kit.  I've 
> had others which would work no matter what you threw at them.  High 
> performance MoBos which have highly strung specs, tend to require lowering 
> frequency/increasing latency when you add more RAM.
>
> Regarding Dale's question, which has already been answered - yes, anything the 
> bad memory has touched is suspect of corruption.  Without ECC RAM a dodgy 
> module can cause a lot of damage before it is discovered.  This is why I 
> *always* run memtest86+ overnight whenever I get a new system, or add new RAM.  
> I've only had one fail over the years, but I'd better be safe than sorry.  ;-)


When I built this rig, I first booted the Gentoo Live boot image and
just played around a bit, mostly to let the CPU grease settle in.  Then
I ran memtest through a whole pass until it said it passed.  Only then
did I start working on the install.  The rig had run without issue
until I noticed gkrellm temps were stuck; they weren't updating as
temps changed.  So I closed gkrellm, but then it wouldn't open again. 
I ran it in a console and saw the error about a missing module or
something.  I tried to figure out that problem, which led to seg fault
errors.  Well, that led to this thread and the discovery of a bad
memory stick.  I check gkrellm often, so the bad stick was most likely
caught within a day, probably only a couple hours or so.  The only
reason it might have gone longer is that the CPU was mostly idle; I
watch more often when the CPU is busy, during updates etc. 

I just hope I can put in all four sticks and have them work once the
bad set is replaced.  I miss having 64GBs of memory already. 

Oh, QB is redoing a lot of files.  It seems it picked up on some . . .
issues.  :/

Dale

:-)  :-) 


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-05  0:11                   ` Dale
@ 2024-09-05  8:05                     ` Michael
  2024-09-05  8:36                       ` Dale
  0 siblings, 1 reply; 55+ messages in thread
From: Michael @ 2024-09-05  8:05 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1243 bytes --]

On Thursday 5 September 2024 01:11:13 BST Dale wrote:

> When I built this rig, I first booted the Gentoo Live boot image and
> just played around a bit.  Mostly to let the CPU grease settle in a
> bit.  Then I ran memtest through a whole test until it said it passed. 
> Only then did I start working on the install.  The rig has ran without
> issue until I noticed gkrellm temps were stuck.  They wasn't updating as
> temps change.  So, I closed gkrellm but then it wouldn't open again. 
> Ran it in a console and saw the error about missing module or
> something.  Then I tried to figure out that problem which lead to seg
> fault errors.  Well, that lead to the thread and the discovery of a bad
> memory stick.  I check gkrellm often so it was most likely less than a
> day.  Could have been only hours.  Knowing I check gkrellm often, it was
> likely only a matter of a couple hours or so.  The only reason it might
> have went longer, the CPU was mostly idle.  I watch more often when the
> CPU is busy, updates etc. 

Ah!  It seems it died while in active service.  :-)

There's no way to protect against this kind of failure in real time, short of 
running a server-spec board with ECC RAM.  An expensive proposition for a 
home PC.

[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-05  8:05                     ` Michael
@ 2024-09-05  8:36                       ` Dale
  2024-09-05  8:42                         ` Michael
  0 siblings, 1 reply; 55+ messages in thread
From: Dale @ 2024-09-05  8:36 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 2844 bytes --]

Michael wrote:
> On Thursday 5 September 2024 01:11:13 BST Dale wrote:
>
>> When I built this rig, I first booted the Gentoo Live boot image and
>> just played around a bit.  Mostly to let the CPU grease settle in a
>> bit.  Then I ran memtest through a whole test until it said it passed. 
>> Only then did I start working on the install.  The rig has ran without
>> issue until I noticed gkrellm temps were stuck.  They wasn't updating as
>> temps change.  So, I closed gkrellm but then it wouldn't open again. 
>> Ran it in a console and saw the error about missing module or
>> something.  Then I tried to figure out that problem which lead to seg
>> fault errors.  Well, that lead to the thread and the discovery of a bad
>> memory stick.  I check gkrellm often so it was most likely less than a
>> day.  Could have been only hours.  Knowing I check gkrellm often, it was
>> likely only a matter of a couple hours or so.  The only reason it might
>> have went longer, the CPU was mostly idle.  I watch more often when the
>> CPU is busy, updates etc. 
> Ah!  It seems it died while in active service.  :-)
>
> There's no way to protect against this kind of failure in real time, short of 
> running a server spec. board with ECC RAM.  An expensive proposition for a 
> home PC.


Yea, this mobo doesn't support that.  It does seem the files for
qBittorrent (QB) have some serious issues.  I got it to recheck them
all, and almost all of them had something QB detected that made it
download them again.  I think it has checksums for chunks of a file as
well as a checksum for the entire file.  I figure it got a mismatch for
the whole file and went to work.  I wish I could have just let it find
the bad chunks instead of redoing the whole file.  Some torrents are
hard to get. 

I've run fsck before mounting on every file system so far.  I ran it on
the OS file systems while booted from the Live image; the others I just
did before mounting.  I realize this doesn't mean the files themselves
are OK, but at least the file system under them is.  I'm not sure how
to know if any damage was done between when the memory stick failed and
when I started the repair process.  I could find the ones I copied from
place to place and check them, but other than watching every single
video, I'm not sure how to know if one is bad or not.  So far,
thumbnails work.  o_O

If Amazon is going to get that memory here Friday, it better get a move
on.  It hasn't shipped yet.  Amazon tends to ship from warehouse to
warehouse and get it close before actually handing it off to USPS,
FedEx or someone.  It says Friday, so it will likely be here Friday.  I
haven't had one late yet.  Had them early tho.  ;-)  I actually paid for
shipping on this one.  I really should get Prime. 

Come on memory sticks.  :-D

Dale

:-)  :-) 

[-- Attachment #2: Type: text/html, Size: 3735 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-05  8:36                       ` Dale
@ 2024-09-05  8:42                         ` Michael
  2024-09-05 10:53                           ` Dale
  0 siblings, 1 reply; 55+ messages in thread
From: Michael @ 2024-09-05  8:42 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 941 bytes --]

On Thursday 5 September 2024 09:36:36 BST Dale wrote:

> I've ran fsck before mounting on every file system so far.  I ran it on
> the OS file systems while booted from the Live image.  The others I just
> did before mounting.  I realize this doesn't mean the files themselves
> are OK but at least the file system under them is OK.

This should put your mind mostly at rest: at least the OS structure is OK, 
and the fault wasn't active for too long.


> I'm not sure how
> to know if any damage was done between when the memory stick failed and
> when I started the repair process.  I could find the ones I copied from
> place to place and check them but other than watching every single
> video, I'm not sure how to know if one is bad or not.  So far,
> thumbnails work.  o_O

If you have a copy of these files on another machine, you can run rsync with 
--checksum.  This will only (re)copy the file over if the checksum is 
different.


[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-04 22:38                 ` Michael
  2024-09-05  0:11                   ` Dale
@ 2024-09-05  9:08                   ` Frank Steinmetzger
  2024-09-05  9:36                     ` Michael
  1 sibling, 1 reply; 55+ messages in thread
From: Frank Steinmetzger @ 2024-09-05  9:08 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 848 bytes --]

Am Wed, Sep 04, 2024 at 11:38:01PM +0100 schrieb Michael:

> Some MoBos are more tolerant than others.

> Regarding Dale's question, which has already been answered - yes, anything the 
> bad memory has touched is suspect of corruption.  Without ECC RAM a dodgy 
> module can cause a lot of damage before it is discovered.

Actually I was wondering: DDR5 has built-in ECC. But that’s not the same as the 
server-grade stuff, because it all happens inside the module with no 
communication to the CPU or the OS. So what is the point of it if it still 
causes errors like in Dale’s case?

Maybe that it only catches 1-bit errors, but Dale has more broken bits?

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Says the zero to the eight: “nice belt”.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-05  9:08                   ` Frank Steinmetzger
@ 2024-09-05  9:36                     ` Michael
  2024-09-05 10:01                       ` Frank Steinmetzger
  0 siblings, 1 reply; 55+ messages in thread
From: Michael @ 2024-09-05  9:36 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1480 bytes --]

On Thursday 5 September 2024 10:08:08 BST Frank Steinmetzger wrote:
> Am Wed, Sep 04, 2024 at 11:38:01PM +0100 schrieb Michael:
> > Some MoBos are more tolerant than others.
> > 
> > Regarding Dale's question, which has already been answered - yes, anything
> > the bad memory has touched is suspect of corruption.  Without ECC RAM a
> > dodgy module can cause a lot of damage before it is discovered.
> 
> Actually I was wondering: DDR5 has built-in ECC. But that’s not the same as
> the server-grade stuff, because it all happens inside the module with no
> communication to the CPU or the OS. So what is the point of it if it still
> causes errors like in Dale’s case?
> 
> Maybe that it only catches 1-bit errors, but Dale has more broken bits?

Or it could be Dale's kit is DDR4?

Either way, as you say, DDR5 is manufactured with On-Die ECC capable of 
correcting a single-bit error, necessary because DDR5 chip density has 
increased to the point where single-bit flips have become unavoidable.  It 
also allows manufacturers to ship chips which would otherwise fail the JEDEC 
specification.  On-Die ECC will only correct bit flips *within* the memory 
chip.

Conventional Side-Band ECC, with one additional chip dedicated to error 
correction, is capable of correcting errors while data is being moved by the 
memory controller between the memory module and the CPU/GPU.  It performs much 
more of the heavy lifting, and this is why ECC memory is slower.

[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-05  9:36                     ` Michael
@ 2024-09-05 10:01                       ` Frank Steinmetzger
  2024-09-05 10:59                         ` Dale
  0 siblings, 1 reply; 55+ messages in thread
From: Frank Steinmetzger @ 2024-09-05 10:01 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 538 bytes --]

Am Thu, Sep 05, 2024 at 10:36:19AM +0100 schrieb Michael:

> > Maybe that it only catches 1-bit errors, but Dale has more broken bits?
> 
> Or it could be Dale's kit is DDR4?

You may be right.  We talked about AM5 at great length during the concept 
phase, and then I think I actually asked back, because in one mail he 
mentioned having bought an AM4 CPU (5000 series). :D

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Damn Chinese keyboald dlivel!

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-05  8:42                         ` Michael
@ 2024-09-05 10:53                           ` Dale
  2024-09-05 11:08                             ` Michael
  0 siblings, 1 reply; 55+ messages in thread
From: Dale @ 2024-09-05 10:53 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1495 bytes --]

Michael wrote:
> On Thursday 5 September 2024 09:36:36 BST Dale wrote:
>
>> I've ran fsck before mounting on every file system so far.  I ran it on
>> the OS file systems while booted from the Live image.  The others I just
>> did before mounting.  I realize this doesn't mean the files themselves
>> are OK but at least the file system under them is OK.
> This could put your mind mostly at rest, at least the OS structure is OK and 
> the error was not running for too long.
>

That does help. 


>> I'm not sure how
>> to know if any damage was done between when the memory stick failed and
>> when I started the repair process.  I could find the ones I copied from
>> place to place and check them but other than watching every single
>> video, I'm not sure how to know if one is bad or not.  So far,
>> thumbnails work.  o_O
> If you have a copy of these files on another machine, you can run rsync with 
> --checksum.  This will only (re)copy the file over if the checksum is 
> different.
>


I made my backups last weekend.  I'm sure it was working fine then;
after all, it would have failed to compile packages if it was bad.  I'm
thinking about checking against that copy like you mentioned, but I have
other files I've added since then.  I figure if I remove the delete
option, that will solve that: it can't compare those, but it can leave
them be. 

I think I'm going to wait until the new memory comes in before I do
anything tho.  Including making backups. 

Dale

:-)  :-) 

[-- Attachment #2: Type: text/html, Size: 2760 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-05 10:01                       ` Frank Steinmetzger
@ 2024-09-05 10:59                         ` Dale
  0 siblings, 0 replies; 55+ messages in thread
From: Dale @ 2024-09-05 10:59 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 839 bytes --]

Frank Steinmetzger wrote:
> Am Thu, Sep 05, 2024 at 10:36:19AM +0100 schrieb Michael:
>
>>> Maybe that it only catches 1-bit errors, but Dale has more broken bits?
>> Or it could be Dale's kit is DDR4?
> You may be right. We talked about AM5 at great length during the concept 
> phase and then I think I actually asked back because in one mail he 
> mentioned to have bought an AM4 CPU (5000 series). :D

I looked, it is DDR4.  G.Skill F4-3600C18D-64GTRS is the brand and part
number.  I picked it because I've had that brand before and never had
trouble and it was on the ASUS list.  I did switch down from AM5 to
AM4.  AM5 doesn't have enough PCIe slots for me. 


>
> Damn Chinese keyboald dlivel!

That is familiar.  I'm starting to get used to this keyboard.  Sort of. 
I see things like that often tho. 

Dale

:-)  :-) 

[-- Attachment #2: Type: text/html, Size: 2078 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-05 10:53                           ` Dale
@ 2024-09-05 11:08                             ` Michael
  2024-09-05 11:30                               ` Dale
  0 siblings, 1 reply; 55+ messages in thread
From: Michael @ 2024-09-05 11:08 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1828 bytes --]

On Thursday 5 September 2024 11:53:16 BST Dale wrote:
> Michael wrote:
> > On Thursday 5 September 2024 09:36:36 BST Dale wrote:
> >> I've ran fsck before mounting on every file system so far.  I ran it on
> >> the OS file systems while booted from the Live image.  The others I just
> >> did before mounting.  I realize this doesn't mean the files themselves
> >> are OK but at least the file system under them is OK.
> > 
> > This could put your mind mostly at rest, at least the OS structure is OK
> > and the error was not running for too long.
> 
> That does help. 
> 
> >> I'm not sure how
> >> to know if any damage was done between when the memory stick failed and
> >> when I started the repair process.  I could find the ones I copied from
> >> place to place and check them but other than watching every single
> >> video, I'm not sure how to know if one is bad or not.  So far,
> >> thumbnails work.  o_O
> > 
> > If you have a copy of these files on another machine, you can run rsync
> > with --checksum.  This will only (re)copy the file over if the checksum
> > is different.
> 
> I made my backups last weekend.  I'm sure it was working fine then. 
> After all, it would have failed to compile packages if it was bad.  I'm
> thinking about checking against that copy like you mentioned but I have
> other files I've added since then.  I figure if I remove the delete
> option, that will solve that.  It can't compare but it can leave them be. 

Use rsync with:

 --checksum

and

 --dry-run 

Then it will compare files in situ without doing anything else.

If you have a directory or only a few files it is easy and quick to run.

You can also run find to identify which files were changed during the period 
you were running with the dodgy RAM.  Thankfully you didn't run for too long 
before you spotted it.
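That find pass could look like this, a sketch assuming the suspect window is
known (the dates and file names here are hypothetical placeholders):

```shell
set -eu
# Identify files modified during the suspect window: after the last
# known-good backup, before the bad stick came out.  Demo data below;
# in practice point find at the real tree instead of a temp directory.
d=$(mktemp -d)
touch -d '2024-09-01 12:00' "$d/changed-in-window.mkv"
touch -d '2024-08-01 12:00' "$d/older-file.mkv"
# -newermt DATE matches files modified after DATE; negating the second
# test bounds the window from above.  Dates are placeholders.
find "$d" -type f -newermt '2024-08-31' ! -newermt '2024-09-04' -print
rm -rf "$d"
```

Only the file touched inside the window is listed, which narrows the set of
files worth re-checking against a backup.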

[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-05 11:08                             ` Michael
@ 2024-09-05 11:30                               ` Dale
  2024-09-05 18:55                                 ` Frank Steinmetzger
  0 siblings, 1 reply; 55+ messages in thread
From: Dale @ 2024-09-05 11:30 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1735 bytes --]

Michael wrote:
> On Thursday 5 September 2024 11:53:16 BST Dale wrote:
>>
>> I made my backups last weekend.  I'm sure it was working fine then. 
>> After all, it would have failed to compile packages if it was bad.  I'm
>> thinking about checking against that copy like you mentioned but I have
>> other files I've added since then.  I figure if I remove the delete
>> option, that will solve that.  It can't compare but it can leave them be. 
> Use rsync with:
>
>  --checksum
>
> and
>
>  --dry-run 
>
> Then it will compare files in situ without doing anything else.
>
> If you have a directory or only a few files it is easy and quick to run.
>
> You can also run find to identify which files were changed during the period 
> you were running with the dodgy RAM.  Thankfully you didn't run for too long 
> before you spotted it.


I have just shy of 45,000 files in 780 directories or so, and almost
6,000 in another.  Some files are small; some are several GBs.  Thing
is, backups go from a single parent directory, if you will.  Plus, I'd
want to compare them all anyway, just to be sure.

I also went back and got QB to do a manual file test.  It seems to be
doing better.  There are over 4,000 torrents, some 32TBs of data, so I
think it's going to take a while.  o_^  As it is, I set the speed to
tiny amounts until I get this sorted.  Don't want to accidentally share
a bad file. 

Dale

:-)  :-) 

P. S.  My trees need some rain today.  It's getting very dry.  I been
watering some trees.  My Swamp Chestnut trees are massive.  Hate to lose
those things.  Likely 100 years old according to my tree guru.  In the
fall, I wear a construction helmet.  Those things hurt when they fall
and hit my head. 

[-- Attachment #2: Type: text/html, Size: 2527 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-05 11:30                               ` Dale
@ 2024-09-05 18:55                                 ` Frank Steinmetzger
  2024-09-05 22:06                                   ` Michael
  0 siblings, 1 reply; 55+ messages in thread
From: Frank Steinmetzger @ 2024-09-05 18:55 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1508 bytes --]

Am Thu, Sep 05, 2024 at 06:30:54AM -0500 schrieb Dale:

> > Use rsync with:
> >
> >  --checksum
> >
> > and
> >
> >  --dry-run 

I suggest calculating a checksum file from your active files. Then you don’t 
have to read the files over and over for each backup iteration you compare 
it against.

> > You can also run find to identify which files were changed during the period 
> > you were running with the dodgy RAM.  Thankfully you didn't run for too long 
> > before you spotted it.

This. No need to check everything you ever stored. Just the most recent 
stuff, or at maximum, since you got the new PC.

> I have just shy of 45,000 files in 780 directories or so.  Almost 6,000
> in another.  Some files are small, some are several GBs or so.  Thing
> is, backups go from a single parent directory if you will.  Plus, I'd
> want to compare them all anyway.  Just to be sure.

I acquired the habit of writing checksum files in all my media directories, 
such as music albums, TV series and such, whenever I create one such 
directory.  That way, even years later I can still check whether the files 
are intact.  I have actually experienced broken music files from time to time 
(mostly on the MicroSD card in my tablet).  So with checksum files, I can 
verify which file is bad and which (on another machine) is still good.
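The per-directory habit described above can be sketched like this (the demo
uses a throwaway directory and made-up file names; in practice you would run
the two sha256sum lines inside each album or series directory as you create
it):

```shell
set -eu
# Write a checksum file once when the directory is created, then
# verify against it whenever the data becomes suspect.
d=$(mktemp -d)
printf 'episode one\n' > "$d/ep1.mkv"
printf 'episode two\n' > "$d/ep2.mkv"
( cd "$d" && sha256sum ep1.mkv ep2.mkv > SHA256SUMS )  # create once
( cd "$d" && sha256sum -c SHA256SUMS )  # verify later: "ep1.mkv: OK" etc.
rm -rf "$d"
```

A file whose contents changed since the checksums were written is reported as
FAILED, which tells you exactly which copy to replace from a good machine.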

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Lettered up the mixes?

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-05 18:55                                 ` Frank Steinmetzger
@ 2024-09-05 22:06                                   ` Michael
  2024-09-06  0:43                                     ` Dale
  2024-09-07 22:48                                     ` Wols Lists
  0 siblings, 2 replies; 55+ messages in thread
From: Michael @ 2024-09-05 22:06 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1857 bytes --]

On Thursday 5 September 2024 19:55:56 BST Frank Steinmetzger wrote:
> Am Thu, Sep 05, 2024 at 06:30:54AM -0500 schrieb Dale:
> > > Use rsync with:
> > >  --checksum
> > > 
> > > and
> > > 
> > >  --dry-run
> 
> I suggest calculating a checksum file from your active files. Then you don’t
> have to read the files over and over for each backup iteration you compare
> it against.
> 
> > > You can also run find to identify which files were changed during the
> > > period you were running with the dodgy RAM.  Thankfully you didn't run
> > > for too long before you spotted it.
> 
> This. No need to check everything you ever stored. Just the most recent
> stuff, or at maximum, since you got the new PC.
> 
> > I have just shy of 45,000 files in 780 directories or so.  Almost 6,000
> > in another.  Some files are small, some are several GBs or so.  Thing
> > is, backups go from a single parent directory if you will.  Plus, I'd
> > want to compare them all anyway.  Just to be sure.
> 
> > I aqcuired the habit of writing checksum files in all my media directories
> such as music albums, tv series and such, whenever I create one such
> directory. That way even years later I can still check whether the files are
> intact. I actually experienced broken music files from time to time (mostly
> on the MicroSD card in my tablet). So with checksum files, I can verify
> which file is bad and which (on another machine) is still good.

There is also dm-verity for a more involved solution.  I think for Dale 
something like this should work:

find path-to-directory/ -type f | xargs md5sum > digest.log

then to compare with a backup of the same directory you could run:

md5sum -c digest.log | grep FAILED

Someone more knowledgeable should be able to knock out some clever python 
script to do the same at speed.
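One caveat worth noting: piping plain `find` output through `xargs` splits on 
whitespace, so file names containing spaces (which TV-series paths usually do) 
would break the hashing. A null-delimited variant avoids that; this is a 
minimal sketch assuming GNU findutils and coreutils:

```shell
# Hash every file under the tree, safely handling spaces in file names.
find path-to-directory/ -type f -print0 | xargs -0 md5sum > digest.log

# Verify later; --quiet prints only the files that fail the check.
md5sum --quiet -c digest.log
```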

[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-05 22:06                                   ` Michael
@ 2024-09-06  0:43                                     ` Dale
  2024-09-06 12:21                                       ` Michael
  2024-09-07 22:48                                     ` Wols Lists
  1 sibling, 1 reply; 55+ messages in thread
From: Dale @ 2024-09-06  0:43 UTC (permalink / raw
  To: gentoo-user

Michael wrote:
> On Thursday 5 September 2024 19:55:56 BST Frank Steinmetzger wrote:
>> Am Thu, Sep 05, 2024 at 06:30:54AM -0500 schrieb Dale:
>>>> Use rsync with:
>>>>  --checksum
>>>>
>>>> and
>>>>
>>>>  --dry-run
>> I suggest calculating a checksum file from your active files. Then you don’t
>> have to read the files over and over for each backup iteration you compare
>> it against.
>>
>>>> You can also run find to identify which files were changed during the
>>>> period you were running with the dodgy RAM.  Thankfully you didn't run
>>>> for too long before you spotted it.
>> This. No need to check everything you ever stored. Just the most recent
>> stuff, or at maximum, since you got the new PC.
>>
>>> I have just shy of 45,000 files in 780 directories or so.  Almost 6,000
>>> in another.  Some files are small, some are several GBs or so.  Thing
>>> is, backups go from a single parent directory if you will.  Plus, I'd
>>> want to compare them all anyway.  Just to be sure.
>> I acquired the habit of writing checksum files in all my media directories
>> such as music albums, tv series and such, whenever I create one such
>> directory. That way even years later I can still check whether the files are
>> intact. I actually experienced broken music files from time to time (mostly
>> on the MicroSD card in my tablet). So with checksum files, I can verify
>> which file is bad and which (on another machine) is still good.
> There is also dm-verity for a more involved solution.  I think for Dale 
> something like this should work:
>
> find path-to-directory/ -type f | xargs md5sum > digest.log
>
> then to compare with a backup of the same directory you could run:
>
> md5sum -c digest.log | grep FAILED
>
> Someone more knowledgeable should be able to knock out some clever python 
> script to do the same at speed.


I'll be honest here, on two points.  I'd really like to be able to do
this but I have no idea where to or how to even start.  My setup for
series type videos.  In a parent directory, where I'd like a tool to
start, is about 600 directories.  On a few occasions, there is another
directory inside that one.  That directory under the parent is the name
of the series.  Sometimes I have a sub directory that has temp files;
new files I have yet to rename, considering replacing in the main series
directory etc.  I wouldn't mind having a file with a checksum for each
video in the top directory, and even one in the sub directory.  As a
example.

TV_Series/

├── 77 Sunset Strip (1958)
│   └── torrent
├── Adam-12 (1968)
├── Airwolf (1984)


I got a part of the output of tree.  The directory 'torrent' under 77
Sunset is temporary usually but sometimes a directory is there for
videos about the making of a video, history of it or something.  What
I'd like, a program that would generate checksums for each file under
say 77 Sunset and it could skip or include the directory under it. 
Might be best if I could switch it on or off.  Obviously, I may not want
to do this for my whole system.  I'd like to be able to target
directories.  I have another large directory, let's say not a series but
sometimes has remakes, that I'd also like to do.  It is kinda set up
like the above, parent directory with a directory underneath and on
occasion one more under that. 

One thing I worry about is not just memory problems, drive failure but
also just some random error or even bit rot.  Some of these files are
rarely changed or even touched.  I'd like a way to detect problems and
there may even be a software tool that does this with some setup,
reminds me of Kbackup where you can select what to backup or leave out
on a directory or even individual file level. 

While this could likely be done with a script of some kind, my scripting
skills are minimal at best.  I suspect there is software out there
somewhere that can do this.  I have no idea what or where it could be
tho.  Given my lack of scripting skills, I'd be afraid I'd do something
bad and delete files or something.  O_O  LOL 
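For what it's worth, the per-directory variant described above can be sketched 
in a few lines of shell. This is a hypothetical sketch rather than a polished 
tool: the file name `Checksums.md5` is an arbitrary choice of mine, and it 
assumes GNU find and md5sum:

```shell
#!/bin/sh
# For every directory under the given root, write a Checksums.md5 covering
# only the files directly inside it; subdirectories get their own file.
root="${1:-.}"
find "$root" -type d | while IFS= read -r dir; do
    # Skip directories that contain no regular files of their own.
    [ -n "$(find "$dir" -maxdepth 1 -type f ! -name Checksums.md5)" ] || continue
    # Hash with relative names so the .md5 file verifies from inside the dir.
    ( cd "$dir" && find . -maxdepth 1 -type f ! -name Checksums.md5 -print0 \
        | xargs -0 md5sum > Checksums.md5 )
done
```

Verification is then a matter of cd-ing into a directory and running 
`md5sum -c Checksums.md5`, or looping over the tree the same way.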

I've been watching videos again, those I was watching during the time the
memory was bad.  I've replaced three so far.  I think I noticed this
within a few hours.  Then it took a little while for me to figure out
the problem and shutdown to run the memtest.  I doubt many files were
affected unless it does something we don't know about.  I do plan to try
to use rsync checksum and dryrun when I get back up and running.  Also,
QB is finding a lot of its files are fine as well.  It's still
rechecking them.  It's a lot of files. 

Right now, I suspect my backup copy is likely better than my main copy. 
Once I get the memory in and can really run some software, then I'll run
rsync with those compare options and see what it says.  I just got to
remember to reverse things.  Backup is the source not the destination. 
If this works, I may run that each time, help detect problems maybe. 
Maybe?? 

Oh, memory made it to the Memphis hub.  Should be here tomorrow. 

Dale

:-)  :-) 


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-06  0:43                                     ` Dale
@ 2024-09-06 12:21                                       ` Michael
  2024-09-06 21:41                                         ` Frank Steinmetzger
  0 siblings, 1 reply; 55+ messages in thread
From: Michael @ 2024-09-06 12:21 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 7585 bytes --]

On Friday 6 September 2024 01:43:18 BST Dale wrote:
> Michael wrote:
> > On Thursday 5 September 2024 19:55:56 BST Frank Steinmetzger wrote:
> >> Am Thu, Sep 05, 2024 at 06:30:54AM -0500 schrieb Dale:
> >>>> Use rsync with:
> >>>>  --checksum
> >>>> 
> >>>> and
> >>>> 
> >>>>  --dry-run
> >> 
> >> I suggest calculating a checksum file from your active files. Then you
> >> don’t have to read the files over and over for each backup iteration you
> >> compare it against.
> >> 
> >>>> You can also run find to identify which files were changed during the
> >>>> period you were running with the dodgy RAM.  Thankfully you didn't run
> >>>> for too long before you spotted it.
> >> 
> >> This. No need to check everything you ever stored. Just the most recent
> >> stuff, or at maximum, since you got the new PC.
> >> 
> >>> I have just shy of 45,000 files in 780 directories or so.  Almost 6,000
> >>> in another.  Some files are small, some are several GBs or so.  Thing
> >>> is, backups go from a single parent directory if you will.  Plus, I'd
> >>> want to compare them all anyway.  Just to be sure.
> >> 
> >> I acquired the habit of writing checksum files in all my media
> >> directories
> >> such as music albums, tv series and such, whenever I create one such
> >> directory. That way even years later I can still check whether the files
> >> are intact. I actually experienced broken music files from time to time
> >> (mostly on the MicroSD card in my tablet). So with checksum files, I can
> >> verify which file is bad and which (on another machine) is still good.
> > 
> > There is also dm-verity for a more involved solution.  I think for Dale
> > something like this should work:
> > 
> > find path-to-directory/ -type f | xargs md5sum > digest.log
> > 
> > then to compare with a backup of the same directory you could run:
> > 
> > md5sum -c digest.log | grep FAILED
> > 
> > Someone more knowledgeable should be able to knock out some clever python
> > script to do the same at speed.
> 
> I'll be honest here, on two points.  I'd really like to be able to do
> this but I have no idea where to or how to even start.  My setup for
> series type videos.  In a parent directory, where I'd like a tool to
> start, is about 600 directories.  On a few occasions, there is another
> directory inside that one.  That directory under the parent is the name
> of the series.  Sometimes I have a sub directory that has temp files;
> new files I have yet to rename, considering replacing in the main series
> directory etc.  I wouldn't mind having a file with a checksum for each
> video in the top directory, and even one in the sub directory.  As a
> example.
> 
> TV_Series/
> 
> ├── 77 Sunset Strip (1958)
> │   └── torrent
> ├── Adam-12 (1968)
> ├── Airwolf (1984)
> 
> 
> I got a part of the output of tree.  The directory 'torrent' under 77
> Sunset is temporary usually but sometimes a directory is there for
> videos about the making of a video, history of it or something.  What
> I'd like, a program that would generate checksums for each file under
> say 77 Sunset and it could skip or include the directory under it. 
> Might be best if I could switch it on or off.  Obviously, I may not want
> to do this for my whole system.  I'd like to be able to target
> directories.  I have another large directory, let's say not a series but
> sometimes has remakes, that I'd also like to do.  It is kinda set up
> like the above, parent directory with a directory underneath and on
> occasion one more under that. 

As an example, let's assume you have the following fs tree:

VIDEO
  ├──TV_Series/
  |  ├── 77 Sunset Strip (1958)
  |  │   └── torrent
  |  ├── Adam-12 (1968)
  |  ├── Airwolf (1984)
  |
  ├──Documentaries
  ├──Films
  ├──etc.

You could run:

$ find VIDEO -type f | xargs md5sum > digest.log

The file digest.log will contain md5sum hashes of each of your files within 
the VIDEO directory and its subdirectories.

To check if any of these files have changed, become corrupted, etc. you can 
run:

$ md5sum -c digest.log | grep FAILED

If you want to compare the contents of the same VIDEO directory on a back up, 
you can copy the same digest file with its hashes over to the backup top 
directory and run again:

$ md5sum -c digest.log | grep FAILED

Any files listed with "FAILED" next to them have changed since the backup was 
originally created.  Any files with "FAILED open or read" have been deleted, 
or are inaccessible.

You don't have to use md5sum, you can use sha1sum, sha256sum, etc. but md5sum 
will be quicker.  The probability of ending up with a hash collision across two 
files must be very small.

You can save the digest file with a date, PC name, top directory name next to 
it, to make it easy to identify when it was created and its origin.  
Especially useful if you move it across systems.
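Concretely, embedding the origin in the digest's file name keeps copies 
identifiable when they travel between machines. A small sketch (the naming 
scheme is just an example, and -print0/-0 is added so names with spaces 
survive the pipe):

```shell
# Name the digest after the host, the tree and the date,
# e.g. digest_Gentoo-1_VIDEO_2024-09-06.log
find VIDEO -type f -print0 | xargs -0 md5sum \
    > "digest_$(uname -n)_VIDEO_$(date +%F).log"
```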


> One thing I worry about is not just memory problems, drive failure but
> also just some random error or even bit rot.  Some of these files are
> rarely changed or even touched.  I'd like a way to detect problems and
> there may even be a software tool that does this with some setup,
> reminds me of Kbackup where you can select what to backup or leave out
> on a directory or even individual file level. 
> 
> While this could likely be done with a script of some kind, my scripting
> skills are minimal at best.  I suspect there is software out there
> somewhere that can do this.  I have no idea what or where it could be
> tho.  Given my lack of scripting skills, I'd be afraid I'd do something
> bad and delete files or something.  O_O  LOL 

The above two lines are just one way, albeit a rather manual way, to achieve 
this.  Someone with coding skills should be able to write up a script to more 
or less automate this, if you can't find something ready-made in the 
interwebs.


> I've been watching videos again, those I was watching during the time the
> memory was bad.  I've replaced three so far.  I think I noticed this
> within a few hours.  Then it took a little while for me to figure out
> the problem and shutdown to run the memtest.  I doubt many files were
> affected unless it does something we don't know about.  I do plan to try
> to use rsync checksum and dryrun when I get back up and running.  Also,
> QB is finding a lot of its files are fine as well.  It's still
> rechecking them.  It's a lot of files. 
> 
> Right now, I suspect my backup copy is likely better than my main copy. 
> Once I get the memory in and can really run some software, then I'll run
> rsync with those compare options and see what it says.  I just got to
> remember to reverse things.  Backup is the source not the destination. 
> If this works, I may run that each time, help detect problems maybe. 
> Maybe?? 

This should work in rsync terms:

rsync -v --checksum --delete --recursive --dry-run SOURCE/ DESTINATION

It will output a list of files which have been deleted from the SOURCE and 
will need to be deleted at the DESTINATION directory.

It will also provide a list of changed files at SOURCE which will be copied 
over to the destination.

When you use --checksum the rsync command will take longer than when you 
don't, because it will be calculating a hash for each source and destination 
file to determine if it has changed, rather than relying on size and 
timestamp.
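A concrete comparison run along those lines might look like this (dry run, so 
nothing is copied or deleted; the paths are hypothetical placeholders for the 
backup and the main copy):

```shell
# Compare the backup (source) against the main copy (destination) by content.
# --dry-run: report only; --checksum: compare file contents, not timestamps.
rsync -v --checksum --delete --recursive --dry-run \
    /backup/TV_Series/ /home/user/TV_Series
```

Changed files are listed by name, and files missing from the source appear as 
"deleting ..." lines.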

[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-04  0:39   ` Dale
  2024-09-04  4:16     ` corbin bird
@ 2024-09-06 20:15     ` Dale
  2024-09-06 23:17       ` Michael
  2024-09-07 22:12     ` Wols Lists
  2 siblings, 1 reply; 55+ messages in thread
From: Dale @ 2024-09-06 20:15 UTC (permalink / raw
  To: gentoo-user

Dale wrote:
> Grant Edwards wrote:
>> On 2024-09-03, Dale <rdalek1967@gmail.com> wrote:
>>
>>> I was trying to re-emerge some packages.  The ones I was working on
>>> failed with "internal compiler error: Segmentation fault" or similar
>>> being the common reason for failing.
>> In my experience, that usually means failing RAM.  I'd try running
>> memtest86 for a day or two.
>>
>> --
>> Grant
> I've seen that before too.  I'm hoping not.  I may shutdown my rig,
> remove and reinstall the memory and then test it for a bit.  May be a
> bad connection.  It has worked well for the past couple months tho. 
> Still, it is possible to either be a bad connection or just going bad. 
>
> Dang those memory sticks ain't cheap.  o_~
>
> Thanks.  See if anyone else has any other ideas. 
>
> Dale
>
> :-)  :-) 
>


Update.  New memory sticks I bought came in today.  I ran memtest from
Gentoo Live boot media and it passed.  Of course, the last pair passed
when new too so let's hope this one lasts longer.  Much longer. 

Now to start the warranty swap process.  :/

Dale

:-)  :-) 


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-06 12:21                                       ` Michael
@ 2024-09-06 21:41                                         ` Frank Steinmetzger
  2024-09-07  9:37                                           ` Michael
                                                             ` (2 more replies)
  0 siblings, 3 replies; 55+ messages in thread
From: Frank Steinmetzger @ 2024-09-06 21:41 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 5734 bytes --]

Am Fri, Sep 06, 2024 at 01:21:20PM +0100 schrieb Michael:

> > > find path-to-directory/ -type f | xargs md5sum > digest.log
> > > 
> > > then to compare with a backup of the same directory you could run:
> > > 
> > > md5sum -c digest.log | grep FAILED

I had a quick look at the manpage: with md5sum --quiet you can omit the grep 
part.

> > > Someone more knowledgeable should be able to knock out some clever python
> > > script to do the same at speed.

And that is exactly what I have written for myself over the last 11 years. I 
call it dh (short for dirhash). As I described in the previous mail, I use 
it to create one hash file per directory. But it also supports one hash 
file per data file and – a rather new feature – one hash file at the root of 
a tree. Have a look here: https://github.com/felf/dh
Clone the repo or simply download the one file and put it into your path.

> > I'll be honest here, on two points.  I'd really like to be able to do
> > this but I have no idea where to or how to even start.  My setup for
> > series type videos.  In a parent directory, where I'd like a tool to
> > start, is about 600 directories.  On a few occasions, there is another
> > directory inside that one.  That directory under the parent is the name
> > of the series.

In its default, my tool ignores directories which have subdirectories. It 
only hashes files in dirs that have no subdirs (leaves in the tree). But 
this can be overridden with the -f option.

My tool also has an option to skip a number of directories and to process 
only a certain number of directories.

> > Sometimes I have a sub directory that has temp files;
> > new files I have yet to rename, considering replacing in the main series
> > directory etc.  I wouldn't mind having a file with a checksum for each
> > video in the top directory, and even one in the sub directory.  As a
> > example.
> > 
> > TV_Series/
> > 
> > ├── 77 Sunset Strip (1958)
> > │   └── torrent
> > ├── Adam-12 (1968)
> > ├── Airwolf (1984)

So with my tool you would do
$ dh -f -F all TV_Series
`-F all` causes a checksum file to be created for each data file.

> > What
> > I'd like, a program that would generate checksums for each file under
> > say 77 Sunset and it could skip or include the directory under it.

Unfortunately I don’t have a skip feature yet that skips specific 
directories. I could add a feature that looks for a marker file and then 
skips that directory (and its subdirs).

> > Might be best if I could switch it on or off.  Obviously, I may not want
> > to do this for my whole system.  I'd like to be able to target
> > directories.  I have another large directory, let's say not a series but
> > sometimes has remakes, that I'd also like to do.  It is kinda set up
> > like the above, parent directory with a directory underneath and on
> > occasion one more under that. 
> 
> As an example, let's assume you have the following fs tree:
> 
> VIDEO
>   ├──TV_Series/
>   |  ├── 77 Sunset Strip (1958)
>   |  │   └── torrent
>   |  ├── Adam-12 (1968)
>   |  ├── Airwolf (1984)
>   |
>   ├──Documentaries
>   ├──Films
>   ├──etc.
> 
> You could run:
> 
> $ find VIDEO -type f | xargs md5sum > digest.log
> 
> The file digest.log will contain md5sum hashes of each of your files within 
> the VIDEO directory and its subdirectories.
> 
> To check if any of these files have changed, become corrupted, etc. you can 
> run:
> 
> $ md5sum -c digest.log | grep FAILED
> 
> If you want to compare the contents of the same VIDEO directory on a back up, 
> you can copy the same digest file with its hashes over to the backup top 
> directory and run again:
> 
> $ md5sum -c digest.log | grep FAILED

My tool does this as well. ;-)
In check mode, it recurses, looks for hash files and if it finds them, 
checks all hashes. There is also an option to only check paths and 
filenames, not hashes. This makes it quick to find files that have been 
renamed or deleted since the hash file was created.

> > One thing I worry about is not just memory problems, drive failure but
> > also just some random error or even bit rot.  Some of these files are
> > rarely changed or even touched.  I'd like a way to detect problems and
> > there may even be a software tool that does this with some setup,
> > reminds me of Kbackup where you can select what to backup or leave out
> > on a directory or even individual file level. 

Well that could be covered with ZFS, especially with a redundant pool so it 
can repair itself. Otherwise it will only identify the bitrot, but not be 
able to fix it.

> > Right now, I suspect my backup copy is likely better than my main copy. 

The problem is: if they differ, how do you know which one is good apart from 
watching one from start to finish? You could use vbindiff to first find the 
part that changed. That will at least tell you where the difference is, so 
you could seek to the area of the position in the video.

> This should work in rsync terms:
> 
> rsync -v --checksum --delete --recursive --dry-run SOURCE/ DESTINATION
> 
> It will output a list of files which have been deleted from the SOURCE and 
> will need to be deleted at the DESTINATION directory.

If you want to see changed *and* deleted files in one run, it's better to use 
-i instead of -v.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

If two processes are running concurrently,
the less important will take processor time away from the more important one.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-06 20:15     ` Dale
@ 2024-09-06 23:17       ` Michael
  2024-09-07  3:02         ` Dale
  0 siblings, 1 reply; 55+ messages in thread
From: Michael @ 2024-09-06 23:17 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 391 bytes --]

On Friday 6 September 2024 21:15:32 BST Dale wrote:

> Update.  New memory sticks I bought came in today.  I ran memtest from
> Gentoo Live boot media and it passed.  Of course, the last pair passed
> when new too so let's hope this one lasts longer.  Much longer. 

Run each new stick on its own overnight.  Sometimes errors do not show up 
until a few full cycles of tests have been run.

[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-06 23:17       ` Michael
@ 2024-09-07  3:02         ` Dale
  0 siblings, 0 replies; 55+ messages in thread
From: Dale @ 2024-09-07  3:02 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1782 bytes --]

Michael wrote:
> On Friday 6 September 2024 21:15:32 BST Dale wrote:
>
>> Update.  New memory sticks I bought came in today.  I ran memtest from
>> Gentoo Live boot media and it passed.  Of course, the last pair passed
>> when new too so let's hope this one lasts longer.  Much longer. 
> Run each new stick on its own overnight.  Sometimes errors do not show up 
> until a few full cycles of tests have been run.

I've already booted into my OS now.  So far, it seems OK.  Of course,
the last ones didn't fail for a few months.  I can't test that long
anyway.  ;-)  At least they're not bad out of the box.  At first test anyway. 

I think the way the tests run now, it runs several different tests on
each section looking for it to return an incorrect result.  I think I saw
it say 10 tests or something.  The memtest I used was on the Gentoo Live
image from a few months ago.  It tests 1GB at a time.  Takes a while to
complete each test.  I know there are many ways to test memory tho.  I
don't recall ever having a stick of memory go bad before.  For some
old junk rigs that are pretty much too old to care about, I've put some cheap
brands in them and they still worked.  Oh, one of my f's returned a 7. 
I took a picture.  I printed it and included it with the memory.  That
way they may be able to tell where to start their test.  It was right at
the 7GB mark. 

I did start the return process.  I filled out a online form and they
sent me a email a few hours later with a RMA.  I got a label printed and
boxed it up.  I'll go to the post office tomorrow.  It'll take several
days to get there tho.  No idea on how long it takes G.Skill to turn it
around.  Then it has to slide all the way back to me.  I figure two
weeks for shipping alone. 

Dale

:-)  :-) 

[-- Attachment #2: Type: text/html, Size: 2614 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-06 21:41                                         ` Frank Steinmetzger
@ 2024-09-07  9:37                                           ` Michael
  2024-09-07 16:28                                             ` Frank Steinmetzger
  2024-09-07 17:08                                           ` Mark Knecht
  2024-09-14 19:46                                           ` Dale
  2 siblings, 1 reply; 55+ messages in thread
From: Michael @ 2024-09-07  9:37 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1599 bytes --]

On Friday 6 September 2024 22:41:33 BST Frank Steinmetzger wrote:
> Am Fri, Sep 06, 2024 at 01:21:20PM +0100 schrieb Michael:
> > > > find path-to-directory/ -type f | xargs md5sum > digest.log
> > > > 
> > > > then to compare with a backup of the same directory you could run:
> > > > 
> > > > md5sum -c digest.log | grep FAILED
> 
> I had a quick look at the manpage: with md5sum --quiet you can omit the grep
> part.

Good catch.  You can tell I didn't spend much effort to come up with this. ;-)


> > > > Someone more knowledgeable should be able to knock out some clever
> > > > python
> > > > script to do the same at speed.
> 
> And that is exactly what I have written for myself over the last 11 years. I
> call it dh (short for dirhash). As I described in the previous mail, I use
> it to create one hash file per directory. But it also supports one hash
> file per data file and – a rather new feature – one hash file at the root
> of a tree. Have a look here: https://github.com/felf/dh
> Clone the repo or simply download the one file and put it into your path.

Nice!  I've tested it briefly here.  You've put quite some effort into this.  
Thank you Frank!

Probably not your use case, but I wonder how it can be used to compare SOURCE 
to DESTINATION, where SOURCE is the original fs and DESTINATION is some backup, 
without having to manually copy all the different directory/subdirectory 
Checksums.md5 files over.

I suppose rsync can be used for the comparison to a backup fs anyway, so your 
script would be duplicating a function unnecessarily.

[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-07  9:37                                           ` Michael
@ 2024-09-07 16:28                                             ` Frank Steinmetzger
  0 siblings, 0 replies; 55+ messages in thread
From: Frank Steinmetzger @ 2024-09-07 16:28 UTC (permalink / raw
  To: gentoo-user

[-- Attachment #1: Type: text/plain, Size: 1920 bytes --]

Am Sat, Sep 07, 2024 at 10:37:04AM +0100 schrieb Michael:
> On Friday 6 September 2024 22:41:33 BST Frank Steinmetzger wrote:

> > > > > Someone more knowledgeable should be able to knock out some clever
> > > > > python
> > > > > script to do the same at speed.
> > 
> > And that is exactly what I have written for myself over the last 11 years. I
> > call it dh (short for dirhash). As I described in the previous mail, I use
> > it to create one hash file per directory. But it also supports one hash
> > file per data file and – a rather new feature – one hash file at the root
> > of a tree. Have a look here: https://github.com/felf/dh
> > Clone the repo or simply download the one file and put it into your path.
> 
> Nice!  I've tested it briefly here.  You've put quite some effort into this.  
> Thank you Frank!
> 
> Probably not your use case, but I wonder how it can be used to compare SOURCE 
> to DESTINATION where SOURCE is the original fs and DESTINATION is some backup, 
> without having to copy over manually all different directory/subdirectory 
> Checksums.md5 files.

When I have this problem, I usually diff the checksum files with mc or vim, 
because I don’t usually have to check many directories and files. You could 
use Krusader, a two-panel file manager. This has a synchronise tool with a 
file filter, so you synchronize two sides, check for file content and filter 
for *.md5.

> I suppose rsync can be used for the comparison to a backup fs anyway, your 
> script would be duplicating a function unnecessarily.

I believe rsync is capable of syncing only files that match a pattern. 
But it was not very easy to achieve, I think.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

They say that memory is the second thing to go...
I forgot what the first thing was.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-06 21:41                                         ` Frank Steinmetzger
  2024-09-07  9:37                                           ` Michael
@ 2024-09-07 17:08                                           ` Mark Knecht
  2024-09-14 19:46                                           ` Dale
  2 siblings, 0 replies; 55+ messages in thread
From: Mark Knecht @ 2024-09-07 17:08 UTC (permalink / raw
  To: gentoo-user


On Fri, Sep 6, 2024 at 2:42 PM Frank Steinmetzger <Warp_7@gmx.de> wrote:
>
> On Fri, Sep 06, 2024 at 01:21:20PM +0100, Michael wrote:
>
> > > > find path-to-directory/ -type f | xargs md5sum > digest.log
> > > >
> > > > then to compare with a backup of the same directory you could run:
> > > >
> > > > md5sum -c digest.log | grep FAILED
>
> I had a quick look at the manpage: with md5sum --quiet you can omit the
> grep part.
>
> > > > Someone more knowledgeable should be able to knock out some clever
> > > > python script to do the same at speed.
>
> And that is exactly what I have written for myself over the last 11
> years. I call it dh (short for dirhash). As I described in the previous
> mail, I use it to create one hash file per directory. But it also
> supports one hash file per data file and – a rather new feature – one
> hash file at the root of a tree. Have a look here: https://github.com/felf/dh
> Clone the repo or simply download the one file and put it into your path.
>

Thanks for sharing this Frank.

Much appreciated.

Cheers,
Mark



* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-04  0:39   ` Dale
  2024-09-04  4:16     ` corbin bird
  2024-09-06 20:15     ` Dale
@ 2024-09-07 22:12     ` Wols Lists
  2024-09-08  1:59       ` Dale
  2024-09-08  9:15       ` Michael
  2 siblings, 2 replies; 55+ messages in thread
From: Wols Lists @ 2024-09-07 22:12 UTC (permalink / raw
  To: gentoo-user

On 04/09/2024 01:39, Dale wrote:
> I've seen that before too.  I'm hoping not.  I may shutdown my rig,
> remove and reinstall the memory and then test it for a bit.  May be a
> bad connection.  It has worked well for the past couple months tho.
> Still, it is possible to either be a bad connection or just going bad.

I've had *MOST* of my self-built systems force me to remove and replace 
the ram several times before the system was happy.

And when a shop "fixed" my computer for me (replacing a mobo that wasn't 
broken - I told them I thought it needed a bios upgrade and I was 
right!) they also messed up the ram. Memory is supposed to go in in 
matched pairs. So what do they do? One stick in each pair of slots - the 
thing ran like a sloth on tranquillisers! As soon as I realised what 
they'd done and put both sticks in the same pair, it was MUCH faster.

Cheers,
Wol



* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-05 22:06                                   ` Michael
  2024-09-06  0:43                                     ` Dale
@ 2024-09-07 22:48                                     ` Wols Lists
  2024-09-08  9:37                                       ` Michael
  1 sibling, 1 reply; 55+ messages in thread
From: Wols Lists @ 2024-09-07 22:48 UTC (permalink / raw
  To: gentoo-user

On 05/09/2024 23:06, Michael wrote:
> There is also dm-verity for a more involved solution.  I think for Dale
> something like this should work:

Snag is, I think dm-verity (or do you actually mean dm-integrity, which 
is what I use) merely checks that what you read from disk is what you 
wrote to disk. If the ram corrupted it before it was written, I don't 
think either of them will detect it.

Cheers,
Wol



* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-07 22:12     ` Wols Lists
@ 2024-09-08  1:59       ` Dale
  2024-09-08 13:32         ` Michael
  2024-09-08  9:15       ` Michael
  1 sibling, 1 reply; 55+ messages in thread
From: Dale @ 2024-09-08  1:59 UTC (permalink / raw
  To: gentoo-user

Wols Lists wrote:
> On 04/09/2024 01:39, Dale wrote:
>> I've seen that before too.  I'm hoping not.  I may shutdown my rig,
>> remove and reinstall the memory and then test it for a bit.  May be a
>> bad connection.  It has worked well for the past couple months tho.
>> Still, it is possible to either be a bad connection or just going bad.
>
> I've had *MOST* of my self-built systems force me to remove and
> replace the ram several times before the system was happy.
>
> And when a shop "fixed" my computer for me (replacing a mobo that
> wasn't broken - I told them I thought it needed a bios upgrade and I
> was right!) they also messed up the ram. Memory is supposed to go in
> in matched pairs. So what do they do? One stick in each pair of slots
> - the thing ran like a sloth on tranquillisers! As soon as I realised
> what they'd done and put both sticks in the same pair, it was MUCH
> faster.
>
> Cheers,
> Wol
>
>


I noticed on the set I had to return, the serial numbers were in
sequence.  One was right after the other.  I don't know if that makes
them a matched set or if they run some test to match them. 

From my understanding tho, each 'bank' or pair has to be a matched set. 
I did finally find a set of four but it is a different brand.  From what
I read too tho, ASUS trains itself each time you boot up.  It finds the
best setting for each set of memory.  It does say that it is usually set
to a slower speed tho when all four are installed.  Just have to wait
and see I guess.  Oh, when I boot the first couple times with new
memory, it takes quite a bit longer on the BIOS boot screen.  After a
couple times, it doesn't seem to take so long.  Not sure what, but it
does something. 

This new way sure is strange. 

Dale

:-)  :-) 



* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-07 22:12     ` Wols Lists
  2024-09-08  1:59       ` Dale
@ 2024-09-08  9:15       ` Michael
  2024-09-08 20:19         ` Wol
  1 sibling, 1 reply; 55+ messages in thread
From: Michael @ 2024-09-08  9:15 UTC (permalink / raw
  To: gentoo-user


On Saturday 7 September 2024 23:12:41 BST Wols Lists wrote:
> On 04/09/2024 01:39, Dale wrote:
> > I've seen that before too.  I'm hoping not.  I may shutdown my rig,
> > remove and reinstall the memory and then test it for a bit.  May be a
> > bad connection.  It has worked well for the past couple months tho.
> > Still, it is possible to either be a bad connection or just going bad.
> 
> I've had *MOST* of my self-built systems force me to remove and replace
> the ram several times before the system was happy.
> 
> And when a shop "fixed" my computer for me (replacing a mobo that wasn't
> broken - I told them I thought it needed a bios upgrade and I was
> right!) they also messed up the ram. Memory is supposed to go in in
> matched pairs. So what do they do? One stick in each pair of slots - the
> thing ran like a sloth on tranquillisers! 

The placement of DIMMs depends on the MoBo; its manual will show in which 
slots DIMM modules should be added and the (maximum) size of each stick the 
MoBo can cope with.  Normally OEMs provide a list of tested memory brands and 
models for their MoBos (the QVL) and it is recommended to buy something on 
the list rather than improvise.

On ASUS MoBos with 4 slots and 2 DIMMs it is recommended you use slot B2 for 
one module, slots B2 and A2 for a pair of matched modules and the 
remaining two slots A1 & B1 for a second pair of matched modules.  So, what 
the shop did would be reasonable, unless the MoBo OEM asked for a different 
configuration.


> As soon as I realised what
> they'd done and put both sticks in the same pair, it was MUCH faster.
> 
> Cheers,
> Wol

Sometimes, you have to place only one module of a matched pair in and boot the 
system, let the firmware probe and test the DIMM, before you shut it down to 
add more memory to it.  Whenever RAM does not behave as it should when 
installing it, it is a prompt for me to go back to the OEM manual for guidance 
on the peculiarities of their product.



* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-07 22:48                                     ` Wols Lists
@ 2024-09-08  9:37                                       ` Michael
  0 siblings, 0 replies; 55+ messages in thread
From: Michael @ 2024-09-08  9:37 UTC (permalink / raw
  To: gentoo-user


On Saturday 7 September 2024 23:48:43 BST Wols Lists wrote:
> On 05/09/2024 23:06, Michael wrote:
> > There is also dm-verity for a more involved solution.  I think for Dale
> > something like this should work:
> Snag is, I think dm-verity (or do you actually mean dm-integrity, which
> is what I use) merely checks that what you read from disk is what you
> wrote to disk. If the ram corrupted it before it was written, I don't
> think either of them will detect it.
> 
> Cheers,
> Wol

My bad, apologies, dm-verity is used to verify the boot path and deals with 
read-only fs.  With FEC it would also be able to recover from some limited 
data corruption.  I meant to write *dm-integrity*!  Thanks for correcting me.  
Either way, if the data being written is corrupted due to faulty RAM, the 
result will be corrupted too.
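
For reference, dm-integrity is set up with the integritysetup tool from the 
cryptsetup package; a rough sketch (the device name is a placeholder, and 
the format step wipes it):

```shell
# WARNING: formatting destroys all existing data on the device.
# /dev/sdX1 is a placeholder -- substitute the real partition.
integritysetup format /dev/sdX1          # write integrity metadata
integritysetup open /dev/sdX1 integr1    # maps /dev/mapper/integr1
mkfs.ext4 /dev/mapper/integr1            # filesystem on top of it
```

As said, this only catches corruption between write and read; it cannot 
detect data mangled in RAM before it was written.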



* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-08  1:59       ` Dale
@ 2024-09-08 13:32         ` Michael
  0 siblings, 0 replies; 55+ messages in thread
From: Michael @ 2024-09-08 13:32 UTC (permalink / raw
  To: gentoo-user


On Sunday 8 September 2024 02:59:04 BST Dale wrote:
> Wols Lists wrote:
> > On 04/09/2024 01:39, Dale wrote:
> >> I've seen that before too.  I'm hoping not.  I may shutdown my rig,
> >> remove and reinstall the memory and then test it for a bit.  May be a
> >> bad connection.  It has worked well for the past couple months tho.
> >> Still, it is possible to either be a bad connection or just going bad.
> > 
> > I've had *MOST* of my self-built systems force me to remove and
> > replace the ram several times before the system was happy.
> > 
> > And when a shop "fixed" my computer for me (replacing a mobo that
> > wasn't broken - I told them I thought it needed a bios upgrade and I
> > was right!) they also messed up the ram. Memory is supposed to go in
> > in matched pairs. So what do they do? One stick in each pair of slots
> > - the thing ran like a sloth on tranquillisers! As soon as I realised
> > what they'd done and put both sticks in the same pair, it was MUCH
> > faster.
> > 
> > Cheers,
> > Wol
> 
> I noticed on the set I had to return, the serial numbers were in
> sequence.  One was right after the other.  I don't know if that makes
> them a matched set or if they run some test to match them. 

Both.  They run a test; if one fails in their hands as opposed to yours, they 
pick up the next module and test that.  So you typically end up with numbers 
in a matched kit which are close enough.


> From my understanding tho, each 'bank' or pair has to be a matched set. 
> I did finally find a set of four but it is a different brand.  From what
> I read too tho, ASUS trains itself each time you boot up.  It finds the
> best setting for each set of memory.  It does say that it is usually set
> to a slower speed tho when all four are installed.

It depends on whether your MoBo comes with a 'daisy chain' or 'T topology' RAM 
slot configuration.  Most consumer grade MoBos come with the 'daisy chain' 
configuration, and ASUS may also have an "Optimem" function/feature, as they call it.

With 'daisy chain' you should achieve higher max. frequency if you fit 2 
matched DIMMs in the slots the manual suggests (typically B2 & A2), than if 
you fit 4 DIMMs to achieve the same total RAM size.

With 'T topology' you'll achieve a lower frequency with 2 DIMMs, but a higher 
frequency with 4 DIMMs at the same total RAM size, than you would with a 
'daisy chain' MoBo.

The ASUS "Optimem" is some automagic run by the firmware of their 'daisy 
chain' MoBos in terms of voltage and signal sequencing, to do the best job it 
can when you have 4 DIMMs installed.


> Just have to wait
> and see I guess.  Oh, when I boot the first couple times with new
> memory, it takes quite a bit longer on the BIOS boot screen.  After a
> couple times, it doesn't seem to take so long.  Not sure what, but it
> does something. 

The memory controller on the CPU probes the memory module(s) by varying 
voltage and latency until it achieves a reliable result.  If you have enabled 
DOCP as advised here and, where the BIOS provides it, also selected the RAM 
frequency of the DIMMs you bought, then the probing ought to take less time:

https://www.asus.com/support/faq/1042256/

Unless ... there's something wrong with the system (power, faulty RAM modules, 
buggy BIOS, etc.).



* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-08  9:15       ` Michael
@ 2024-09-08 20:19         ` Wol
  0 siblings, 0 replies; 55+ messages in thread
From: Wol @ 2024-09-08 20:19 UTC (permalink / raw
  To: gentoo-user

On 08/09/2024 10:15, Michael wrote:
> The placement of DIMMs depends on the MoBo; its manual will show in which
> slots DIMM modules should be added and the (maximum) size of each stick the
> MoBo can cope with.  Normally OEMs provide a list of tested memory brands and
> models for their MoBos (QVL) and it is recommended to buy something on the
> list, rather than improvise.

Both old and new mobos are, iirc, 4 x 32GB, and they just swapped the 
RAM over. But again iirc, the new mobo they supplied had two colours of 
ram slots, something like "black, red, black, red". To me that's obvious 
- both sticks in one colour! So - and I guess it was an apprentice who 
didn't know what he was doing - they just shoved the ram in the first two slots.

Two major blunders from a shop - a brand new mobo won't boot - the FIRST 
suspect is an out-of-date bios. And then the second blunder - don't 
check the ram is in the right slots. That's one shop I certainly won't 
visit again.

(The only reason I asked a shop to fix it was because I didn't have an 
old chip so couldn't boot the board to upgrade the bios to work with the 
chip I had ...)

Cheers,
Wol



* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-06 21:41                                         ` Frank Steinmetzger
  2024-09-07  9:37                                           ` Michael
  2024-09-07 17:08                                           ` Mark Knecht
@ 2024-09-14 19:46                                           ` Dale
  2024-09-15 22:29                                             ` Frank Steinmetzger
  2 siblings, 1 reply; 55+ messages in thread
From: Dale @ 2024-09-14 19:46 UTC (permalink / raw
  To: gentoo-user

Frank Steinmetzger wrote:
> On Fri, Sep 06, 2024 at 01:21:20PM +0100, Michael wrote:
>
>>>> find path-to-directory/ -type f | xargs md5sum > digest.log
>>>>
>>>> then to compare with a backup of the same directory you could run:
>>>>
>>>> md5sum -c digest.log | grep FAILED
> I had a quick look at the manpage: with md5sum --quiet you can omit the grep 
> part.
>
>>>> Someone more knowledgeable should be able to knock out some clever python
>>>> script to do the same at speed.
> And that is exactly what I have written for myself over the last 11 years. I 
> call it dh (short for dirhash). As I described in the previous mail, I use 
> it to create one hash file per directory. But it also supports one hash 
> file per data file and – a rather new feature – one hash file at the root of 
> a tree. Have a look here: https://github.com/felf/dh
> Clone the repo or simply download the one file and put it into your path.
>
>>> I'll be honest here, on two points.  I'd really like to be able to do
>>> this but I have no idea where to or how to even start.  My setup for
>>> series type videos.  In a parent directory, where I'd like a tool to
>>> start, is about 600 directories.  On a few occasions, there is another
>>> directory inside that one.  That directory under the parent is the name
>>> of the series.
> In its default, my tool ignores directories which have subdirectories. It 
> only hashes files in dirs that have no subdirs (leaves in the tree). But 
> this can be overridden with the -f option.
>
> My tool also has an option to skip a number of directories and to process 
> only a certain number of directories.
>
>>> Sometimes I have a sub directory that has temp files;
>>> new files I have yet to rename, considering replacing in the main series
>>> directory etc.  I wouldn't mind having a file with a checksum for each
>>> video in the top directory, and even one in the sub directory.  As a
>>> example.
>>>
>>> TV_Series/
>>>
>>> ├── 77 Sunset Strip (1958)
>>> │   └── torrent
>>> ├── Adam-12 (1968)
>>> ├── Airwolf (1984)
> So with my tool you would do
> $ dh -f -F all TV_Series
> `-F all` causes a checksum file to be created for each data file.
>
>>> What
>>> I'd like, a program that would generate checksums for each file under
>>> say 77 Sunset and it could skip or include the directory under it.
> Unfortunately I don’t have a skip feature yet that skips specific 
> directories. I could add a feature that looks for a marker file and then 
> skips that directory (and its subdirs).
>

I was running the command again and when I was checking on it, it
stopped with this error. 



  File "/root/dh", line 1209, in <module>
    main()
  File "/root/dh", line 1184, in main
    directory_hash(dir_path, '', dir_files, checksums)
  File "/root/dh", line 1007, in directory_hash
    os.path.basename(old_sums[filename][1])
                     ~~~~~~~~^^^^^^^^^^
KeyError: 'Some Video.mp4'



I was doing a second run because I updated some files.  So, it was
skipping some and creating new for some new ones.  This is the command I
was running, which may not be the best way. 


/root/dh -c -f -F 1Checksums.md5 -v


That make any sense to you?  That's all it spit out. 

Also, what is the best way to handle this type of situation.  Let's say
I have a set of videos.  Later on I get a better set of videos, higher
resolution or something.  I copy those to a temporary directory then use
your dmv script from a while back to replace the old files with the new
files but with identical names.  Thing is, file is different, sometimes
a lot different.  What is the best way to get it to update the checksums
for the changed files?  Is the command above correct? 

I'm sometimes pretty good at finding software bugs.  But hey, it just
makes your software better.  ;-) 

Dale

:-)  :-) 



* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-14 19:46                                           ` Dale
@ 2024-09-15 22:29                                             ` Frank Steinmetzger
  2024-09-16 10:24                                               ` Dale
  0 siblings, 1 reply; 55+ messages in thread
From: Frank Steinmetzger @ 2024-09-15 22:29 UTC (permalink / raw
  To: gentoo-user


On Sat, Sep 14, 2024 at 02:46:35PM -0500, Dale wrote:

> I was running the command again and when I was checking on it, it
> stopped with this error. 
> 
> 
> 
>   File "/root/dh", line 1209, in <module>
>     main()
>   File "/root/dh", line 1184, in main
>     directory_hash(dir_path, '', dir_files, checksums)
>   File "/root/dh", line 1007, in directory_hash
>     os.path.basename(old_sums[filename][1])
>                      ~~~~~~~~^^^^^^^^^^
> KeyError: 'Some Video.mp4'

What was the exact command with which you ran it?
Apparently the directory has a file 'Some Video.mp4', which was not listed 
in an existing checksum file.

I also noticed a problem recently which happens if you give dh a directory 
as argument which has no checksum file in it. Or something like it, I can’t 
reproduce it from memory right now. I have been doing some refactoring 
recently in order to get one-file-per-tree mode working.

> I was doing a second run because I updated some files.  So, it was
> skipping some and creating new for some new ones.  This is the command I
> was running, which may not be the best way. 
> 
> 
> /root/dh -c -f -F 1Checksums.md5 -v

Yeah, using the -c option will clobber any old checksums and re-read all 
files fresh. If you only changed a few files, using the -u option will 
drastically increase speed because only the changed files will be read.
Use the -d option to clean up dangling entries from checksum files.


> Also, what is the best way to handle this type of situation.  Let's say
> I have a set of videos.  Later on I get a better set of videos, higher
> resolution or something.  I copy those to a temporary directory then use
> your dmv script from a while back to replace the old files with the new
> files but with identical names.  Thing is, file is different, sometimes
> a lot different.  What is the best way to get it to update the checksums
> for the changed files?  Is the command above correct? 

dh has some smarts built-in. If you changed a file, then its modification 
timestamp will get updated. When dh runs in -u mode and it finds a file 
whose timestamp is newer than its associated checksum file, that means the 
file may have been altered since the creation of that checksum. So dh will 
re-hash the file and replace the checksum in the checksum file.
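
That update logic can be sketched in a few lines of shell (a hypothetical 
illustration of the idea, not the actual dh code):

```shell
# Idea behind -u: a data file is re-hashed only when its mtime is
# newer than that of its checksum file ([ file -nt sums ]).
d=$(mktemp -d)
data="$d/video.mp4"; sums="$d/Checksums.md5"
printf 'old content\n' > "$data"
( cd "$d" && md5sum video.mp4 > Checksums.md5 )
touch -t 200001010000 "$data"   # pretend: video dates from 2000 ...
touch -t 200101010000 "$sums"   # ... and was checksummed in 2001

update_if_newer() {
    if [ "$1" -nt "$2" ]; then
        ( cd "$(dirname "$1")" && md5sum "$(basename "$1")" > "$2" )
        echo rehashed
    else
        echo up-to-date
    fi
}

update_if_newer "$data" "$sums"   # prints: up-to-date
touch -t 200201010000 "$data"     # simulate replacing the video in 2002
update_if_newer "$data" "$sums"   # prints: rehashed
```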


> I'm sometimes pretty good at finding software bugs.  But hey, it just
> makes your software better.  ;-) 

Me too, usually. If it’s not my software, anyways. ^^
But I think you may be the first user of that tool other than me.

-- 
Grüße | Greetings | Salut | Qapla’
Someone who eats oats for 200 years becomes very old.



* Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-15 22:29                                             ` Frank Steinmetzger
@ 2024-09-16 10:24                                               ` Dale
  0 siblings, 0 replies; 55+ messages in thread
From: Dale @ 2024-09-16 10:24 UTC (permalink / raw
  To: gentoo-user

Frank Steinmetzger wrote:
> On Sat, Sep 14, 2024 at 02:46:35PM -0500, Dale wrote:
>
>> I was running the command again and when I was checking on it, it
>> stopped with this error. 
>>
>>
>>
>>   File "/root/dh", line 1209, in <module>
>>     main()
>>   File "/root/dh", line 1184, in main
>>     directory_hash(dir_path, '', dir_files, checksums)
>>   File "/root/dh", line 1007, in directory_hash
>>     os.path.basename(old_sums[filename][1])
>>                      ~~~~~~~~^^^^^^^^^^
>> KeyError: 'Some Video.mp4'
> What was the exact command with which you ran it?
> Apparently the directory has a file 'Some Video.mp4', which was not listed 
> in an existing checksum file.

I'm fairly sure it was this command. 


/root/dh -c -f -F 1Checksums.md5 -v


I may have changed the -c to -u because I think it was the second run. 
I'd start with the thought it was -u if it were me.  There's another
command running right now and I cleared the scrollback part.  Once it
finishes, I can up arrow and be more sure.  At the moment, I'm letting
it test the files against the checksum it created, to be sure everything
is good.  It's almost half way through and no problems so far. 

I might add, I did a second run with -u, which I think produced the
error above, and it seems to have missed some directories.  When looking
I noticed some directories didn't have a checksum file in them.  That's
when I ran it a second time.  It skipped the ones it already did but
found the ones that were missed in the first run.  There are almost 46,000
files in almost 800 directories.  Is there some tool your script relies
on that could make one of those numbers too high? 

> I also noticed a problem recently which happens if you give dh a directory 
> as argument which has no checksum file in it. Or something like it, I can’t 
> reproduce it from memory right now. I have been doing some refactoring 
> recently in order to get one-file-per-tree mode working.
>
>> I was doing a second run because I updated some files.  So, it was
>> skipping some and creating new for some new ones.  This is the command I
>> was running, which may not be the best way. 
>>
>>
>> /root/dh -c -f -F 1Checksums.md5 -v
> Yeah, using the -c option will clobber any old checksums and re-read all 
> files fresh. If you only changed a few files, using the -u option will 
> drastically increase speed because only the changed files will be read.
> Use the -d option to clean up dangling entries from checksum files.

That was my thinking.  When I update a set, I'll likely just cd to that
directory and update, -u, that one directory instead of everything. 
That will save time and all.  Doing everything takes days.  LOL


>
>> Also, what is the best way to handle this type of situation.  Let's say
>> I have a set of videos.  Later on I get a better set of videos, higher
>> resolution or something.  I copy those to a temporary directory then use
>> your dmv script from a while back to replace the old files with the new
>> files but with identical names.  Thing is, file is different, sometimes
>> a lot different.  What is the best way to get it to update the checksums
>> for the changed files?  Is the command above correct? 
> dh has some smarts built-in. If you changed a file, then its modification 
> timestamp will get udpated. When dh runs in -u mode and it finds a file 
> whose timestamp is newer than its associated checksum file, that means the 
> file may have been altered since the creation of that checksum. So dh will 
> re-hash the file and replace the checksum in the checksum file.
>

Sounds good.  I wasn't sure if it would see the change or not. 


>> I'm sometimes pretty good at finding software bugs.  But hey, it just
>> makes your software better.  ;-) 
> Me too, usually. If it’s not my software, anyways. ^^
> But I think you may be the first other of that tool other than me.
>


One thing I've noticed, when I run this tool, my video sputters at
times.  It does fine when not running.  This tool makes that set of hard
drives pretty busy.  It might be nice to add an ionice setting.  I'd just
set it inside the script.  If a person wants, they can edit and change
it.  Just set it to a little lower than normal stuff should be fine.  If
I restart smplayer, I may set its ionice to a higher priority.  Just a
thought and likely easy enough to do.  I don't want to stop your script
given it is so far along.
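
Something like this would probably do, assuming the util-linux ionice tool
and an I/O scheduler that honours priority classes:

```shell
# Run the hashing script in the idle I/O class (-c 3): it only gets
# disk time when no other process wants it.  nice lowers the CPU
# priority as well.
ionice -c 3 nice -n 19 /root/dh -u -f -F 1Checksums.md5 -v

# Or raise an already-running player instead (best-effort class,
# highest priority within it):
ionice -c 2 -n 0 -p "$(pidof smplayer)"
```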

It sometimes pops up a question.  I figured out that I type in the
answer with the letter that is in parentheses.  Could you explain the
options a bit just to be sure I understand it correctly? 

So far, this is a nice tool.  It should find corruption, like my bad
memory stick, bit rot, bad drive or anything else that could corrupt a
file.  Even power failure I'd think.  It takes a while to do the
checksums but the script itself is fast.  Once you're really happy with
this and feel like it is ready, you should really make an announcement
that it is ready.  Anyone who does a lot of write once and read many but
are concerned with files becoming corrupt over time for any reason
should be interested in this tool.  It wouldn't work well for files that
change a lot, but there are tons of situations where a file, video being
just one example, never changes after it is generated. 

Given the number of files I have and how I change things, I should be able
to find any bugs or needed features.  Example, making my video sputter
and needing a lower drive priority.  You may not run into that but I
noticed it here.  Different use case. 

By the way.  Still using that other script you threw together a good
while back.  I used it the other day to update some better videos I found. 

Thanks much for both tools.  I wish this old dog could learn new
tricks.  ROFL  I'm having trouble remembering old tricks.  :/ 

Dale

:-)  :-) 



* [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
  2024-09-03 23:28 [gentoo-user] Package compile failures with "internal compiler error: Segmentation fault" Dale
                   ` (2 preceding siblings ...)
  2024-09-04 10:48 ` [gentoo-user] " Dale
@ 2024-09-25 20:41 ` Dale
  3 siblings, 0 replies; 55+ messages in thread
From: Dale @ 2024-09-25 20:41 UTC (permalink / raw
  To: gentoo-user

Dale wrote:
> Howdy,
>
> I was trying to re-emerge some packages.  The ones I was working on
> failed with "internal compiler error: Segmentation fault" or similar
> being the common reason for failing.  I did get gcc to compile and
> install.  But other packages are failing, but some are compiling just
> fine.  Here's a partial list at least. 
>
> net-libs/webkit-gtk
> kde-plasma/kpipewire
> sys-devel/clang
> sys-devel/llvm
>
>
> When I couldn't get a couple to complete. I just went to my chroot and
> started a emerge -e world.  Then the packages above started failing as
> well in the chroot.  This all started when gkrellm would not open due to
> a missing module.  Some info on gcc.
>
>
> root@Gentoo-1 / # gcc-config -l
>  [1] x86_64-pc-linux-gnu-13 *
> root@Gentoo-1 / #
>
>
> Output of one failed package.
>
>
> In file included from
> /var/tmp/portage/net-libs/webkit-gtk-2.44.2/work/webkitgtk-2.44.2/Source/WebCore/platform/graphics/GraphicsLayer.h:46,
>                  from
> /var/tmp/portage/net-libs/webkit-gtk-2.44.2/work/webkitgtk-2.44.2/Source/WebCore/platform/graphics/GraphicsLayerContentsDisplayDelegate.h:28,
>                  from
> /var/tmp/portage/net-libs/webkit-gtk-2.44.2/work/webkitgtk-2.44.2/Source/WebCore/html/canvas/CanvasRenderingContext.h:29,
>                  from
> /var/tmp/portage/net-libs/webkit-gtk-2.44.2/work/webkitgtk-2.44.2/Source/WebCore/html/canvas/GPUBasedCanvasRenderingContext.h:29,
>                  from
> /var/tmp/portage/net-libs/webkit-gtk-2.44.2/work/webkitgtk-2.44.2/Source/WebCore/html/canvas/WebGLRenderingContextBase.h:33,
>                  from
> /var/tmp/portage/net-libs/webkit-gtk-2.44.2/work/webkitgtk-2.44.2/Source/WebCore/html/canvas/WebGLStencilTexturing.h:29,
>                  from
> /var/tmp/portage/net-libs/webkit-gtk-2.44.2/work/webkitgtk-2.44.2/Source/WebCore/html/canvas/WebGLStencilTexturing.cpp:29,
>                  from
> /var/tmp/portage/net-libs/webkit-gtk-2.44.2/work/webkitgtk-2.44.2_build/WebCore/DerivedSources/unified-sources/UnifiedSource-950a39b6-33.cpp:1:
> /var/tmp/portage/net-libs/webkit-gtk-2.44.2/work/webkitgtk-2.44.2/Source/WebCore/platform/ScrollableArea.h:96:153:
> internal compiler error: in layout_decl, at stor-layout.cc:642
>    96 |     virtual bool requestScrollToPosition(const ScrollPosition&,
> const ScrollPositionChangeOptions& =
> ScrollPositionChangeOptions::createProgrammatic()) { return false; }
>      
> |                                                                                                                                                        
> ^
> 0x1d56132 internal_error(char const*, ...)
>         ???:0
> 0x6dd3d1 fancy_abort(char const*, int, char const*)
>         ???:0
> 0x769dc4 start_preparsed_function(tree_node*, tree_node*, int)
>         ???:0
> 0x85cd68 c_parse_file()
>         ???:0
> 0x955f41 c_common_parse_file()
>         ???:0
>
>
> And another package:
>
>
> /usr/lib/gcc/x86_64-pc-linux-gnu/13/include/g++-v13/tuple: In
> instantiation of ‘constexpr std::__tuple_element_t<__i,
> std::tuple<_UTypes ...> >& std::get(const tuple<_UTypes ...>&) [with
> long unsigned int __i = 0; _Elements =
> {clang::CodeGen::CoverageMappingModuleGen*,
> default_delete<clang::CodeGen::CoverageMappingModuleGen>};
> __tuple_element_t<__i, tuple<_UTypes ...> > =
> clang::CodeGen::CoverageMappingModuleGen*]’:
> /usr/lib/gcc/x86_64-pc-linux-gnu/13/include/g++-v13/bits/unique_ptr.h:199:62:  
> required from ‘std::__uniq_ptr_impl<_Tp, _Dp>::pointer
> std::__uniq_ptr_impl<_Tp, _Dp>::_M_ptr() const [with _Tp =
> clang::CodeGen::CoverageMappingModuleGen; _Dp =
> std::default_delete<clang::CodeGen::CoverageMappingModuleGen>; pointer =
> clang::CodeGen::CoverageMappingModuleGen*]’
> /usr/lib/gcc/x86_64-pc-linux-gnu/13/include/g++-v13/bits/unique_ptr.h:470:27:  
> required from ‘std::unique_ptr<_Tp, _Dp>::pointer std::unique_ptr<_Tp,
> _Dp>::get() const [with _Tp = clang::CodeGen::CoverageMappingModuleGen;
> _Dp = std::default_delete<clang::CodeGen::CoverageMappingModuleGen>;
> pointer = clang::CodeGen::CoverageMappingModuleGen*]’
> /var/tmp/portage/sys-devel/clang-16.0.6/work/clang/lib/CodeGen/CodeGenModule.h:668:31:  
> required from here
> /usr/lib/gcc/x86_64-pc-linux-gnu/13/include/g++-v13/tuple:1810:43:
> internal compiler error: Segmentation fault
>  1810 |     { return std::__get_helper<__i>(__t); }
>       |                                           ^
> 0x1d56132 internal_error(char const*, ...)
>         ???:0
> 0x9816d6 ggc_set_mark(void const*)
>         ???:0
> 0x8cc377 gt_ggc_mx_lang_tree_node(void*)
>         ???:0
> 0x8cccfc gt_ggc_mx_lang_tree_node(void*)
>         ???:0
> 0x8ccddf gt_ggc_mx_lang_tree_node(void*)
>         ???:0
> 0x8ccda1 gt_ggc_mx_lang_tree_node(void*)
>         ???:0
>
>
> As you can tell, compiler error is a common theme.  All of them I looked
> at seem to be very similar to that.  I think there is a theme and likely
> common cause of the error but no idea where to start. 
>
> Anyone have any ideas on what is causing this?  Searches reveal
> everything from bad kernel, bad gcc, bad hardware and such.  They may as
> well throw in a bad mouse while at it.  LOL  A couple seemed to solve
> this by upgrading to a newer version of gcc.  Thing is, I think this is
> supposed to be a stable version of gcc. 
>
> Open to ideas.  I hope I don't have to move back to the old rig while
> sorting this out.  O_O  Oh, I updated my old rig this past weekend.  Not
> a single problem on it.  Everything updated just fine. 
>
> Thanks.
>
> Dale
>
> :-)  :-) 
>


Here's an update.  I got the replacement memory sticks today.  Usually I
do my trip to town on Thursday morning, but given there is a storm coming
my way, again, I went today.  So, before I left I installed all 4 sticks
of memory, booted my trusty Ventoy USB stick and ran memtest while I was
gone.  When I got back from town, I had a banner that said PASS on my
screen, and it was part way through a second pass.  I was gone a while:
doctor for about 30 minutes, Walmart for a while, the Subway shop to get
something for supper a couple of nights, plus getting from place to
place, loading groceries, etc.  Also, it rained on me too.  :/ 

I went into the BIOS.  On the first screen that pops up, it showed all
the sticks at the same speed etc.  Now I admit, I didn't go into the
advanced settings, but everything looked to be at the defaults it had
during the initial build.  It doesn't seem any slower; if anything, it
seems faster now.  When I open Firefox or Seamonkey, it pops up faster
than before.  Not a lot, but noticeable.  I may just be lucky.  o_O 

I also noticed the replacement sticks are only one digit apart in the
serial number.  So they pick two from the line, test them I'd guess, and
then box them up as a matched set.  It makes sense: two coming off the
line back to back should be as close to identical as you can get without
extensive testing.  Honestly tho, given the number of components on
these chips, it's amazing they work at all. 

So, I now have 128GB of memory.  I should be able to compile some stuff
for a while now without running out of memory.  :-D
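For anyone wanting to cross-check new sticks from inside Linux rather than the BIOS, a couple of stock commands will do it.  A minimal sketch (dmidecode needs root, and its exact field names vary by board's DMI tables):

```shell
# Total RAM as the kernel sees it (MemTotal in /proc/meminfo is in KiB):
awk '/MemTotal/ {printf "%.1f GiB\n", $2 / 1024 / 1024}' /proc/meminfo

# Per-stick size, speed and serial number (run as root; commented out
# here since it needs privileges and hardware-specific DMI data):
# dmidecode --type memory | grep -E 'Size|Speed|Serial Number'
```

Note the kernel reserves a little memory for itself, so MemTotal will read slightly under the full 128GB.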

Dale

:-)  :-) 




Thread overview: 55+ messages
2024-09-03 23:28 [gentoo-user] Package compile failures with "internal compiler error: Segmentation fault" Dale
2024-09-04  0:12 ` [gentoo-user] " Grant Edwards
2024-09-04  0:39   ` Dale
2024-09-04  4:16     ` corbin bird
2024-09-06 20:15     ` Dale
2024-09-06 23:17       ` Michael
2024-09-07  3:02         ` Dale
2024-09-07 22:12     ` Wols Lists
2024-09-08  1:59       ` Dale
2024-09-08 13:32         ` Michael
2024-09-08  9:15       ` Michael
2024-09-08 20:19         ` Wol
2024-09-04  7:53   ` Raffaele Belardi
2024-09-04  4:26 ` [gentoo-user] " Eli Schwartz
2024-09-04 10:48 ` [gentoo-user] " Dale
2024-09-04 11:05   ` Frank Steinmetzger
2024-09-04 11:21     ` Dale
2024-09-04 15:57       ` Peter Humphrey
2024-09-04 19:09       ` Grant Edwards
2024-09-04 21:08         ` Frank Steinmetzger
2024-09-04 21:22           ` Grant Edwards
2024-09-04 21:53             ` Dale
2024-09-04 22:07               ` Grant Edwards
2024-09-04 22:14                 ` Dale
2024-09-04 22:38                 ` Michael
2024-09-05  0:11                   ` Dale
2024-09-05  8:05                     ` Michael
2024-09-05  8:36                       ` Dale
2024-09-05  8:42                         ` Michael
2024-09-05 10:53                           ` Dale
2024-09-05 11:08                             ` Michael
2024-09-05 11:30                               ` Dale
2024-09-05 18:55                                 ` Frank Steinmetzger
2024-09-05 22:06                                   ` Michael
2024-09-06  0:43                                     ` Dale
2024-09-06 12:21                                       ` Michael
2024-09-06 21:41                                         ` Frank Steinmetzger
2024-09-07  9:37                                           ` Michael
2024-09-07 16:28                                             ` Frank Steinmetzger
2024-09-07 17:08                                           ` Mark Knecht
2024-09-14 19:46                                           ` Dale
2024-09-15 22:29                                             ` Frank Steinmetzger
2024-09-16 10:24                                               ` Dale
2024-09-07 22:48                                     ` Wols Lists
2024-09-08  9:37                                       ` Michael
2024-09-05  9:08                   ` Frank Steinmetzger
2024-09-05  9:36                     ` Michael
2024-09-05 10:01                       ` Frank Steinmetzger
2024-09-05 10:59                         ` Dale
2024-09-04 14:21     ` Grant Edwards
2024-09-04 11:37   ` Dale
2024-09-04 14:23     ` Grant Edwards
2024-09-04 15:58       ` Peter Humphrey
2024-09-04 19:28         ` Dale
2024-09-25 20:41 ` Dale
