From: Michael
To: gentoo-user@lists.gentoo.org
Reply-To: confabulate@kintzios.com
Subject: Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".
Date: Fri, 06 Sep 2024 13:21:20 +0100
Message-ID: <15276605.tv2OnDr8pf@rogueboard>

On
Friday 6 September 2024 01:43:18 BST Dale wrote:
> Michael wrote:
> > On Thursday 5 September 2024 19:55:56 BST Frank Steinmetzger wrote:
> >> Am Thu, Sep 05, 2024 at 06:30:54AM -0500 schrieb Dale:
> >>>> Use rsync with:
> >>>> --checksum
> >>>>
> >>>> and
> >>>>
> >>>> --dry-run
> >>
> >> I suggest calculating a checksum file from your active files. Then you
> >> don’t have to read the files over and over for each backup iteration
> >> you compare it against.
> >>
> >>>> You can also run find to identify which files were changed during the
> >>>> period you were running with the dodgy RAM. Thankfully you didn't run
> >>>> for too long before you spotted it.
> >>
> >> This. No need to check everything you ever stored. Just the most recent
> >> stuff, or at maximum, since you got the new PC.
> >>
> >>> I have just shy of 45,000 files in 780 directories or so. Almost 6,000
> >>> in another. Some files are small, some are several GBs or so. Thing
> >>> is, backups go from a single parent directory if you will. Plus, I'd
> >>> want to compare them all anyway. Just to be sure.
> >>
> >> I acquired the habit of writing checksum files in all my media
> >> directories, such as music albums, TV series and such, whenever I
> >> create one such directory. That way even years later I can still check
> >> whether the files are intact. I actually experienced broken music files
> >> from time to time (mostly on the MicroSD card in my tablet). So with
> >> checksum files, I can verify which file is bad and which (on another
> >> machine) is still good.
> >
> > There is also dm-verity for a more involved solution.
> > I think for Dale something like this should work:
> >
> > find path-to-directory/ -type f -print0 | xargs -0 md5sum > digest.log
> >
> > then to compare with a backup of the same directory you could run:
> >
> > md5sum -c digest.log | grep FAILED
> >
> > Someone more knowledgeable should be able to knock out some clever
> > python script to do the same at speed.

> I'll be honest here, on two points. I'd really like to be able to do
> this but I have no idea where or how to even start. My setup for
> series-type videos: in a parent directory, where I'd like a tool to
> start, are about 600 directories. On a few occasions, there is another
> directory inside that one. The directory under the parent is the name
> of the series. Sometimes I have a sub-directory that holds temp files:
> new files I have yet to rename, files I'm considering replacing in the
> main series directory, etc. I wouldn't mind having a file with a
> checksum for each video in the top directory, and even one in the
> sub-directory. As an example:
>
> TV_Series/
>
> ├── 77 Sunset Strip (1958)
> │   └── torrent
> ├── Adam-12 (1968)
> ├── Airwolf (1984)
>
> That's part of the output of tree. The directory 'torrent' under 77
> Sunset is usually temporary, but sometimes a directory is there for
> videos about the making of a video, its history or something. What I'd
> like is a program that would generate checksums for each file under,
> say, 77 Sunset, and it could skip or include the directory under it.
> Might be best if I could switch it on or off. Obviously, I may not want
> to do this for my whole system. I'd like to be able to target
> directories. I have another large directory, let's say not a series but
> sometimes with remakes, that I'd also like to do.
> It is kinda set up like the above, parent directory with a directory
> underneath and on occasion one more under that.

As an example, let's assume you have the following fs tree:

VIDEO
├── TV_Series/
│   ├── 77 Sunset Strip (1958)
│   │   └── torrent
│   ├── Adam-12 (1968)
│   ├── Airwolf (1984)
├── Documentaries
├── Films
├── etc.

You could run:

$ find VIDEO -type f -print0 | xargs -0 md5sum > digest.log

(The -print0/-0 pair keeps filenames containing spaces intact.)  The file
digest.log will contain an md5sum hash of each of your files within the
VIDEO directory and its subdirectories.

To check if any of these files have changed, become corrupted, etc. you can
run:

$ md5sum -c digest.log | grep FAILED

If you want to compare the contents of the same VIDEO directory on a
backup, you can copy the digest file with its hashes over to the backup's
top directory and run the same check again:

$ md5sum -c digest.log | grep FAILED

Any files listed as "FAILED" have changed since the digest was originally
created. Any files listed as "FAILED open or read" have been deleted, or
are inaccessible.

You don't have to use md5sum; you can use sha1sum, sha256sum, etc., but
md5sum will be quicker. The probability of ending up with a hash collision
between two files is vanishingly small.

You can save the digest file with a date, PC name and top directory name
next to it, to make it easy to identify when and where it was created.
Especially useful if you move it across systems.

> One thing I worry about is not just memory problems or drive failure, but
> also just some random error or even bit rot. Some of these files are
> rarely changed or even touched.
> I'd like a way to detect problems and there may even be a software tool
> that does this with some setup. It reminds me of Kbackup, where you can
> select what to back up or leave out at a directory or even individual
> file level.
>
> While this could likely be done with a script of some kind, my scripting
> skills are minimal at best. I suspect there is software out there
> somewhere that can do this, but I have no idea what or where it could be
> tho. Given my lack of scripting skills, I'd be afraid I'd do something
> bad and it delete files or something. O_O LOL

The two commands above are just one way, albeit a rather manual way, to
achieve this. Someone with coding skills should be able to write a script
to more or less automate it, if you can't find something ready-made on the
interwebs.

> I've been watching videos again, those I was watching during the time the
> memory was bad. I've replaced three so far. I think I noticed this
> within a few hours. Then it took a little while for me to figure out
> the problem and shut down to run the memtest. I doubt many files were
> affected unless it does something we don't know about. I do plan to try
> rsync with checksum and dry-run when I get back up and running. Also,
> QB is finding a lot of its files are fine as well. It's still
> rechecking them. It's a lot of files.
>
> Right now, I suspect my backup copy is likely better than my main copy.
> Once I get the memory in and can really run some software, then I'll run
> rsync with those compare options and see what it says. I just have to
> remember to reverse things. Backup is the source, not the destination.
> If this works, I may run that each time, to help detect problems maybe.
> Maybe?

This should work in rsync terms:

rsync -v --checksum --delete --recursive --dry-run SOURCE/ DESTINATION

It will output a list of files which have been deleted from the SOURCE and
will need to be deleted at the DESTINATION directory.
It will also provide a list of changed files at SOURCE, which will be
copied over to the destination.

When you use --checksum, the rsync command will take longer than when you
don't, because it will calculate a hash for each source and destination
file to determine if it has changed, rather than relying on size and
timestamp.
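For completeness, here is a minimal sketch of the per-directory variant
Dale described: one digest file per series directory, so each series
carries its own checksums. It assumes GNU md5sum/find/xargs; gen_digests
and digest.md5 are made-up names, and -print0/-0 keep filenames with
spaces (like "77 Sunset Strip (1958)") intact:

```shell
# Sketch only: write one digest.md5 per immediate subdirectory of a
# parent directory (e.g. one per series under TV_Series/).
gen_digests() {
    parent="${1:-.}"
    for dir in "$parent"/*/; do
        [ -d "$dir" ] || continue
        # Hash every file under this series directory, excluding the
        # digest file itself so re-runs stay consistent; -r skips the
        # md5sum call entirely if a directory is empty.
        ( cd "$dir" &&
          find . -type f ! -name digest.md5 -print0 |
          xargs -0 -r md5sum > digest.md5 )
    done
}

# Verify one series later with, e.g.:
#   cd "TV_Series/77 Sunset Strip (1958)" && md5sum -c digest.md5 | grep FAILED
```

When everything checks out, the grep prints nothing; in a script you could
test md5sum -c's exit status instead of grepping.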