this post was submitted on 18 Dec 2024
43 points (100.0% liked)

Linux


There I said it !

top 17 comments
[–] elmicha@feddit.org 15 points 1 month ago (1 children)

I agree. zgrep also works for uncompressed files, so we could use e.g. zgrep ^ instead of zcat.
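For anyone trying this, a minimal sketch (the filenames are made up for illustration):

```shell
# One plain log and one gzipped log.
printf 'hello plain\n' > app.log
printf 'hello gzipped\n' | gzip > app.log.1.gz

# 'zgrep ^' matches every line, so it behaves like a zcat that
# also accepts uncompressed input.
zgrep ^ app.log app.log.1.gz
```

With more than one file, zgrep (like grep) prefixes each line with the filename; pass -h to suppress that if your zgrep forwards grep options.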

[–] interdimensionalmeme@lemmy.ml 10 points 1 month ago

Thanks, didn't know that existed

That's basically everything I was looking for !

[–] allywilson@lemmy.ml 8 points 1 month ago (2 children)

Yeah, it's a pain. Leads to bad one liners:

for i in $(ls); do zcat $i || cat $i; done

[–] MonkderVierte@lemmy.ml 6 points 1 month ago* (last edited 1 month ago) (2 children)

Btw, don't parse ls. Use find |while read -r instead.

find -maxdepth 1 -name "term" -print | while read -r file; do
    zcat "$file" 2>/dev/null || cat "$file"
done
[–] allywilson@lemmy.ml 3 points 1 month ago (1 children)

Won't this cause cat to iterate through all files in the cwd once zcat encounters an issue, instead of just the specific file?

[–] MonkderVierte@lemmy.ml 1 points 1 month ago

Yeah, I was tired and had $file there first, then saw that you wanted to cat everything in the directory. Still tired, but I think this works now.

[–] gnuhaut@lemmy.ml 1 points 1 month ago* (last edited 1 month ago) (1 children)

You can just do for f in * (or other shell glob), unless you need find's fancy search/filtering features.

The shell glob isn't just simpler, but also more robust, because it also works when the filename contains a newline; find .. | while read -r will crap out on that. Also, apparently you want while IFS= read -r because otherwise read might trim whitespace.

If you want to avoid that problem with the newline and still use find, you can use find -exec or find -print0 .. | xargs -0, or find -print0 .. | while IFS= read -r -d ''. I think -print0 is not standard POSIX though.
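A sketch of both variants (bash assumed for read -d ''; GNU find assumed for -print0; filenames invented):

```shell
# Shell-glob version: robust even if a filename contains a newline.
for f in *; do
    [ -f "$f" ] || continue               # skip subdirectories
    zcat "$f" 2>/dev/null || cat "$f"
done

# find -print0 version: null-delimited names survive the pipe intact.
find . -maxdepth 1 -type f -print0 |
    while IFS= read -r -d '' f; do
        zcat "$f" 2>/dev/null || cat "$f"
    done
```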

[–] MonkderVierte@lemmy.ml 1 points 1 month ago (1 children)

because it works also when the filename contains a newline

Doesn't that depend on the shell?

[–] gnuhaut@lemmy.ml 1 points 1 month ago

I don't think so and have never heard that, but I could be wrong.

[–] interdimensionalmeme@lemmy.ml 1 points 1 month ago* (last edited 1 month ago)

Thanks !

But still we shouldn't have to resort to this !

~~Also, can't get the output through pipe~~

for i in $(ls); do zcat $i || cat $i; done | grep mysearchterm

~~this appears to work~~

~~find . -type f -print0 | xargs -0 -I{} sh -c 'zcat "{}" 2>/dev/null || cat "{}"' | grep "mysearchterm"~~

~~Still, that was a speed bump that I guess everyone dealing with mass compressed log files has to figure out on the fly because zcat can't read uncompressed files ! argg !!!~~

for i in $(ls); do zcat $i 2>/dev/null || cat $i; done | grep mysearchterm

[–] Malfeasant@lemm.ee 5 points 1 month ago (1 children)

How do you propose zcat tell the difference between an uncompressed file and a corrupted compressed file? Or are you saying if it doesn't recognize it as compressed, just dump the source file regardless? Because that could be annoying.

[–] interdimensionalmeme@lemmy.ml 4 points 1 month ago (1 children)

Even a corrupt compressed file has a very different structure from plain text, and "file" already has the code to detect exactly which is which.

Still, failing on corrupted compression instead of failing on plaintext would be an improvement.

[–] Malfeasant@lemm.ee 2 points 1 month ago (1 children)

What even is plain text anymore? If you mean ASCII, ok, but that leaves out a lot. Should it include a minimal utf-8 detector? Utf-16? The latest goofy encoding? Should zcat duplicate the functionality of file? Generally, unix-like commands do one thing, and do it well, combining multiple functions is frowned upon.

[–] interdimensionalmeme@lemmy.ml 1 points 1 month ago

I wouldn't call all this hoop jumping just to read common log files "doing it better".

This is exactly the kind of arcane tinkering that makes everything a tedious, time-wasting chore on Linux.

At this point it's accepted that text files get zipped, and that should be handled transparently, without being precious about kilobits of logic storage as if we were still stuck on an 80386 with 4 megs of RAM.

[–] fool@programming.dev 3 points 1 month ago* (last edited 1 month ago) (1 children)

just use -f lol.

less $(which zcat) shows us a gzip wrapper. So we look through gzip options and see:

-f --force
Force compression or decompression. If the input data is not in a format recognized by gzip, and if the option --stdout is also given, copy the input data without change to the standard output: let zcat behave as cat.
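In other words (sample filenames, GNU gzip's zcat assumed):

```shell
printf 'plain\n' > notes.log
printf 'packed\n' | gzip > notes.log.gz

# With -f, unrecognized input is copied through unchanged,
# so no error handling is needed for the mixed case.
zcat -f notes.log notes.log.gz
```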

party music

[–] interdimensionalmeme@lemmy.ml 2 points 3 weeks ago

That works great, now I can zcat -f /var/log/apache2/*

[–] makingStuffForFun@lemmy.ml 2 points 1 month ago

Celeste. Are you here? In a future search maybe?