this post was submitted on 07 Jun 2023
135 points (100.0% liked)

Asklemmy


A loosely moderated place to ask open-ended questions


What opinion just makes you look like you aged 30 years

you are viewing a single comment's thread
[–] argv_minus_one 16 points 1 year ago (1 children)

Containerization seems overrated. I haven't really played with it much, but as far as I can tell, the way it's most commonly used is just static linking with extra steps and extra performance overhead. I can think of situations where containers would actually be useful, like running continuous integration builds for someone you don't entirely trust, but for just deploying a plain old application on a plain old server, I don't see the point of wrapping it in a container.

Mac OS 7 looked cool. So did Windows 95.

Phones are useful, but they're not a replacement for a PC.

I don't want to run everything in a web browser. Using a browser engine as a user interface (e.g. Electron) is fine, but don't make me log in to some web service just to make a blasted spreadsheet.

I want to store my files on my computer, not someone else's.

I don't like laptops. I'd much rather have a roomy PC case so I can easily open it up and change the components if I want. Easier to clean, too.

[–] Hagarashi8@sh.itjust.works 1 points 1 year ago (2 children)
  1. The idea is that you can have different apps that require different versions of dependency X; that could stop you with traditional package management, but it works fine with containers.
  2. Haven't seen Mac OS 7, but 100% agree on Windows 95. 2000 is better, though.
  3. Still can't believe some people actually believe they are.
  4. 100% agree.
  5. Sometimes you just have one hour free, and that's not enough time to go home, but too much to just kill. That's when a laptop is great. Also, sometimes going outside to do stuff feels better than doing it at home.
[–] argv_minus_one 2 points 1 year ago (1 children)

The idea is that you can have different apps that require different versions of dependency X; that could stop you with traditional package management, but it works fine with containers.

That's what I mean by “static linking with extra steps”. This problem was already solved a very long time ago. You only get these version conflicts if your dependencies are dynamically linked, and you don't have to dynamically link your dependencies.

[–] Hagarashi8@sh.itjust.works 1 points 1 year ago* (last edited 1 year ago) (1 children)

Yes, you don't have to dynamically link dependencies, but you don't want to recompile your app just to change a dependency version.

[–] argv_minus_one 2 points 1 year ago (1 children)

Don't I? Recompiling avoids ABI stability issues and will reliably fail if there is a breaking API change, whereas not recompiling will cause undefined behavior if either of those things happens.

[–] Hagarashi8@sh.itjust.works 1 points 1 year ago (1 children)

That's why semver exists: major.minor.patch. Usually you don't care about patches; they address the efficiency of things inside the lib, with no API changes. Something breaking could be in a minor update, so you should check the changelog to see whether you need to do something about it. A major version will most likely break things. Once you understand this, you'll find dynamic linking beneficial (no need to recompile on every lib update), and containers eliminate stability issues, because libs won't update to the next minor/major version without tests.

[–] argv_minus_one 2 points 1 year ago (1 children)

What's so horribly inconvenient about recompiling, anyway? Unless you're compiling Chromium or something, it doesn't take that long.

[–] Hagarashi8@sh.itjust.works 1 points 1 year ago (1 children)

Still, it's going to take some time, every time some dependency (of a dependency (of a dependency)) changes (because you don't want to end up shipping a critical vulnerability). Also, if the app executes some other binary with the same dependency X, dependency X will be in memory only once.

[–] argv_minus_one 2 points 1 year ago (1 children)

Still, it’s going to take some time

Compared to the downsides of using a container image (duplication of system files like libc, dynamic linking overhead, complexity, etc), this is not a compelling advantage.

Also, if the app executes some other binary with the same dependency X

That seems like a questionable design choice.

[–] Hagarashi8@sh.itjust.works 1 points 1 year ago (1 children)

That seems like a questionable design choice.

I mean, you could have a GUI for some CLI tool. Then you would need to run the GUI binary, and either run the CLI binary from the GUI or have it as a daemon. Also, if you're going to make something that has more than one binary, you'll get more space overhead from static linking than from containers.

Compared to the downsides of using a container image (duplication of system files like libc, dynamic linking overhead, complexity, etc), this is not a compelling advantage.

Man, that's underestimating compile time and how frequently various libs update, and overestimating the overhead of dynamic linking (it's so small it's measured in CPU cycles). Basically, dynamic linking reduces update overhead: with static linking you have to download the full binary on every update, even if the lib is tiny, while with dynamic linking you only have to download the small lib.

[–] argv_minus_one 2 points 1 year ago (1 children)

I mean, you could have a GUI for some CLI tool.

Yes, I've seen that pattern before, but:

  1. I wouldn't expect them to have many libraries in common, other than platform libraries like libc, since they have completely different purposes.
  2. I was under the impression that Docker is for server applications. Is it even possible to run a GUI app inside a Docker container?

Also, if you're going to make something that has more than one binary

If they're meant to run on the same machine and are bundled together in the same container image, I would call that a questionable design choice.

Man, that's underestimating compile time and how frequently various libs update

Well, I have only my own experience to go on, but I am not usually bothered by compile times. I used to compile my own Linux kernels, for goodness' sake. I would just leave it to do its thing and go do something else while I wait. Not a big deal.

Again, there are exceptions like Chromium, which take an obscenely long time to compile, but I assume we're talking about something that takes minutes to compile, not hours or days.

and overestimating the overhead of dynamic linking (it's so small it's measured in CPU cycles).

No, I'm not. If you're not using JIT compilation, the overhead of dynamic linking is severe, not because of how long it takes to call a dynamically-linked function (you're right, that part is reasonably fast), but because inlining across a dynamic link is impossible, and inlining is, as matklad once put it, the mother of all other optimizations. Dynamic linking leaves potentially a lot of performance on the table.

This wasn't the case before link-time optimization was a thing, mind you, but it is now.

Basically, dynamic linking reduces update overhead: with static linking you have to download the full binary on every update, even if the lib is tiny, while with dynamic linking you only have to download the small lib.

Okay, but I'm much more concerned with execution speed and memory usage than with how long it takes to download or compile an executable.

[–] Hagarashi8@sh.itjust.works 1 points 1 year ago (1 children)

I mean, you could have a GUI for some CLI tool.

Yes, I've seen that pattern before, but:

  1. I wouldn't expect them to have many libraries in common, other than platform libraries like libc, since they have completely different purposes.
  2. I was under the impression that Docker is for server applications. Is it even possible to run a GUI app inside a Docker container?

Also, if you're going to make something that has more than one binary

If they're meant to run on the same machine and are bundled together in the same container image, I would call that a questionable design choice.

At the time I was thinking about some kind of toolkit installed through Distrobox. Distrobox basically lets you use anything from a container as if it weren't in one. It uses podman, so I guess it could be impossible to do GUI with Docker, although I can't really tell.

inlining is, as matklad once put it, the mother of all other optimizations. Dynamic linking leaves potentially a lot of performance on the table.

Yes, but static linking means you'll get security and performance patches with some delay, while dynamic means you'll get patches ASAP.

[–] argv_minus_one 1 points 1 year ago (1 children)

dynamic means you’ll get patches ASAP.

Some claim this doesn't work in practice because of the ABI issues I mentioned earlier. You brought up Semver as a solution, but that too doesn't seem to work in practice; see for example OpenSSL, which follows Semver and still has ABI issues that can result in undefined behavior. Ironically this can create security vulnerabilities.

[–] Hagarashi8@sh.itjust.works 1 points 1 year ago* (last edited 1 year ago) (1 children)

Yeah, but there's a lot more security improvement from being able to apply a fix for a severe vulnerability ASAP than weakening from possible incompatibilities. Also, I wonder why I never brought this up: shared libs are shared, so you can use them across many programming languages. So no, static linking is not the way to replace containers, but yes, they share some use cases.

[–] argv_minus_one 1 points 1 year ago (1 children)

Yeah, but there's a lot more security improvement from being able to apply a fix for a severe vulnerability ASAP than weakening from possible incompatibilities.

Um, we're talking about undefined behavior here. That creates potential RCE vulnerabilities—the most severe kind of vulnerability. So no, a botched dynamically-linked library update can easily create a vulnerability worse than the one it's meant to fix.

Also, i wonder why i never brought it up, shared libs are shared, so you can use them across many programming languages.

Shared libraries are shared among processes, not programming languages.

[–] Hagarashi8@sh.itjust.works 1 points 1 year ago (1 children)

Shared libraries are shared among processes, not programming languages.

You can still use them from any programming language.

[–] argv_minus_one 1 points 1 year ago

Not without suitable glue code, you can't. If you want to use a native library with Java or Node.js, you need to wrap it in a JNI or N-API wrapper. The wrapper must be dynamically linked, but the native library can be statically linked with the wrapper. My current project does just that, in fact.

There is one exception I know of. The Java library JNA dynamically links native libraries into a Java program and generates the necessary glue at run time.

[–] argv_minus_one 1 points 1 year ago (1 children)
[–] Hagarashi8@sh.itjust.works 2 points 1 year ago (1 children)

Yes, I agree, it looks awesome.

[–] argv_minus_one 1 points 1 year ago

Also see Mac OS 8, which added a shaded-gray look not unlike Windows 95, and Mac OS 9, the last version of the classic Mac OS. These versions have a lot more features than the older version 7, but they also take much longer to boot—so long that Apple added a progress bar to the boot screen!