CasualTee

joined 2 years ago
[–] CasualTee 3 points 1 week ago

My guess would be that it forces a hierarchy so as to distribute load. DNS is distributed, in a sense. There are the root name servers that know about all TLDs, and then each TLD has its own servers (in practice a single entity controls multiple TLDs and allocates as many servers as needed to answer all DNS requests for them). Those "TLD servers" know about the second level, and either they also know about the lower levels or those are further delegated.

So fewer TLDs means the "root" DNS servers do not have to keep a huge "phonebook" (mapping each TLD to the IP addresses of the name servers responsible for it) and can therefore stay efficient, which means fewer of them are required. And fewer root servers means it's easier to update them and keep them consistent. And if nearly everyone can only register second-level domains, then the root name servers do not need to be updated nearly as often.
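
You can actually watch that delegation chain with dig (illustrative commands; the exact servers you get back will vary):

   # follow the chain of delegations for a name, starting from the root servers
   dig +trace www.example.com
   # or query one level by hand: ask a root server which name servers handle .com
   dig @a.root-servers.net com NS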

[–] CasualTee 11 points 1 week ago (2 children)

I kind of agree that the focus on making Linux easy to use is not exactly on the right things. There is a bit too much of a "make a GUI for everything" mindset. Which is not wrong per se, but it should not be the goal. More a means to an end.

I disagree that users won't do stuff on their own. They will, but they will allocate very little time to it on average, especially compared to a tech-savvy person. And that's just because their computer is a tool. If they cannot make their tool do what they want, they'll find another way. Or deem it impossible.

I think distros must make mundane tasks such as system maintenance hands-off. As an opt-in option, so as not to upset power users. But things such as updates, full system upgrades, disk space reclaiming, ... should have a single "do the right thing without being asked" toggle. Slightly more involved things, such as printing or scanning a document, should be more context-aware. A bit like on smartphones where, if you have a document open, you can select print and, if no printer is configured, you get the option to add one there and then.
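
Pieces of that toggle already exist today, they are just not surfaced as one switch. For example (assuming a Debian/Ubuntu or Fedora base; package and timer names vary by release):

   # Debian/Ubuntu: apply security updates automatically in the background
   sudo apt install unattended-upgrades
   sudo dpkg-reconfigure -plow unattended-upgrades
   # Fedora and friends: enable the dnf-automatic timer
   sudo dnf install dnf-automatic
   sudo systemctl enable --now dnf-automatic.timer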

Immutable distros have made good progress on that front IMO. But we still need better integration between applications and the desktop environment for things like printing, sharing and so on. I'm hopeful though; generally speaking, things are moving in that direction. Even if we can argue Flatpak and Snap are a step backward with regard to integration with the DE, they are also an opportunity to formalize some form of protocol with the DE.

[–] CasualTee 5 points 1 month ago (2 children)

It's not directly related to the torrent or its content, no. It's more about potential bugs in Transmission that might be exploited to propagate viruses.

Since Transmission has to exchange data with untrusted parties before knowing whether the data is relevant to the torrent you are downloading, anyone could exploit bugs in the parsing of these messages.

So running Transmission as a dedicated user limits what an attacker has access to once they take control of Transmission by exploiting known or unknown bugs.

Obviously, this user needs to have many restrictions in place to prevent the attacker from permanently installing malware on the machine. And when you copy over data that has been downloaded by Transmission, you'd have to make sure it has not been tampered with by the attacker in an attempt to get access to the data available to your real account.

If you just use Transmission occasionally, and not on a server, I would not bother with it. Either use the Flatpak version, which gives you some sandboxing and security guarantees similar to running Transmission as a dedicated user, or use an up-to-date version (the one from your distro should be fine) and don't leave it running when you do not need it.
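
If you do want the dedicated-user setup, a rough sketch looks like this (the user name and paths are made up, and the systemd sandboxing properties are optional hardening you may need to tweak):

   # create an unprivileged system user with no login shell
   sudo useradd --system --create-home --shell /usr/sbin/nologin torrent
   # run the daemon as that user, with the rest of the filesystem read-only
   sudo systemd-run --uid=torrent \
       --property=ProtectSystem=strict \
       --property=ReadWritePaths=/home/torrent \
       /usr/bin/transmission-daemon -f
   # or just use the Flatpak build, which ships with its own sandbox
   flatpak install flathub com.transmissionbt.Transmission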

[–] CasualTee 19 points 2 months ago* (last edited 2 months ago) (2 children)

the common practice is to relax the dependencies

I found this a bit disturbing

I find it funny that, now that this is Rust, it is suddenly an issue.

I have not delved into packaging in a long while, but I remember that this was already the case for C programs. You need to link against libfoo? It had better work with the one the distribution ships with. What do you mean you have not tested all distributions? You had better have some tests to catch those elusive ABI/API breakages. And then you have to rely on user-reported errors to figure out that there is an issue.

On one hand, package maintainers tend to take full ownership and will themselves investigate issues that look like integration problems. On the other hand, your program is in a buggy or non-working state until that's sorted.

And the usual solutions are frowned upon. Vendoring the dependencies or static linking? Are you crazy? You're not the one paying for bandwidth and storage. Which is a valid concern, but it just means we reached a stalemate.

Which is now being broken by

  • slower-moving C/C++ projects (though the newer C++ standards did make some waves a few years back), which means that even Debian is likely to have a "recent" enough version of your dependencies.
  • Flatpak and the like, which vendor everything and the kitchen sink
  • newer languages that link statically by default (and some distributions being OK with it)

In other words, we never figured out a proper solution for C projects that end up linking against a different minor version than the one the developer tested.
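
You can see the difference directly on the binaries (illustrative commands, program names made up):

   # a dynamically linked C program picks up whatever libfoo the distro ships
   ldd /usr/bin/some-c-program | grep libfoo
   # the versioned symbols it expects from its shared libraries
   objdump -T /usr/bin/some-c-program | grep GLIBC
   # a Rust binary built with cargo typically only links libc and friends dynamically
   cargo build --release && ldd target/release/some-rust-program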

Well, /rant I guess. The point I'm raising does not seem to be the only reason, and maybe far from the main one, why bcachefs-tools is now orphaned. But I've seen very dubious arguments used to push back against Rust adoption. I feel like people have forgotten where we came from. And while there is no reason to go back per se, any new language that integrates this deep into the system will face similar challenges.

[–] CasualTee 2 points 3 months ago (1 children)

I don't think that's the issue. As said in the article, the researchers found the flaw by reading the architecture documentation. So the flaw is in the design of the API the operating system uses to configure the CPU and related resources. This API is public (though not open source) so that operating system vendors can do their job. It usually comes with examples and pseudocode describing how some operations work. Here is an example (PDF).

Knowing how this feature is actually implemented in hardware (if the hardware were open source) would not have helped much. I would argue you would be one level too low to properly understand the consequences of the implementation.

From the vague description in the article, it actually looks like a Meltdown- or Spectre-like issue where some code gets executed with inappropriate privileges. Such issues are inherent to complex designs, and no amount of open source will save you there. We need a cultural and maybe a paradigm shift in how we design CPUs to fully address those issues.

[–] CasualTee 4 points 3 months ago

It would tend towards centralisation just because of the popularity of certain posters/instances and how scale-free networks behave when they’re not handled another way.

Ah, I get you. That's true.

[–] CasualTee 2 points 3 months ago (2 children)

making the place less equal, more of a broadcast medium, and less accessible to unconnected individuals and small groups.

I do not think it is a very good analogy. I do not see how this would turn into a broadcast medium. Though I do agree it can feel less accessible and there is a risk of building echo chambers.

How does an instance get into one of these archipelagos if they use allowlists?

By reaching out, I would say. It's most likely a death sentence for one-person instances. Which is not ideal. On the other hand, I've seen people managing their own instance give up on the idea when they realized how little control they have over what gets replicated on their instance and how much work is required to moderate replies and such. In short, the tooling is not quite there.

[–] CasualTee 5 points 3 months ago

I think both models (i.e. allowlist/blocklist) have their own perks and drawbacks and are all necessary for a healthy and enjoyable internet.

I would tend to agree. I think both methods have their merits. Though ideally I'd rather have most instances use a blocklist model. It is less cumbersome for the average user and it achieves (in my opinion) one of the fediverse's goals: an online identity not tied to an instance, one you can easily migrate (including comments, follows, DMs, ...) if needed.

But the blocklist model is too hard to maintain at this time. There are various initiatives trying to make it work, such as Fediseer, and it might be good enough for most. But I think it's a trap we should not fall into. On the fediverse, "good enough for most" is not good enough.

Now that people are fleeing to the Fediverse, we’re just gathering our tribe - and this is a natural phenomenon.

I think there is indeed something of that effect going on as well. But I do not think it warrants a move to allowlists by itself.

I think the move to allowlists is driven by the fact that building a safe space for "minorities" is hard. The tools to alleviate issues such as harassment and bigotry are not sufficient at this time to keep those communities fully open.

Which is a shame, as I think the best way to fight those issues, as a society, is to have people express themselves and have healthy conversations on issues that are rarely brought up.

But we are not entirely giving that up by moving to an archipelago model. It just means that individuals would have multiple accounts, on different archipelagos. The downside is that it makes the fediverse less approachable for the average person.

[–] CasualTee 4 points 3 months ago (2 children)

I think the current technical limitations push us toward this archipelago model.

The thing is, bigotry and racism, to name only two, will exist on any social media, on any platform where anyone is free to post something. And since those are societal issues, I don't think it is up to the fediverse to solve them. Not all by itself, by any means.

What the fediverse can solve, however, is allowing instances to protect themselves and their members from such phenomena. And my limited understanding, as a simple user, is that it's not possible right now. Not on Lemmy nor on Mastodon, if I trust the recent communications around moderation and instance blocking. Not without resorting to allowlists.

This is annoying to admit because it goes against the spirit of the fediverse. But the archipelago model is the only sane short-term solution IMO. And it will stay that way until the moderation tools make a leap and allow some way to share the load between instances and even between users.

[–] CasualTee 12 points 3 months ago (1 children)

What a shit show. And if it is confirmed that laptop CPUs are also affected, even if to a lesser extent, AMD will be the only option for consumer hardware in the coming couple of years. Thankfully, Qualcomm entered the scene recently, which should stir up the competition and prevent AMD from resting on its laurels.

[–] CasualTee 13 points 4 months ago

Enable permissions for KMS capture.

Warning

Capture of most Wayland-based desktop environments will fail unless this step is performed.

Note

cap_sys_admin may as well be root, except you don’t need to be root to run it. It is necessary to allow Sunshine to use KMS capture.

Enable

   sudo setcap cap_sys_admin+p $(readlink -f $(which sunshine))

Disable (for Xorg/X11 only)

   sudo setcap -r $(readlink -f $(which sunshine))

Their install instructions are pretty clear to me. The actual instruction is to run

sudo setcap cap_sys_admin+p $(readlink -f $(which sunshine))

This is vaguely equivalent to setting the setuid bit on programs such as sudo, which lets you run them as root, except that the program does not need to be owned by root. There are also some other subtleties, but as they say, it might as well be the same as running the program directly as root. For the exact details, see https://www.man7.org/linux/man-pages/man7/capabilities.7.html and look for CAP_SYS_ADMIN.

In other words, the command gives all powers to the binary. Which is why it can capture everything.
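
If you want to double-check what was granted, the usual tools work (the hex value below is just bit 21, i.e. cap_sys_admin):

   # show the file capabilities attached to the binary
   getcap "$(readlink -f "$(which sunshine)")"
   # decode the capability bit to confirm what it means
   capsh --decode=0000000000200000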

Using KMS capture seems way overkill for the task, I would say. But maybe the Wayland protocol was not there yet when this came around, or they need every bit of performance they can get. Given the project description, I would guess the latter, as a cloud provider would dedicate a machine per user and then wipe and reinstall it between sessions.

75 | submitted 4 months ago by CasualTee to c/technology

It looks like it will require a manual review process for now but it could be automated down the line.

[–] CasualTee 2 points 5 months ago

This looks like one of those WireGuard-based solutions like Tailscale or Netbird, though I'm not sure they are using WireGuard here. They all use a public relay for NAT penetration as well as client discovery and, in some instances when NAT penetration fails, for relaying traffic. From the usage, this seems to be the case here as well:

Share the local Minecraft server:

$ holesail --live 25565 --connector "holesailMCServer420"

On other computer(s):

$ holesail "holesailMCServer420"

So this would register "holesailMCServer420" on their relay server. The clients could then join this network just by knowing its name, and the relay would help them reach the host of the Minecraft server. I'm just extrapolating from the above commands though. They could be using a DHT for client discovery. But I expect they'd need some form of relay for NAT penetration at the very least.

As for exposing your local network securely, WireGuard-based solutions let you change the routing table of the peers as well as the DNS server they use, so that domain names can be assigned to IPs only reachable from within another local network. In that case it works very much like a VPN, except that the connection to the VPN gateway is established through a P2P protocol rather than through a service directly exposed to the internet.
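
In plain WireGuard terms that boils down to a peer config along these lines (purely a sketch: every name, key and address below is made up, and holesail may do something entirely different under the hood):

   # /etc/wireguard/wg0.conf on the client
   [Interface]
   PrivateKey = <client-private-key>
   Address = 10.8.0.2/24
   DNS = 10.8.0.1                 # resolve the remote LAN's names through the tunnel

   [Peer]
   PublicKey = <gateway-public-key>
   Endpoint = gateway.example.org:51820
   AllowedIPs = 192.168.1.0/24    # route only the remote LAN through the tunnel
   PersistentKeepalive = 25       # keeps the NAT mapping open

Bring it up with "wg-quick up wg0" and traffic to 192.168.1.x gets routed through the tunnel, with names resolved by the remote side's DNS.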

Though in the case of holesail, I have serious doubts about the "securely" part, as no authentication seems required to join a network: you just need to know its name. And there is no indication that choosing a fully random name is enough.

 

Can't be worse than CMake, can it?

 

While not a major breakthrough in terms of computing power, it's crazy to see that a CPU can have more cache than desktop PCs had hard drive space in the late 90s.
