merc

joined 1 year ago
[–] merc@sh.itjust.works 4 points 1 year ago (1 children)

Sure it is; you're forgetting the felony indictments in New York.

[–] merc@sh.itjust.works 15 points 1 year ago (1 children)

Whatever happens to the Instant Pot brand, I hope that computerized cooking is here to stay.

After using things like the "keep warm" function, the "sauté" function that shuts off if things get too hot, etc., cooking on the stove seems primitive. How often do you want to heat something until it boils, and then lower the heat so it simmers? Why can't the stove notice the boiling and lower the heat?

Instead of recipes saying that something should be fried at high heat, give a specific temperature and have a smart stove hit it and maintain it. Instead of setting a timer to remind you to turn down the heat after 20 minutes, tell the stove to do it.
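None of this needs exotic hardware; it's basically a thermostat with a schedule. A minimal sketch, with every device name (`burner`, `probe`) made up:

```python
import time

BOIL_C = 99      # treat anything near 100 °C as boiling
SIMMER_C = 93    # hold just below a boil

def boil_then_simmer(burner, probe):
    """Hypothetical control loop: full power until the pot boils,
    then back off and hold a simmer."""
    burner.set_power(1.0)                 # boil phase
    while probe.read_celsius() < BOIL_C:
        time.sleep(1)
    while True:                           # simmer phase: simple bang-bang control
        burner.set_power(0.2 if probe.read_celsius() >= SIMMER_C else 0.4)
        time.sleep(5)
```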

[–] merc@sh.itjust.works 82 points 1 year ago (16 children)

The average US president has been indicted 1.54 times. (Trump is the only US president ever to be indicted.)
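(Presumably the arithmetic: Trump's 34 New York felony counts plus 37 federal counts made 71 at the time, and 71 ÷ 46 presidencies ≈ 1.54.)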

Shamelessly stolen from someone on Mastodon (unfortunately I can't find the toot).

[–] merc@sh.itjust.works 3 points 1 year ago (1 children)

Arsenal's transfer records according to transfermarkt:

| Rank | Name | Amount | Season |
|-----:|------|-------:|--------|
| 1 | Nicolas Pépé | €80m | 19/20 |
| 2 | Pierre-Emerick Aubameyang | €64m | 17/18 |
| 3 | Benjamin White | €59m | 21/22 |
| 4 | Alexandre Lacazette | €53m | 17/18 |
| 5 | Gabriel Jesus | €52m | 22/23 |
| 6 | Thomas Partey | €50m | 20/21 |
| 7 | Mesut Özil | €50m | 13/14 |
| 8 | Granit Xhaka | €45m | 16/17 |
| 9 | Alexis Sánchez | €43m | 14/15 |
| 10 | Shkodran Mustafi | €41m | 16/17 |

It's in euros because transfermarkt records fees in euros in its database, even though much of the spending was reported in pounds at the time. Converting to pounds would have been tricky with inflation and exchange-rate changes over the years.

So, Pépé was the most expensive player Arsenal ever bought, but was he the biggest flop from the top 10?

Who do you think was the best and worst deal from that top 10 list?

It's also interesting to see the transfer-fee inflation. When Arsenal bought Özil, he set the club's transfer record, and €50m was a lot of money. Just 3 years later, Mustafi cost €41m, and while that was a lot of money to pay for Mustafi, it didn't register as massive transfer spending the way €50m had just a few years earlier.

[–] merc@sh.itjust.works 2 points 1 year ago (1 children)

Thanks for the reply. Do you know of any Fediverse community for people into things like monitoring, paging, alerts and occasional sleepless nights?

[–] merc@sh.itjust.works 9 points 1 year ago (3 children)

My comment as someone who used to do this professionally:

4 golden signals:

  1. Latency (how long the transaction / query / function takes)
  2. Traffic (queries per second)
  3. Errors (errors per second)
  4. Saturation / Fullness (how close you are to maximum capacity. This can be I/O: how close to maximum bandwidth; memory: how close to running out of RAM; threads: how many serving threads are in use out of the total thread pool; CPU: how much free capacity remains)

Don't veer too far from measuring those key things. You might be able to get many other rates and values, but often they're derived from the key signals, and you'd be better off monitoring one of the golden signals instead.
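As a rough sketch of what that instrumentation can look like in practice (assuming a Prometheus-style setup via the Python `prometheus_client` library; the metric names and `do_work()` are made up):

```python
from prometheus_client import Counter, Gauge, Histogram, start_http_server

# Latency: a histogram gives you percentiles later, not just an average.
REQUEST_LATENCY = Histogram("app_request_latency_seconds",
                            "Time spent handling one request")
# Traffic and errors: counters; rate() over them yields QPS and errors/sec.
REQUESTS = Counter("app_requests_total", "Requests handled")
ERRORS = Counter("app_request_errors_total", "Requests that failed")
# Saturation: e.g. serving threads in use out of the total pool.
THREADS_IN_USE = Gauge("app_threads_in_use", "Busy serving threads")

def handle_request(req):
    REQUESTS.inc()
    THREADS_IN_USE.inc()              # crude saturation signal
    try:
        with REQUEST_LATENCY.time():  # records the duration on exit
            return do_work(req)       # hypothetical application logic
    except Exception:
        ERRORS.inc()
        raise
    finally:
        THREADS_IN_USE.dec()

start_http_server(8000)               # exposes /metrics for the scraper
```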

3 types of alerts:

  1. Pages, as in "Paging Doctor Frankenstein". High priority that should interrupt someone and get them to check it out immediately.
  2. Tickets / Bugs. These should be filed automatically in some kind of bug-reporting / ticketing system, one where someone can look up the day's or the week's bugs, see what needs investigation, track it, and ultimately resolve it. This level of alert is for things serious enough that someone should be periodically checking the ticket system, but not important enough that anyone should drop what they're doing and look right away.
  3. Logs. Write info to storage somewhere and keep it around in case it's useful for someone when they're debugging. But, don't page anyone or create a ticket, just keep the info in case someone looks later. Graphs are basically a form of logs, just visual.

Tempting as it may be, never alert by email. Emails just get ignored. If it's high priority, page. If it's not that high priority, file a bug / ticket.
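Spelled out as a sketch (the three destination functions are stubs standing in for whatever pager, ticketing system, and log pipeline you actually use):

```python
from enum import Enum

class Severity(Enum):
    PAGE = 1    # interrupt a human right now
    TICKET = 2  # investigate this day / this week
    LOG = 3     # keep for later debugging; tell no one

def page_oncall(msg: str) -> None: print(f"PAGE: {msg}")    # stub: your pager here
def file_ticket(msg: str) -> None: print(f"TICKET: {msg}")  # stub: your tracker here
def log_event(msg: str) -> None: print(f"LOG: {msg}")       # stub: real logging here

def route_alert(severity: Severity, message: str) -> None:
    """Route every alert to exactly one destination -- never to email."""
    if severity is Severity.PAGE:
        page_oncall(message)
    elif severity is Severity.TICKET:
        file_ticket(message)
    else:
        log_event(message)

route_alert(Severity.TICKET, "error rate above 1% for 10 minutes")
```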

For latency, use distributions: 50th percentile latency, 90th percentile latency, 99th percentile latency, etc. For the 50th percentile, half the users see that latency or better; for the 99th percentile, 99% of users see that latency or better and 1% see worse. The reason for this is that an average latency is not very useful; what matters are the outliers. If 99% of operations complete in 500 ms but 1% take 50 s, the average works out to only about 1 s, which looks merely sluggish while hiding the fact that some operations take nearly a minute, and that 1% can be a sign of something, either breakage or abuse.
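A quick way to see that, as a sketch using only the standard library (the traffic mix mirrors the example above):

```python
import statistics

# 990 operations at 500 ms, 10 at 50 s: the 99% / 1% split from above.
latencies_ms = [500] * 990 + [50_000] * 10

mean = statistics.fmean(latencies_ms)
cuts = statistics.quantiles(latencies_ms, n=100)  # 1st..99th percentile cut points
p50, p90, p99 = cuts[49], cuts[89], cuts[98]

print(f"mean={mean:.0f}ms  p50={p50:.0f}ms  p90={p90:.0f}ms  p99={p99:.0f}ms")
# mean is ~995 ms -- double the typical experience, yet it still hides
# the 1% of operations that take nearly a minute. p99 makes them visible.
```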

Black-box and white-box monitoring are both important.

White-box monitoring is monitoring as someone who knows the internals of the system: say, the latency of the GetFriendsGraph() call. As someone who knows the code, you know that call is key to performance, that it has a DB as a backend but a memory cache in front of it, and so on.
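Continuing the earlier sketch, white-box instrumentation hangs a metric directly on the call you know matters (`GetFriendsGraph`, the cache, and the DB handle are all hypothetical here):

```python
from prometheus_client import Histogram

GET_FRIENDS_LATENCY = Histogram("get_friends_graph_latency_seconds",
                                "Latency of GetFriendsGraph()")

@GET_FRIENDS_LATENCY.time()       # time every call to the hot path
def GetFriendsGraph(user_id):
    friends = cache.get(user_id)             # hypothetical memory cache
    if friends is None:
        friends = db.query_friends(user_id)  # hypothetical DB backend
        cache.set(user_id, friends)
    return friends
```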

Black-box monitoring views the system as a black box whose internals you pretend you don't understand. So, instead of monitoring GetFriendsGraph(), you monitor how long it takes to respond to loading http://friends.example.org/list/get_buddies.jsp or whatever. That includes time doing the DNS lookup, time going through the load balancer, querying the frontend, querying the backend(s), and so on. When this kind of monitor experiences errors, you don't know the cause: it could be broken DNS, broken load balancers, a DB crash. What it does tell you is that the error is user-visible. White-box monitoring may tell you that the latency on a certain call is through the roof, but the black-box monitor can tell you whether that issue is actually affecting users.
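A black-box probe can be as simple as timing the whole request from the outside; a sketch with minimal error handling:

```python
import time
import urllib.request

URL = "http://friends.example.org/list/get_buddies.jsp"

def probe(url: str, timeout_s: float = 5.0) -> tuple[bool, float]:
    """One end-to-end fetch: DNS, load balancer, frontend, backends, all of it.
    Returns (success, latency in seconds)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    return ok, time.monotonic() - start

ok, latency = probe(URL)
print(f"up={ok} latency={latency * 1000:.0f}ms")
```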

In terms of graphing (say, Grafana or whatever), start by graphing the 4 golden signals for whatever seems to be important. Then, treat the graphs like logs: don't stare at them all the time; refer back to them when something higher priority (a ticket or a page) indicates that something needs investigation. If additional graphs would have helped the investigation, add more. But don't add graphs just to have things to look at; too many graphs just become visual noise.

[–] merc@sh.itjust.works 2 points 1 year ago (1 children)

Oh. So that's what it feels like when your brains leak out of your ear. Neat, I guess? Also, ow.

[–] merc@sh.itjust.works 7 points 1 year ago (1 children)

I really like it, but I'm concerned about rough times ahead.

Running instances is hard, thankless, but necessary work. A for-profit company like Reddit can afford to pay engineers to do it. A lot of open-source / free software survives because people are generous and donate their time, creativity, expertise, and often even money to keep it running. But when it's a hobby, not a job, it often gets to a point where people have to think of their own sanity and step away.

The fediverse design seems well suited to handle that without major disruption, but there will definitely be some disruption.

I'm also hoping that people are tolerant of design quirks. Design by committee is often seen as one of the worst ways to do things, and FOSS is nothing but committees. Reddit's design obviously influenced Lemmy (as Slashdot influenced Reddit, and so on). And while I wasn't a fan of the new Reddit design, at least it was a unified view. I'm incredibly impressed at how smooth Lemmy has been so far, but I expect it's just a matter of time before there are controversial choices about which new features to add, how to expose them, what defaults to choose, and so on. I hope people are tolerant of the churn that might cause.

Basically, I just really hope that whatever controversies and rough periods lie ahead, the communities I care about choose to weather the storm and stick around. If we can survive that, social media that isn't owned by any company and isn't part of the "surveillance capitalism" world is very promising.

[–] merc@sh.itjust.works 3 points 1 year ago (3 children)

Next, make one that's designed to be easily scraped, so machines can learn from it: Machines Learning Machine Learning.

[–] merc@sh.itjust.works 4 points 1 year ago

Useful, thanks.

[–] merc@sh.itjust.works 2 points 1 year ago

The problem is that from day 1, Reddit was a Y Combinator project, meaning it was VC-backed, and the VCs want a big money-making exit. It looks like social media in general, and link-aggregator websites in particular, are not great money-making ventures. A private company might just shrug and accept a small but steady profit, but VCs want a big splashy money-making event, and they're willing to risk killing the company to achieve it.

[–] merc@sh.itjust.works 1 points 1 year ago

Sounds good to me.
