Tbh I think an instance not enforcing NSFW tags on content is probably very strong grounds for blocking imo
Yep I notice their sync rate has been really slow
A critical section protects a shared variable, so any time you have multiple threads writing to the same variable you should use one; it's very general. I don't have much experience with reduction, but it seems geared towards loops where every iteration performs the same operation as part of a larger computation, takes approximately the same amount of time, and is expected to start and end together. Something like parallelizing an integral by splitting it into ranges would be simpler with reduction. Also, if the threads need to both read and write the global, that seems like it would need a critical section.
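To make the distinction concrete, here's a minimal Python threading analogy (the comment reads like it's about OpenMP, but the same pattern applies): each thread integrates its own subrange privately, which is the reduction-style part, and the lock-guarded update of the shared total is the critical section. All names here are made up for illustration.

```python
import threading

def f(x):
    return x * x  # integrate f(x) = x^2 on [0, 1]; exact answer is 1/3

total = 0.0
lock = threading.Lock()

def integrate_range(start, stop, steps):
    """Midpoint-rule integral of f over [start, stop) with `steps` slices."""
    global total
    dx = (stop - start) / steps
    # Reduction-style part: each thread accumulates into its own private sum.
    partial = sum(f(start + (i + 0.5) * dx) * dx for i in range(steps))
    # Critical section: without the lock, two threads updating `total`
    # at once could lose an update.
    with lock:
        total += partial

threads = [
    threading.Thread(target=integrate_range, args=(i / 4, (i + 1) / 4, 1000))
    for i in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(total)  # approximately 1/3
```

The key point is that the lock is only needed for the one shared write at the end; keeping each thread's accumulation private is exactly what a reduction clause automates for you.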
I've been making communities here instead of writing the paper I should be working on
It’s not like you could have polled users that hadn’t joined yet anyways. Maybe the blocked list could be made more visible so people could be informed early on before they get too invested in their account?
Sorry wrong person
From what I've seen people saying, it's because the users have a reputation for getting overly involved in other instances (skewing voting in news communities), which seems like a recipe for conflict. That kind of thing can have a big impact on a new/small instance; maybe if this one were bigger with more mods it wouldn't be such a problem. I think some separation of politically extreme communities might be required in general, because they mix like oil and fire then spill everywhere.
More discussion: https://www.reddit.com/r/Lemmy/comments/142h1a5/choosing_an_instance_and_my_issues_with_lemmygrad/
With the effective implementation of AI, it could potentially boost the nation’s GDP by 50% or more in a short time.
That's a bold claim, I'd like to see a source on that
Original answer:
It's hard to give you an answer to your first question with just those graphs because it looks like one run on one dataset split. To address this part specifically:
My current interpretation is that the validation loss is slowly increasing, so does that mean that it's useless to train further? Or should I rather let it train further because the validation accuracy seems to sometimes jump up a little bit?
The overall trend is what's important, not the small variations: try to imagine the validation loss curve smoothed out, and don't train beyond the minimum of that smoothed curve. Technically, overfitting is indicated by a significant gap between training and validation loss.
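A minimal sketch of the "smoothed curve" idea: take a moving average of the per-epoch validation losses and keep the checkpoint from the epoch where the smoothed curve is lowest. The loss values below are made up for illustration.

```python
def smooth(losses, window=3):
    """Trailing moving average over up to `window` previous epochs."""
    out = []
    for i in range(len(losses)):
        chunk = losses[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical per-epoch validation losses: noisy, with a real minimum
# partway through before the curve drifts back up (overfitting).
val_losses = [1.0, 0.7, 0.55, 0.5, 0.52, 0.49, 0.53, 0.56, 0.6, 0.65]

smoothed = smooth(val_losses)
best_epoch = min(range(len(smoothed)), key=smoothed.__getitem__)
print(best_epoch)  # epoch with the lowest smoothed validation loss
```

In practice this is what early-stopping callbacks with a "patience" parameter approximate: they tolerate small upward blips and only stop once the trend has clearly turned.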
It would only be Quebec/NL/LB. A very small amount of salmon compared to the rest of the market, not something you would find in Walmart. Conservation groups have been calling for tighter restrictions for years, and it may be that they're only giving out licenses for Indigenous or recreational/sport fishing atm. In Canada, we have special rules for Indigenous people: they can basically ignore certain rules/limits on hunting and fishing.
Original answer:
Short answer: Use a convolutional autoencoder (add convolutional layers at the outer ends of the autoencoder, i.e. in the encoder and decoder).
Long answer: From my experience with time series and autoencoders, it is best to do as much feature extraction as possible outside the autoencoder, since it's harder to train one to do both the feature extraction and the dimensionality reduction. Consider applying an FFT or wavelet transform to your data first. Even if it doesn't extract your pattern exactly, it helps in many applications. After transforming the data, train the convolutional autoencoder on the features; then, to evaluate your model, reverse the transformation and compare with the original signal.
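The preprocessing step above can be sketched as follows, assuming windowed time-series data; the FFT replaces raw samples with spectral features, and the inverse transform lets you compare reconstructions against the original signal. The autoencoder itself is omitted, and `windows` is a made-up batch of sine-wave segments.

```python
import numpy as np

# Made-up data: three windows of 128 samples, each a pure sine wave.
t = np.linspace(0, 1, 128, endpoint=False)
windows = np.stack([np.sin(2 * np.pi * freq * t) for freq in (3, 5, 8)])

# Forward transform: the real FFT maps each length-128 window to 65
# complex coefficients; stack real and imaginary parts as flat features.
spectra = np.fft.rfft(windows, axis=1)
features = np.concatenate([spectra.real, spectra.imag], axis=1)

# ... train the convolutional autoencoder on `features` here ...

# Reverse transform to evaluate: rebuild complex spectra from the
# (reconstructed) features and invert back to the time domain.
n = windows.shape[1]
half = features.shape[1] // 2
reconstructed = np.fft.irfft(features[:, :half] + 1j * features[:, half:], n=n, axis=1)

# Without a lossy model in between, the round trip is exact up to
# floating-point error.
print(np.max(np.abs(reconstructed - windows)))
```

The same shape of pipeline works with a wavelet transform; the point is that the network only has to learn structure in an already-informative feature space, and evaluation still happens in the original domain.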
Yes, users only see communities from other instances in the All feed if another user from their own instance has subscribed to them (or, I suppose, if they subscribed themselves, but usually people aren't surprised about that)