ShadowAether


Yes, users only see communities from other instances in the All feed if another user from their own instance has subscribed to them (or, I suppose, ones they subscribed to themselves, but usually people aren't surprised by that)

Tbh I think an instance not enforcing tagging content as nsfw is probably very strong grounds for blocking imo

Yep, I've noticed their sync rate has been really slow

A critical section protects a shared variable, so in any case where you have multiple threads writing to the same variable you should have a critical section; it's very general. I don't have much experience with reduction, but it seems geared towards loops where every iteration performs the same operation as part of a larger computation, takes approximately the same amount of time to complete, and is expected to start and end together with the others. Something like parallelizing an integral by splitting it into ranges would be simpler with reduction. Also, if the threads need to both read and write the global, it seems like that would need a critical section.
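As a rough illustration of the difference, here is a sketch using Python threads rather than OpenMP pragmas (the integrand, thread count, and names are arbitrary choices for the demo): the critical-section version serializes every update to one shared total, while the reduction-style version gives each thread a private partial sum over its range of the integral and only combines them at the end.

```python
import threading

# Illustrative only: a lock-protected "critical section" vs. a "reduction"
# (per-thread partial results combined at the end), shown with Python threads.
# The integrand, N and THREADS are arbitrary choices for the demo.

def f(x):
    return x * x                      # toy integrand: integral of x^2 over [0, 1] is 1/3

N, THREADS = 100_000, 4
dx = 1.0 / N

# critical-section style: every contribution updates one shared variable
total = 0.0
lock = threading.Lock()

def work_critical(start, stop):
    global total
    for i in range(start, stop):
        contrib = f((i + 0.5) * dx) * dx
        with lock:                    # the critical section: one thread at a time
            total += contrib

# reduction style: each thread accumulates privately; results are combined once at the end
partials = [0.0] * THREADS

def work_reduction(tid, start, stop):
    acc = 0.0
    for i in range(start, stop):
        acc += f((i + 0.5) * dx) * dx
    partials[tid] = acc

def run(target, pass_tid):
    chunk = N // THREADS
    threads = []
    for t in range(THREADS):
        bounds = (t * chunk, (t + 1) * chunk)
        args = (t, *bounds) if pass_tid else bounds
        threads.append(threading.Thread(target=target, args=args))
    for th in threads:
        th.start()
    for th in threads:
        th.join()

run(work_critical, pass_tid=False)
run(work_reduction, pass_tid=True)
print(total, sum(partials))           # both close to 1/3; the reduction never touches the lock
```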

I've been making communities here instead of writing the paper I should be working on


It's not like you could have polled users that hadn't joined yet anyway. Maybe the blocked list could be made more visible so people could be informed early on, before they get too invested in their account?


Sorry wrong person

From what I've seen people saying, it's because the users have a reputation for getting overly involved in other instances (skewing voting in news communities), which seems like a recipe for conflict. That kind of thing can have a big impact on a new/small instance; maybe if this one were bigger with more mods it wouldn't be such a problem. I think some separation of politically extreme communities might be required in general, because they mix like oil and fire and then spill everywhere.

More discussion: https://www.reddit.com/r/Lemmy/comments/142h1a5/choosing_an_instance_and_my_issues_with_lemmygrad/

 

Join us here: !rwby@sh.itjust.works

With the effective implementation of AI, it could potentially boost the nation’s GDP by 50% or more in a short time.

That's a bold claim; I'd like to see a source for that

Original answer:

It's hard to give you an answer to your first question with just those graphs because it looks like one run on one dataset split. To address this part specifically:

My current interpretation is that the validation loss is slowly increasing, so does that mean that it's useless to train further? Or should I rather let it train further because the validation accuracy seems to sometimes jump up a little bit?

The overall trend is what's important, not small variations. Try to imagine the validation loss curve smoothed out; you don't want to train beyond the minimum of that curve. Technically, overfitting is indicated by a significant difference between the training and test loss.
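To make "imagine the curve smoothed out" concrete, here is a tiny sketch: the loss numbers are invented and the window size is an arbitrary choice, but it shows smoothing the per-epoch validation loss with a moving average and reporting the epoch at its minimum, i.e. roughly where further training stops helping.

```python
import numpy as np

# invented per-epoch validation losses, just to demonstrate the idea
val_loss = np.array([0.90, 0.62, 0.48, 0.41, 0.38, 0.37, 0.36, 0.37,
                     0.36, 0.38, 0.37, 0.39, 0.40, 0.41, 0.43, 0.44])

k = 3                                            # moving-average window, an arbitrary choice
smoothed = np.convolve(val_loss, np.ones(k) / k, mode="valid")
best_epoch = int(np.argmin(smoothed)) + k // 2   # recenter the trimmed window

print(f"smoothed validation loss bottoms out around epoch {best_epoch}; "
      f"training much beyond that point mostly adds overfitting")
```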

 

Not OP. This question is being reposted to preserve technical content removed from elsewhere. Feel free to add your own answers/discussion.

Original question:

The more I read, the more confused I am about how to interpret the validation and training loss graphs, so I would like to ask for some guidance on how to interpret the values in the picture here. I am training a basic UNet architecture. I am now wondering whether I need a more complex network model, or whether I just need more data to improve the accuracy.

Historical note: I had the issue where validation loss was exploding after a few epochs, but I added dropout layers and that seems to have fixed the situation.

My current interpretation is that the validation loss is slowly increasing, so does that mean that it's useless to train further? Or should I rather let it train further because the validation accuracy seems to sometimes jump up a little bit?


It would be only Quebec/NL/Labrador. A very small amount of salmon compared to the rest of the market, not something you would find in Walmart. Conservation groups have been calling for tighter restrictions for years, and it might be that they're only giving out licences to Indigenous or recreational/sport fishers at the moment. In Canada, we have special rules for Indigenous peoples that let them basically ignore certain rules/limits on hunting and fishing.

Original answer:

Short answer: Use a convolutional autoencoder (add convolution layers to the exterior of the autoencoder)

Long answer: From my experience with time series and autoencoders, it is best to do as much feature extraction as possible outside the autoencoder, as it's more difficult to train it to do both the feature extraction and the dimensionality reduction. Consider applying an FFT or wavelet transform to your data first. Even if it doesn't extract your pattern exactly, it helps in many applications. After transforming the data, train the convolutional autoencoder on the features; then, to evaluate your model, reverse the transformation and compare with the original signal.
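A minimal sketch of that pipeline in PyTorch (FFT features in, 1D convolutional autoencoder, inverse FFT for evaluation). The window length, layer sizes, and the magnitude/phase channel layout are illustrative assumptions, not anything from the original post; a wavelet transform would slot into the same `to_features`/`from_features` spots.

```python
import numpy as np
import torch
import torch.nn as nn

WINDOW = 256                  # assumed length of one repeating pattern
N_BINS = WINDOW // 2 + 1      # length of the real FFT output

def to_features(x):
    """Real FFT of each window; magnitude and phase stacked as 2 channels."""
    spec = np.fft.rfft(x, axis=-1)
    return np.stack([np.abs(spec), np.angle(spec)], axis=1).astype(np.float32)

def from_features(feat):
    """Invert the transform so reconstructions can be compared to the raw signal."""
    mag, phase = feat[:, 0], feat[:, 1]
    return np.fft.irfft(mag * np.exp(1j * phase), n=WINDOW, axis=-1)

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(16, 8, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(8, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.ConvTranspose1d(16, 2, kernel_size=5, stride=2, padding=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# toy data: noisy repetitions of the same pattern, shape (num_windows, WINDOW)
rng = np.random.default_rng(0)
pattern = np.sin(np.linspace(0, 4 * np.pi, WINDOW))
windows = pattern + 0.05 * rng.standard_normal((512, WINDOW))

feats = torch.from_numpy(to_features(windows))
model = ConvAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(20):                        # a few full-batch epochs, just to show the loop
    opt.zero_grad()
    loss = loss_fn(model(feats), feats)
    loss.backward()
    opt.step()

# evaluate in the original signal domain, as suggested above
recon = from_features(model(feats).detach().numpy())
print("reconstruction MSE on the raw signal:", float(np.mean((recon - windows) ** 2)))
```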

 

Not OP. This question is being reposted to preserve technical content removed from elsewhere. Feel free to add your own answers/discussion.

Original question:

I'm training an autoencoder on a time series that consists of repeating patterns (because the same process is repeated again and again). If I then use this autoencoder to reconstruct another one of these patterns, I expect the reconstruction to be worse if the pattern is different from the ones it has been trained on.

Is the fact that the time series consists of repeating patterns something that needs to be considered in any way for training or data preprocessing? I am currently using this on the raw channels.

Thank you.

 

cross-posted from: https://sh.itjust.works/post/58054

Some seem better than others but a new shampoo & conditioner bar I bought seems much worse.

 

Not OP. This question is being reposted to preserve technical content removed from elsewhere. Feel free to add your own answers/discussion.

Original question:

I got a dataset from high-performance liquid chromatography (HPLC); because HPLC is expensive, we only got about 39 data points. Each data point has 9 dimensions, representing the concentrations of 9 different substances. I tried different networks and the accuracy is not higher than 50% (we have four classes), however KNN has an accuracy of more than 90%. I remember hearing that neural networks are not good on small datasets. Is this the reason? I have not tried SVM or other traditional machine learning models yet. Should I try them, and if yes, which one?

 

Not OP. This question is being reposted to preserve technical content removed from elsewhere. Feel free to add your own answers/discussion.

Original question:

I'm being provided a dataset with several variables in it and a success metric (1 or 0) at the end. I'm being asked to analyze the dataset and give insights on how to improve the success rate. To do this, I intend to do a thorough data analysis to study correlations and relationships. However, I also intend to run a logistic regression to confirm these correlations using the feature coefficients.

My question is: if my sole interest is understanding the most important features determining the metric, and not building a robust model, should I still split my dataset in two? What benefit do I get from splitting it? Won't my exploratory analysis lose value if I'm putting away, let's say, 20%?

Thank you for your help

 

Not OP. This question is being reposted to preserve technical content removed from elsewhere. Feel free to add your own answers/discussion.

Original question: When training a model for image classification, it is common to use pooling layers to reduce the dimensionality, as we only care about the final node values corresponding to the categorical probabilities. In the realm of VAEs, on the other hand, where we are attempting to reduce the dimensionality and subsequently increase it again, I have rarely seen pooling layers being used. Is it normal to use pooling layers in VAEs? If not, what's the intuition here? Is it because of their injective nature?

 

Icebreaker post! What's something related to ML that you are in the process of learning more about or just learned?

 

cross-posted from: https://sh.itjust.works/post/48227

Presented on Wednesday, June 21 at 12:00 PM ET/16:00 UTC by Daniel Zingaro, Associate Teaching Professor at the University of Toronto, and Leo Porter, Associate Professor of Computer Science and Engineering at UC San Diego. Michelle Craig, Professor of Computer Science at the University of Toronto and member of the ACM Education Board, will moderate the questions and answers session following the talk.

 

Have a cool new project idea that involves machine learning but not sure where to start? Years of experience but your new model is just not working out? Ask stupid questions (or hard ones) about machine learning here! Join us at learnmachinelearning@sh.itjust.works

How are we different from artificial_intel@lemmy.ml? This community is more focused on helping others with the math, programming and implementation behind machine learning applications (more general than just AI). Looking at the cool stuff you can do with AI is always fun but it's good to have a place to talk about the problems and challenges.

 

Have a community that you want to recommend or promote on here? Just made a community and want to let everyone know? Write it in the comments!

Want to recommend a community? Or announce a community you've created? Share it here!

Remember to link non-instance communities like this, so it will take users to the right place without taking them off their instance:

[/c/main@sh.itjust.works](/c/main@sh.itjust.works)

/c/main@sh.itjust.works
