Well, you see, my parents and grandparents don't fully understand the concept of ads, especially in the case of YouTube Shorts. After a few instances of them sharing the ads, thinking they were regular content, I just got the family plan.
beigeoat
Sorry, should've read them.
For future reference, can I copy-paste the full content with a reference link to my site at the bottom of the post, or just the content itself?
I know.
Put it this way: I try to be objective, but at the end of the day I am somewhat subjective.
I think you are misunderstanding something: you don't need a ROCm kernel. What you need is the rocm-opencl-runtime.
This video is a year old, but should be enough to get you started: https://www.youtube.com/watch?v=d_CgaHyA_n4
You can get it to work on Arch; ROCm is in the repos. I suggest you use a container if you proceed, though.
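A minimal sketch of what that setup might look like, assuming Arch with Docker installed; exact package names vary between ROCm releases, and `rocm/pytorch:latest` is just one commonly used official image:

```shell
# Host side: the ROCm OpenCL userspace is in the Arch repos
# (check the repos for the current package set for your release).
sudo pacman -S rocm-opencl-runtime

# Containerised route: pass the GPU device nodes through to a
# ROCm image so the host install stays minimal.
docker run -it --rm \
  --device=/dev/kfd --device=/dev/dri \
  --security-opt seccomp=unconfined \
  rocm/pytorch:latest
```

The `--device=/dev/kfd --device=/dev/dri` flags are what actually expose the AMD GPU to the container; without them the container sees no device.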
At least check what the conflict is about...
While I personally don't like the BJP, and they are at fault to some extent, this conflict has nothing to do with Hindu nationalists. This is about ethnic groups; people from different religions can belong to the same ethnic group.
There had been rising tension between the two communities for the past few years, which sparked into a full-blown conflict when the High Court of the state ordered the state government to make a decision regarding the reservation status of the majority community.
The state government didn't end up making a decision in the given timeframe, but both communities were up in arms about it, the majority community in favour and the minority community against. This resulted in small skirmishes followed by a full-on conflict.
While I didn't benchmark previously, I remember quite vividly that the speeds were much slower 7-8 months ago.
I'll just drop this here. The whole thing is pretty dumb. They probably did this because the opposition parties formed an alliance called the INDIA Alliance.
I have used it mainly for DreamBooth, textual inversion and hypernetworks, just for Stable Diffusion. For models, I have used the base Stable Diffusion models, Waifu Diffusion, DreamShaper, Anything v3 and a few others.
The 0.79 USD is charged only for the time you use it; if you turn off the container you are charged for storage only. So it is not run 24/7, only when you use it. Also, have you seen the price of those GPUs? That $568/month is a bargain if the GPU won't be in continuous use for a period of years.
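A quick sanity check of those figures; the $0.79/hr rate is from the comment, and the usage patterns below are just illustrative assumptions:

```python
# Compare always-on cost vs. pay-per-use at the quoted runpod rate.
HOURLY_RATE = 0.79  # USD per A100 per hour (figure quoted above)

def monthly_cost(hours_per_day: float, days: int = 30) -> float:
    """USD cost of one A100 for the given daily usage over a month."""
    return HOURLY_RATE * hours_per_day * days

full_time = monthly_cost(24)  # always-on: this is where ~$568 comes from
casual = monthly_cost(2)      # a hypothetical couple of hours a day

print(f"24/7: ${full_time:.2f}/month, 2h/day: ${casual:.2f}/month")
```

Running it shows the gap: around $568.80/month always-on versus about $47.40/month at two hours a day, which is the point about only paying while the container is up.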
Another important distinction is that LLMs are a whole different beast; running them, even when renting, isn't justifiable unless you have a large number of paying users. For the really good versions of LLMs with large parameter counts, you need more than just a good GPU: you need at least ten NVIDIA A100 80GB cards (Meta's needs 16: https://blog.apnic.net/2023/08/10/large-language-models-the-hardware-connection/) running for the model to work. This is where the price to pirate and run it yourself cannot be justified. It would be cheaper to pay for a closed LLM than to run a pirated instance.
The point about GPUs is pretty dumb; you can rent a stack of A100s pretty cheaply for a few hours. I have done it a few times now; on runpod it's 0.79 USD per hour per A100.
On the other hand, the freely available models are really great, and personally I haven't needed the closed-source ones.
I see it
To satisfy you: