chinpokomon

joined 2 years ago
[–] chinpokomon 9 points 1 year ago (1 children)

In their interactions and personal knowledge, perhaps he was. If you personally don't know Danny or anyone else involved, your only exposure is what you've heard presented and made public. If you personally knew Danny and hadn't witnessed any of these crimes yourself, you now have a conflicted view of someone who is both your friend and now guilty of 2 counts of sexual assault. While that conviction almost certainly changes your relationship going forward, it doesn't change how you thought of that individual beforehand.

Ashton and Mila were asked to write character letters describing the Danny they knew. It doesn't change the outcome of the trial, but in matters where sentencing is left to the judge's discretion, a judge will often weigh letters like these when determining what is appropriate. Is there a chance that the defendant will repeat this offense? What punishment, if any, will be restorative to the victims? How does this punishment affect everyone, including families established years afterwards? Is the defendant the same person today as they were when they committed these crimes?

These aren't matters easily decided, and therefore it isn't surprising to see character letters submitted either as part of the trial or during sentencing. If there is a pattern of behavior, then the sentence might be the maximum allowed, but if there's no clearly discernible pattern, then the sentence might be light.

I don't know all the details that were considered, but based on my knowledge from reports, I think 15 years concurrent would have been appropriate. However, I don't have all the evidence or material to make an informed decision. I don't look upon these letters as reflecting poorly on Ashton or Mila, as they were just doing what was asked of them to help give the judge the context necessary to carry out an appropriate sentence. They aren't guilty of doing anything wrong, any more than the lawyers defending a now convicted and sentenced rapist are.

[–] chinpokomon 3 points 1 year ago* (last edited 1 year ago) (1 children)

This is in NV, but what I don't know is which jurisdiction applies. The agents involved might be federal, state, or county. ~~The Ranger truck might suggest that it was federal, but the highway would have been state managed.~~

Edit: Read elsewhere that it was a Tribal Ranger truck, so yet another jurisdiction. The response was excessive, and the conduct is under review.

[–] chinpokomon 11 points 1 year ago

How does Russia want our help? Strange that they're asking the West to help with the Ukrainian drone attacks, but they have my support.

[–] chinpokomon 2 points 1 year ago (1 children)

I try to use both equally, because I’m always on the hook for picking the “doomed” standard in any 50/50 contest.

I can relate to that. It usually isn't a coin flip for me though. I'll align with one technology over another because I truly can see an advantage. That technology might be the underdog from the beginning. Consider that we're evaluating Firefish vs. Lemmy vs. Kbin, when all of them combined are the underdog compared to certain more well-established social forums. I engage with all three (and others still), because I don't know the future.

[–] chinpokomon 4 points 1 year ago

That's the same theme of a reply I made yesterday. I read the article and might have even boosted it myself because as a fediverse citizen, I'm concerned about any government agency seizing an instance like this. The "well known racist" claim is demonstrably false, because I still don't know who they are talking about nor would I know the person behind a username.

[–] chinpokomon 4 points 1 year ago

I think a human might consider the meaning of what is being said, whereas an LLM is only going to consider which token is the best one to use next. Humans might not be infallible, but they are presently better at detecting obvious BS that would slip undetected past an AI.

Maybe this is an opportunity we haven't considered. This is the chance to create a Turing CAPTCHA Test. We can't use Glorbo to do so, because it has been written, but perhaps it makes sense that there is a nonsensical code phrase people can use to identify AIs: markers intentionally added to LLM training data, phrases buried in articles written by human authors, and a challenge/response that is never written down and only passed verbally through real human-to-human interactions.

[–] chinpokomon 1 points 1 year ago

I grabbed some the last time I saw them in the store. Still waiting for them to come back.

[–] chinpokomon 5 points 1 year ago

Grandview seemed to do the best at clearly distinguishing the character 0. Is it an O, 0, I, l, or 1? Even without an example of O clearly visible in the sample text, the shape of 0 was very clear and seems like it should stand apart. Not the only reason to select a font, but it might be important to some.

[–] chinpokomon 11 points 1 year ago

Arguably it is a strength. Unless a user has used the same username and password for different instances, their credentials on one instance are shielded from exploitation across the whole network. The potential risk can only really be determined by how security was breached. If it was social engineering, then there isn't any other direct concern. If it was a vulnerability in software, then the same attack could be played out on other instances, but that's not any different from other systems, like a Linux kernel exploit.

[–] chinpokomon 1 points 1 year ago (1 children)

If a human can access your public repo and read comments posted on public forums, are they stealing your code? LLMs are just aggregators of a great many resources, and they aren't doing anything more than a biological human can already do. The LLM can do so more efficiently than a biological human, while perhaps being more prone to error, as it doesn't completely understand why something is written the way it is. As such, any current AI model is prone to errors like these, but in my experience it has been very good at organizing the broader solution.

I can give you two examples. I started by trying to find out how a .NET API call was made. I was trying to implement retry logic for a call, and I got the answer I asked for. I then realized that the AI could do more for me. I asked it to write the routine for me, and it suggested using a library which is well suited for that purpose. I asked that it rewrite it without using an external library, and it spit it out. I could have written this completely from scratch; in fact I had already come up with something similar, but I was missing the API call I was initially looking for. That said, the result actually had some parts I would have had to go back and add, so it saved me a lot of time doing something I already knew how to do.
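For anyone curious what "retry logic without an external library" amounts to, here's a minimal sketch. The original was .NET; this is a generic Python version of the same pattern, with hypothetical parameter names (`attempts`, `delay`, `backoff`), not what the AI actually produced:

```python
import time

def retry(func, attempts=3, delay=0.1, backoff=2.0, exceptions=(Exception,)):
    """Call func(), retrying on failure with exponential backoff."""
    for attempt in range(attempts):
        try:
            return func()
        except exceptions:
            if attempt == attempts - 1:
                raise  # out of retries: surface the last error
            time.sleep(delay)
            delay *= backoff  # wait longer before the next attempt
```

The whole routine is a dozen lines, which is why "rewrite it without the external library" is a reasonable ask; a dedicated library mostly adds policies like jitter and circuit breaking on top of this core loop.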

In a second case, I asked it to solve a problem which at its heart was a binary search. To validate that the answer was correct, it would need to go one extra step, but to answer the question it wasn't necessary to actually perform that last validation step. I was looking for the answer 10, but I got the AI to give me answers in the range of 9-11. It understands the basic concepts, but it still needs a biological human to validate what it generates.
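That off-by-one range (9-11 instead of 10) is exactly what the missing validation step would catch. A sketch of what I mean, with a hypothetical monotone predicate standing in for the actual problem:

```python
def first_true(lo, hi, pred):
    """Binary search: smallest n in [lo, hi] with pred(n) True.

    Assumes pred is monotone over the range (False...False True...True)."""
    left = lo
    while lo < hi:
        mid = (lo + hi) // 2
        if pred(mid):
            hi = mid      # mid works; answer is mid or earlier
        else:
            lo = mid + 1  # mid fails; answer is strictly later
    # The extra validation step: confirm the result holds and its
    # predecessor doesn't. Skipping this is how a 9 or 11 slips through.
    assert pred(lo) and (lo == left or not pred(lo - 1)), "result failed validation"
    return lo
```

With `pred = lambda n: n >= 10`, every answer in 9-11 "looks" plausible mid-search, but only 10 passes the final check.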

[–] chinpokomon 1 points 1 year ago

We have, and there are still things to solve before this is completely practical. This is still different from connecting to a mainframe over a 3270 terminal. A closer example of how this would work is forwarding X11 from a remote system, or using SSH to tunnel to a server where I've run screen. If I've connected to a GUI application running on a server or reconnected my SSH session, it matters less where I'm connecting from. Extending this concept to Windows, you wouldn't even need local storage for most needs. It won't be practical for places with poor network connectivity, but where the connection is reliable, high bandwidth, and low latency, it won't be very discernible from local use for most business applications.

This is probably the biggest driving force behind XCloud. If Microsoft can make games run across networks with minimal problems, business applications are going to do just fine. XCloud works great for me, allowing me to stream with few problems. That's less true for others in my family, so clearly this isn't something which can roll out to everyone, everywhere, all at once. I think it would be great to be able to spin up additional vCPU cores, or grow drive space or system RAM, as needed per process, so that I'm not wasting cycles or underutilizing my hardware. It seems like this would become possible with this sort of platform.

[–] chinpokomon 1 points 1 year ago* (last edited 1 year ago) (1 children)

For a business, I see this as a strong benefit of this design. The work done for a company is the property of that company under most hiring contracts, so the work done on a remote system can be tightly controlled. At the same time, it would allow someone to use their own thin client to do both professional and personal work and keep things isolated. For someone doing freelance work, it makes sharing a natural extension of that process, and access can be granted or revoked as it relates to contracts. That seems like an advantage to corporate IT departments.

As for individuals, I don't see how this takes away ownership. Regulations will be updated to allow users to request their data in compliance with GDPR requests, so nothing would become completely locked up. Should that ever be challenged, I don't think any jurisdiction would say that Microsoft owns the data. What a user will be able to do with the bits they receive is a different question.
