They already had preset email replies, if I recall correctly; this seems like a natural extension of that. It sounds like they're just sending special reply emails, which are hopefully easy for other email clients to parse.
Firefox has a kiosk mode enabled by a CLI flag (--kiosk), which if I recall correctly also prevents exiting fullscreen (though the user can still close Firefox, or follow links off-site).
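For reference, something like this should launch it (a minimal sketch using Python's subprocess; it assumes a `firefox` binary on PATH, and the URL is just a placeholder):

```python
import subprocess

# Start Firefox in kiosk mode on a placeholder URL.
# --kiosk forces fullscreen and hides the browser chrome.
subprocess.run(["firefox", "--kiosk", "https://example.org"])
```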
An example of a compression algorithm that does support tuning parameters beforehand is zstd.
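As a rough sketch of what I mean (using the python-zstandard bindings; the sample records here are made up), zstd lets you pick a compression level and train a dictionary on a pre-shared sample set before any real data is compressed:

```python
import zstandard as zstd

# Hypothetical pre-shared samples to train the dictionary on.
samples = [
    f'{{"id": {i}, "name": "user{i}", "active": true, "score": {i % 100}}}'.encode()
    for i in range(2000)
]

# Train a small shared dictionary ahead of time (its size is one tuning knob).
dict_data = zstd.train_dictionary(1024, samples)

# The compression level is another parameter chosen up front.
cctx = zstd.ZstdCompressor(level=19, dict_data=dict_data)
record = b'{"id": 2000, "name": "user2000", "active": true, "score": 0}'
compressed = cctx.compress(record)

# Decompression needs the same pre-shared dictionary.
dctx = zstd.ZstdDecompressor(dict_data=dict_data)
assert dctx.decompress(compressed) == record
```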
Even if something isn't in a pre-shared dataset, I wonder if a sufficiently advanced LLM might be able to do well at compressing predictable but non-repeating data, such as "abc, bcd, cde, [...]".
Ah, my bad. Bypassing such integrity checks should still be doable, either by reverse engineering and spoofing the communications between the browser and Google, or by modifying a "trusted browser" in a way that keeps it from detecting such alterations. It might not be very reliable though, as the internals could be changed arbitrarily with each update, and old versions blocked in the name of security.
Building adblock into the browser could enable better countermeasures against adblock detection, but uBlock Origin's filters usually work fine in my experience. Hiding that adblock is being used is essentially just an arms race between adblock detectors and ad blockers.
I can understand why installing the wrong part should give a warning, but the IDs are unique to the part, not the model of part, so even identical parts are not interchangeable.
The paper for the trolley problem has some interesting details. In experiment 2, those who considered themselves worse at the foreign language were more likely to make the utilitarian choice, which indicates that it might just be a matter of proficiency in the language, rather than whether or not it is the participant's primary language.
Good point. It just seems odd that the Columbia article calls them "current language models," whereas the coauthor of the paper is quoted as only calling them "the best models [the authors of the paper] have studied."
BERT and GPT-2 are fairly old models...
What about tuning, to align with "finetuning"?
Bobby "Alt-d g g Z Z" Tables