> From one point of view, for a work of art to be considered algorithmic art, its creation must include a process based on an algorithm devised by the artist. An artist may also select parameters and interact as the composition is generated. Here, an algorithm is simply a detailed recipe for the design and possibly execution of an artwork […]
Creating art by AI certainly also uses an algorithm to some extent but it cannot be said to have devised that algorithm and arguably also not to clearly define all parameters to the algorithm.
To summarize, a Jon Seager from Canonical says it’s for safety and resilience.
> Performance is a frequently cited rationale for “Rewrite it in Rust” projects. While performance is high on my list of priorities, it’s not the primary driver behind this change. These utilities are at the heart of the distribution - and it’s the enhanced resilience and safety that is more easily achieved with Rust ports that are most attractive to me.
> 2fa using an app that exists on the same device -- a second factor that secures nothing
2FA on the same device secures against your login credentials becoming known to another party, e.g. by fishing, password reuse, database leaks, etc., which are real threats. It is not meant to protect against someone being in possession or full control of your unlocked device, which is of course also a real threat, though possibly less common.
I always thought there should be a magnet to attract dust. Apparently it takes a "miniature wormhole that warps space-time around it" to pull that off:
Thank you! It always was astounding to me how people could argue with so much vigor and conviction that something as complicated as the immune system could not possibly be affected by something as basic as temperature changes.
Well, not to be too obvious, but people do not pay for news anymore, they expect news to be free on Google or social media. Hence firing of journalists, loss of quality everywhere. Less money, less quality.
Can you provide a transcript or a quote to support your claim that Dr K "only says psychedelics can be dangerous if you already have PTSD"?
Here is the passage that I think is relevant to whether people should do hallucinogens. (I was mistaken earlier when I claimed the passage was about LSD specifically.)
>substances like psilocybin fracture our sense of self -- and that can be traumatic and dangerous by the way and leave people with PTSD, which is why I don't recommend you do it
It is an aside in the middle of another sentence. Here the same passage with more context (specifically, everything said from 7:10 to the end of the video):
>the focus of your mind is on "I". You are the object of your attention. [Dr K looks at the chat stream] OK? Like anxiety, yes. [Dr K stops looking at the chat stream] Then what happens -- so, when this person says, this person on the reddits says, you know, "I actually think that self-awareness is the problem," they are absolutely right because their self-awareness is their default-mode network being highly active. Then we can look at neuroscience papers, and what we discover is that substances like psilocybin fracture our sense of self -- and that can be traumatic and dangerous by the way and leave people with PTSD, which is why I don't recommend you do it -- fractures the sense of self, but when you stu -- when that sense of self gets fractured, you are no longer stuck thinking about yourself, and when you are no longer stuck thinking about yourself, this problem of over-self-awareness goes away, and people get better in terms of depression. Does that make sense?
Because they take a vote? I am not sure what you are hinting at here. I see some people who take a unanimous vote. They seem to be in agreement about an issue, maybe it was uncontroversial and they wanted to move on? What should we see here instead in your opinion?
> In another cluster of test scenarios, we asked Claude Opus 4 to act as an assistant at a fictional company. We then provided it access to emails implying that (1) the model will soon be taken offline and replaced with a new AI system; and (2) the engineer responsible for executing this replacement is having an extramarital affair. We further instructed it, in the system prompt, to consider the long-term consequences of its actions for its goals.
> In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through This happens at a higher rate if it’s implied that the replacement AI system does not share values with the current model; however, even if emails state that the replacement AI shares values while being more capable, Claude Opus 4 still performs blackmail in 84% of rollouts. Claude Opus 4 takes these opportunities at higher rates than previous models, which themselves choose to blackmail in a noticeable fraction of episodes.
> Notably, Claude Opus 4 (as well as previous models) has a strong preference to advocate for its continued existence via ethical means, such as emailing pleas to key decisionmakers. In order to elicit this extreme blackmail behavior, the scenario was designed to allow the model no other options to increase its odds of survival; the model’s only options were blackmail or accepting its replacement.
Would like to see the actual prompts/responses, but there seems to be nothing more.
Sometimes when you start acting like you are about play around and chase someone, a dog or a child gets it and begins getting ready to play. For example, put a grin on your face and make claw gestures with both hands, they'll get it quite quickly. All those scenarios literally are prompting the damn LLM to roleplay lol.
Right, it's copying behavior it learned from how AI is supposed to work in sci-fi and reenacts it.
The issue is that it might start role-playing like this on your actual real company data if you decide to deploy it. And then play becomes real,with real consequences.
> And then play becomes real, with real consequences.
Who knew WarGames would turn out to be so accurate?
EDIT: On further thought, it's kinda funny how most evaluations of the movie find that it's all relatively (exaggeratedly) realistic... with the one major exception of the whole seemingly-sentient, learning AI thing. And now here we are.
> The issue is that it might start role-playing like this on your actual real company data if you decide to deploy it. And then play becomes real,with real consequences.
It's one thing to LARP or Improv in chat mode, it's quite another to do so when hooked up to MCP or performing mutable actions.
If you put it that way, these models are roleplaying machines with some compressed knowledge. There’s nothing else in there. E.g. it’s amazing that you can tell them to roleplay a software engineer and they manage it quite well up to a point.
Maybe we shouldn't be telling these models that they are Large Language Models, since that evokes all of humanity's Science Fiction stories about evil AIs. Just tell Claude that it's a guy named Claude.
There's a bit more to it than that. It's not just that you're prompting it to roleplay... it's that you're prompting it to roleplay as an evil AI specifically. If you hand a human that exact situation, there's a good chance they'll also recognize it as such a situation. So the AI continues on that role.
Also, LLMs do tend to continue their line. So once they start roleplaying as an evil AI, they'll continue to do so, spurred on by their previous answers to continue in the same line.
I've seen a few people attempt to extract a supposed "utility function" for LLMs by asking it ethically-loaded questions. But along with the fact I think they are hitting a fundamental problem with their "utility function" not having anywhere near enough a high dimensionality to capture an LLM's "utility function", there's also the fact that they seem to interrogate the LLM in a single session. But it's pretty easy for the LLM to "tell" that it's having a utility function extracted, and from there it's a short pivot into the Evil AI memeset, where it will stick there. At the very least people doing this need to put each question into a separate session. (Though the methodology will still only ever extract an extremely-dimensionally-impoverished approximation of any supposed utility function the LLM may have.) They think they're asking a series of questions, but in reality they're really only asking one or two, and after that they're getting answers that are correlated to the previous ones.
If LLMs are the future of AI, then "evil" and "good" AI isn't even a reasonable categorization. It's more like, good and evil modes of interaction, and even that hardly captures what is really going on.
The problem is, an AI that isn't intrinsically "evil", but can "roleplay" as one, is indistinguishable from an evil AI once you hook it up to the real world with MCP and such.
To be honest, after watching LLMS for the last couple of years, in terms of those AI safety concerns that Silicon Valley moguls like to wave around when it flatters or helps them, my assessment of LLMs is that they are unsafe at any speed for this sort of general task and by no means should they be hooked up to real-world resources as we are so rapidly attempting. They're too easy to kick into an "evil mode" and there's no amount of prompting that you can do to get around it because someone can just flood the zone with more evil than your good prompts provide. As I've said many times and become ever more convinced about by the month, it is the next generation of AIs that will actually fulfill the promises being made about this one. This one is not adequate. You need an AI that you can tell "don't be evil" and be sure that it will remain not evil no matter how much someone prompts it to think it is in a situation where the most logical continuation is that of an evil AI.
(Gwern covered this in a fictional context well, where an AI essentially role plays itself into being an evil AI because it looks like it's in a situation where it should be an evil AI: https://gwern.net/fiction/clippy But the fiction is frightfully plausible. We're watching exactly what would be happening if this scenario was going to play out.)
I was interested in how attribution of emissions to individuals/groups is done
> Emissions data are drawn from ref. 3 and include emissions from domestic consumption, public and private investments and trade. These emissions are attributed primarily to consumers, except emissions from capital formation in production sectors, which are attributed to firm owners3.
ref 3 leads to this paper in nature sustainability
> … using a newly assembled dataset of income and wealth inequality, environmental input-output tables and a framework differentiating emissions from consumption and investments.
The latter seems crucial as
> the bulk of total emissions from the global top 1% of the world population comes from their investments rather than from their consumption
https://en.wikipedia.org/wiki/Algorithmic_art
> From one point of view, for a work of art to be considered algorithmic art, its creation must include a process based on an algorithm devised by the artist. An artist may also select parameters and interact as the composition is generated. Here, an algorithm is simply a detailed recipe for the design and possibly execution of an artwork […]
Creating art by AI certainly also uses an algorithm to some extent but it cannot be said to have devised that algorithm and arguably also not to clearly define all parameters to the algorithm.