Friday Squid Blogging: Strawberry Squid in the Galápagos
Scientists have found Strawberry Squid, “whose mismatched eyes help them simultaneously search for prey above and below them,” among the coral reefs in the Galápagos Islands.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Read my blog posting guidelines here.
vas pup • December 1, 2023 5:56 PM
https://www.technologyreview.com/2023/10/26/1082398/exclusive-ilya-sutskever-openais-chief-scientist-on-his-hopes-and-fears-for-the-future-of-ai/
“While others wrestle with the idea of machines that can match human smarts, Sutskever is preparing for machines that can outmatch us. He calls this artificial superintelligence: ‘They’ll see things more deeply. They’ll see things we don’t see.’”
!!!!!!!!Together with Jan Leike, a fellow scientist at OpenAI, he has set up a team that will focus on what they call superalignment. Alignment is jargon that means making AI models do what you want and nothing more. Superalignment is OpenAI’s term for alignment applied to superintelligence.
=> The goal is to come up with a set of fail-safe procedures for building and controlling this future technology. OpenAI says it will allocate a fifth of its vast computing resources to the problem and solve it in four years.
“Existing alignment methods won’t work for models smarter than humans because they fundamentally assume that humans can reliably evaluate what AI systems are doing,” says Leike. “As AI systems become more capable, they will take on harder tasks.” And that—the idea goes—will make it harder for humans to assess them. “In forming the superalignment team with Ilya, we’ve set out to solve these future alignment challenges,” he says.
“It’s super important to not only focus on the potential opportunities of large language models, but also the risks and downsides,” says Jeff Dean, Google’s chief scientist.
For Sutskever, superalignment is the inevitable next step. “It’s an unsolved problem,” he says. It’s also a problem that he thinks not enough core machine-learning researchers, like himself, are working on. “I’m doing it for my own self-interest,” he says. “It’s obviously important that any superintelligence anyone builds does not go rogue. Obviously.”
=> He has an exemplar in mind for the safeguards he wants to design: a machine that looks upon people the way parents look on their children. “In my opinion, this is the gold standard,” he says. “It is a generally true statement that people really care about children.” (Does he have children? “No, but I want to,” he says.)*
“One possibility—something that may be crazy by today’s standards but will not be so crazy by future standards—is that many people will choose to become part AI.” Sutskever is saying this could be how humans try to keep up. “At first, only the most daring, adventurous people will try to do it. Maybe others will follow. Or not.”
*My nickel: it would be better if the machine looked upon people the way people look upon their pets. There are too many cases of physical, mental, and sexual abuse of children, and even murder, by their parents. That will be possible as soon as people give the machine unconditional love and respect. Just my opinion.