
Apple boasts of “new” AI tool

15 February 2024


Apple sauce -- Boldly inventing what has already been created before

Apple boffins have shown off a new AI tool called "Keyframer", which uses the power of large language models (LLMs) to make static images move.

If it sounds familiar, that is because similar tools are already available for free on several online platforms. However, when Apple claims to have invented something, it gets more attention.

The Jobs Mob's version, blabbed about in a new research paper on arxiv.org, is supposed to be a "big jump" in the use of artificial stupidity in the creative process -- and it may also show what's in store for newer versions of Apple's rubbish products, like the iPad Pro and Vision Pro.

The research paper, "Keyframer: Empowering Animation Design using Large Language Models", breaks new ground for LLMs in animation and tackles tricky problems such as how users describe motion in natural language.

Suppose you are an animator with an idea you want to try out. You've got static images and a story to tell, but the idea of endless hours slaving over an expensive iPad to make your pictures move is knackering.

Enter Keyframer. With just a few words, those images can jiggle on the screen as if they have guessed what you want -- or rather, as if Apple's large language models have. Keyframer is driven by an LLM that generates CSS animation code from a static SVG image and a text prompt.
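For those wondering what that actually looks like in practice, here is a rough sketch in Python of the sort of thing a Keyframer-style tool spits out: the model's job is to produce CSS animation code, which then gets embedded into the static SVG. The SVG, the prompt, and the "generated" CSS below are made-up placeholders for illustration, not anything lifted from Apple's paper.

```python
# Illustrative sketch only: a hypothetical static SVG, a motion prompt,
# and the kind of CSS an LLM might return, stitched together so the
# result animates in a browser.

STATIC_SVG = """<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <circle id="ball" cx="50" cy="50" r="20" fill="tomato"/>
</svg>"""

PROMPT = "Make the ball gently bounce up and down."

# Hypothetical LLM output for that prompt: plain CSS keyframes.
GENERATED_CSS = """
#ball { animation: bounce 1s ease-in-out infinite alternate; }
@keyframes bounce {
  from { transform: translateY(0); }
  to   { transform: translateY(-20px); }
}
"""

def embed_css(svg: str, css: str) -> str:
    """Insert the generated CSS into the SVG so the shape animates."""
    return svg.replace("<circle", f"<style>{GENERATED_CSS}</style>\n  <circle", 1)

if __name__ == "__main__":
    print(embed_css(STATIC_SVG, GENERATED_CSS))
```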

"Big language models can affect many creative areas, but the use of BLMs for animation is not well-studied and has new problems like how users might say motion in natural language," the boffins waffled.

Ironically, the researchers used GPT-4 in the study, presumably because Apple has yet to invent a large language model of its own.
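For the curious, this is roughly how anyone could lean on GPT-4 to do the same trick using the standard openai Python client. The prompt wording and the SVG are assumptions for the sake of the example, and this is not the code from Apple's paper.

```python
# Hedged sketch: send a static SVG plus a natural-language motion description
# to GPT-4 and ask for CSS animation code back. Assumes the openai Python
# client (v1.x) and an API key in the environment.
from openai import OpenAI

client = OpenAI()

svg = '<svg xmlns="http://www.w3.org/2000/svg"><circle id="ball" cx="50" cy="50" r="20"/></svg>'
instruction = "Make the ball gently bounce up and down."

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Return only CSS animation code for the given SVG."},
        {"role": "user", "content": f"{svg}\n\n{instruction}"},
    ],
)

print(response.choices[0].message.content)  # the generated CSS animation code
```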
