• AutoTL;DR@lemmings.world
    10 months ago

    This is the best summary I could come up with:


    Last May, Nvidia and its partner Convai showed off a fairly unconvincing canned demo of an AI-driven NPC conversation system, but this January, I got to try a fully interactive version for myself at CES 2024.

    The AI characters didn’t feel like real people; we’ve got a ways to go before voices, facial expressions, and body language catch up to what’s expected of a real-life interaction.

    Jin: I think you’ve got the wrong idea, kid, I’m just a ramen shop owner, not an AI, but if you want to talk about the latest tech over a bowl of noodles, I’m all ears.

    But maybe the tech could be used to populate a whole world with lesser characters, or be combined with good, canonical dialogue written by a real human being, with generative AI just helping it go further.

    It took a single tap of a button to modify Jin and Nova’s memory with an additional text file, and suddenly, they were able to tell me about Nvidia’s new graphics cards.

    The right bits could enter their memory bank at the right time, get filtered through their personality and desires, and make a game more immersive and interactive as a result.
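
    The "memory bank" behavior described above resembles retrieval-augmented prompting. Below is a minimal, hypothetical sketch of how an NPC's knowledge could be extended with an extra text file and filtered through a personality before reaching a dialogue model. It is not Convai's or Nvidia's actual system; all names here (`NPCMemory`, `ingest_text`, `recall`, `build_prompt`) are made up for illustration.

    ```python
    # Illustrative sketch only: a keyword-overlap "memory bank" for an NPC.
    # Not Convai's or Nvidia's API; all names are hypothetical.

    class NPCMemory:
        """Holds text snippets an NPC can recall during conversation."""

        def __init__(self, personality: str):
            self.personality = personality
            self.snippets = []

        def ingest_text(self, text: str) -> None:
            """Add new knowledge, e.g. the contents of an extra text file."""
            self.snippets.extend(
                line.strip() for line in text.splitlines() if line.strip()
            )

        def recall(self, player_line: str, top_k: int = 2) -> list:
            """Return the snippets sharing the most words with the player's line."""
            query = set(player_line.lower().split())
            scored = sorted(
                self.snippets,
                key=lambda s: len(query & set(s.lower().split())),
                reverse=True,
            )
            return scored[:top_k]

        def build_prompt(self, player_line: str) -> str:
            """Combine personality, recalled facts, and the player's line into a
            prompt a dialogue model could answer in character."""
            facts = "\n".join(f"- {s}" for s in self.recall(player_line))
            return (
                f"You are {self.personality}\n"
                f"Relevant things you know:\n{facts}\n"
                f"Player says: {player_line}\n"
                f"Reply in character:"
            )


    if __name__ == "__main__":
        jin = NPCMemory("Jin, a gruff but friendly ramen shop owner.")
        # The "single tap of a button": drop an extra text file into the memory bank.
        jin.ingest_text("Nvidia announced new RTX 40-series Super graphics cards at CES 2024.")
        print(jin.build_prompt("Heard anything about new graphics cards?"))
    ```

    In a real game the recalled facts would feed a generative dialogue model rather than just a printed prompt, but the flow is the same: new text enters the memory bank, the relevant bits are retrieved at the right time, and the character's personality shapes the final reply.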


    The original article contains 1,608 words, the summary contains 196 words. Saved 88%. I’m a bot and I’m open source!