According to the analytics firm’s report, worldwide desktop and mobile web traffic to ChatGPT dropped 9.7% from May to June, and 10.3% in the US alone. Users are also spending less time on the site overall: time spent by visitors on chat.openai.com was down 8.5%, according to the report.

The decline, according to David F. Carr, senior insights manager at Similarweb, indicates that interest in ChatGPT is waning and that the novelty of AI chat has worn off. “Chatbots will have to prove their worth, rather than taking it for granted, from here on out,” Carr wrote in the report.

Personally, I’ve noticed a sharp decline in my usage. What felt like a massive shift in technology a few months ago now feels like mostly a novelty. For my work, there just isn’t much ChatGPT can help me with that I can’t do better myself and with less frustration. I can’t trust it for factual information or research. The written material it generates is so generic and formal, and so missing the nuances I need, that I either end up rewriting it or spend more time instructing ChatGPT on the changes I need than it would have taken me to just write it myself in the first place. It’s not great at questions involving logic or any kind of grey area. It’s sometimes useful for brainstorming, but that’s about it. ChatGPT has just naturally fallen out of my workflow. That’s my experience, anyway.

  • Raymonf@lemmy.uhhoh.com · 4 points · 1 year ago

    I’ve been using it a lot less recently because GPT-4 has just been spitting out gibberish code. They really nerfed it.

  • PoorlyWrittenPapyrus@lemmy.world · 3 points · 1 year ago

They completely banned it at my job, and I’m willing to bet some other companies are banning it too.

    Especially frustrating because we work very closely with Microsoft and have a team specifically for helping our clients develop applications with AI.

    • poo@kbin.social · 3 points · 1 year ago

      I can see where they’re coming from in terms of security, but that sounds a bit harsh.

      At least where I work, we’re told basically “use it if you want, just be prepared for it to be wrong and double-check everything it tells you,” which sounds a little more reasonable IMO.

      • PoorlyWrittenPapyrus@lemmy.world · 1 point · 1 year ago

        Ironically, security isn’t a concern at all. With our relationship with Microsoft, we can use the Azure OpenAI API, which, despite my own strong personal distrust of Microsoft, still meets all of our privacy and security standards. We trust Microsoft as much as we do our internal teams.

        The concern is mostly legal/copyright related, as they’re worried any code or documents that come out of it could be considered copyrighted.

    • astanix@lemmy.world · 1 point · 1 year ago

      Not only is it banned at my work, they’ve banned anything even close to AI. Even DeepL is blocked, and I need to translate things for my job daily. Google and Bing aren’t blocked, so I could still use their AI if I wanted to, but that’s not even what I’m trying to do lol.

      DeepL is just the best at translating business-related language. Google does a decent job at it anyway…

  • Maple@lemmy.world · 2 points · 1 year ago

    there just isn’t much ChatGPT can help me with that I can’t do better myself and with less frustration. I can’t trust it for factual information or research

    This, mixed with the constant reprimanding and moral instruction, just makes it so frustrating. I’m not asking for no filter, but it’s got to be more lenient if they want it to be a good and useful tool. I am so over reading “It is important to remember…” because it misunderstood a prompt.

  • Zeth0s@reddthat.com · 2 points · 1 year ago

    I find that recently the effort needed to get the “right” answer out of GPT-4 is much greater than it used to be. That’s my impression. In the end, I find myself more often going back to Google, Stack Overflow, manuals, Medium…

    I believe they either distilled the model too much for performance, or the RLHF is really degrading the model’s capabilities.