ChatGPT Continues To Generate Both Positive And Negative Publicity

Posted on Tuesday, May 2, 2023 by Chris Hayner

Featured in this episode of Chaos Lever

A couple ChatGPT thingies came through the news this week.

First, in a narrow test, ChatGPT appears to have given better answers than human providers to people asking medical questions through a hospital help chat. This is good - and it's a great example of how and where to use this technology. It's narrowly focused and trained (relying ONLY on vetted answers to previous medical questions), and crucially it doesn't get tired, so it won't get snippy. Which is good!

Next, Europe appears to be adding legislation that would force AI companies to publish "a sufficiently detailed summary" of their sources. This is bad - ChatGPT is trained on billions upon billions of inputs, and we only kinda understand how it comes to conclusions. There's a real question as to whether AI can even be properly programmed to give sources, which is bad.

Finally, ChatGPT now lets users opt out of having their inputs used for further training. It's in there now, just a simple little slider button, and allegedly (remember, it's all closed source, so we'll just have to trust them - but, allegedly) none of your inputs go anywhere once your question gets answered - which is good!