Yeah exactly.
The tedious stuff no one wants to do. But that's not really impressive 'intelligence'.
ChatGPT was getting even the most basic questions wrong. "Who won the gold in the 200m freestyle at the 1996 Atlanta Olympics" style questions. It searched the internet and found some answers, but it could not 'reason' about which sources to trust and which not to.
To be fair, this doesn't actually separate it from a significant proportion of actual real-deal humans using the Internet.
At base this is Searle's Chinese room problem on steroids. For those not familiar, you have a guy in a room with an enormous book full of instructions. His job is to receive slips of paper through a slot with mysterious symbols on them, find the matching symbol in the book of instructions, and then write a different symbol derived from those instructions on another piece of paper, which he returns through the slot. For him it's just arbitrary symbol transformation.
Thing is, though, when you write a sentence of English on a piece of paper and put it through the slot, what you get back is the same sentence rendered perfectly in Mandarin. The guy in the room, to be clear, does not speak Chinese. Hell, let's say he doesn't even speak English. So he clearly doesn't understand how to translate English to Chinese.
But does the system of dude + book + slot know Chinese?
Searle thought "no", but it turns out to be pretty hard to justify an answer that does not involve acknowledging that, in some sense, the entire system understands Chinese. Objections often founder on the difficulty of specifying what, precisely, is lacking in terms of understanding that is not instantiated in the system's ability to turn appropriate, well-formed, germane inputs in one language into appropriate, well-formed, germane outputs in another. To strengthen the case, say the system can also take sentences in Chinese as input and produce appropriate responses in Chinese.
Often people come up with objections that involve some reference to consciousness, but this is a shaky branch to put weight on. The philosopher Georges Rey was fond of saying that he had no conscious experiences; he used to, but he was in a bike accident when he was 10 and hadn't had one since. His point was that there is literally no way anyone can show him this is incorrect, since, as David Lewis put it, "an incredulous stare is not a strong counter-argument."
AIs like this are increasingly going to turn this problem up to 11.