This is totally wrong. So badly wrong I have to point it out. We are observing not 2 or 3 or 5 or 10, but hundreds of millions of discretely different recognitions per second, any one of which could have a major impact in the next second on what we perceive or an action we take, through exactly the same mechanisms. ChatGPT copies what our brains do, not vice versa. And that was intentional, although highly constrained by the hardware we have for implementation in electronics. Here is a simulation of such advanced perceptron parallelism I did, based on human eye-movement and memory studies, back in 1980! Your numbers are way off. We abandoned the 'like a computer' model years before that. (Not thirty years ago; fifty. I was there.)
https://link.springer.com/article/10.3758/BF03203564
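To make the parallelism point concrete, here is a minimal toy sketch of the idea (my framing for this comment, not the 1980 simulation itself, and all sizes and weights are made-up stand-ins): many independent perceptron-like units all evaluate the same input at once, and each unit's firing is one discrete "recognition" that could steer the next perception or action.

```python
import random

random.seed(0)

# Toy sketch: a population of independent perceptron-like units.
# N_UNITS is a tiny stand-in for the brain's hundreds of millions.
N_UNITS = 1000
DIM = 16  # toy input dimensionality

# Hypothetical random weights and firing thresholds, one pair per unit.
units = [([random.uniform(-1, 1) for _ in range(DIM)], random.uniform(0, 2))
         for _ in range(N_UNITS)]

def recognitions(x):
    """Evaluate every unit on the same input; return indices that fire."""
    fired = []
    for i, (w, theta) in enumerate(units):
        # A unit "recognizes" its pattern when the weighted sum
        # crosses its threshold.
        if sum(wi * xi for wi, xi in zip(w, x)) > theta:
            fired.append(i)
    return fired

x = [random.uniform(0, 1) for _ in range(DIM)]
fired = recognitions(x)
print(f"{len(fired)} of {N_UNITS} units fired on this input")
```

In real neural tissue (and on parallel hardware) those evaluations happen simultaneously; the serial loop here is only a limitation of the sketch.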
The evidence never supported it. People who pretended to know would nonetheless make such uninformed guesses, just as you have done at least 20 times in this article. Sorry, but if I tried to rewrite physics you might see a few major problems that I don't see. As with many things in computational physics, the limit isn't what we know how to do (compute); it is the hardware available to run the simulations.
There is pretty much nothing now observed in brain information processing that we do not know how to compute with advanced/extended perceptrons in large enough numbers, arranged appropriately.
Just because Jeff Hawkins doesn't see it doesn't mean others don't. I know him, and I've written a review of his book which points out that he believes language is just a Skinnerian training thing. It isn't. If he had spent a few dozen years studying human language as a window on brain computation, his concepts would be more attuned. The same goes for many neuroscientists and other part-time amateur computational cognitive neuroscientists. His "thousand brains" metaphor is good, just an order of magnitude too small from what I see. Our brains, after all, evolved not over 100,000 years, as people like to think, but over a billion years. What happened in the last 200,000 changed not how we think but how we communicate, through the evolution of human natural spoken language. You can observe it directly if you study it with enough knowledge of how to observe it. You can see in OpenAI's version of ChatGPT how other people do know how to observe it and implement it. Maybe not the loudest people.
This stuff is hard science, meaning hard to do. Lightweight 'experts' should be more attentive to the literature. I suggest you should have mentioned Terry Sejnowski's "The Deep Learning Revolution." Terry has been at full throttle on the larger problem of brain computation his whole life. As have some others. Not necessarily the loudest people.
That said, there are many poor books. Hawkins's first book was pretty good; his second, not so good. Both, though, are better than 90% of the tripe out there on ill-informed and ill-constructed topics like consciousness, or what differentiates humans from animals in intelligence (not much overall, but a lot in specific ways and places).
ChatGPT does do an awful lot of human reasoning. It is not like it; it is it. Again, the fundamental computations can account for it all, but nobody has the hardware to do the kinds of simulations that people who just want a Turing test would find convincing. Just think a bit about the fact that ChatGPT uses the same computations to both perceive language and produce it. If you see that, then you have a better understanding of what we already know. Here is another of my attempts to explain this, but as a fundamental organizing principle among those hundreds of millions of 'advanced perceptrons':
https://medium.com/liecatcher/100-billion-sources-of-hate-between-your-ears-d34baf503c98
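A toy illustration of the perceive/produce symmetry (my sketch, with a made-up four-word vocabulary and random weights; it is not OpenAI's implementation, though tied input/output embeddings in language models are a real instance of the idea): the same parameter table scores how expected an observed word is (perception) and picks the next word to emit (production).

```python
import math
import random

random.seed(1)

VOCAB = ["the", "brain", "computes", "language"]
DIM = 8

# One shared embedding table (hypothetical random values). These SAME
# weights are used below for both perceiving and producing.
E = {w: [random.uniform(-1, 1) for _ in range(DIM)] for w in VOCAB}

def logits(context_vec):
    """Score every vocabulary word against the context vector."""
    return {w: sum(c * e for c, e in zip(context_vec, E[w])) for w in VOCAB}

def softmax(scores):
    m = max(scores.values())
    exps = {w: math.exp(s - m) for w, s in scores.items()}
    z = sum(exps.values())
    return {w: v / z for w, v in exps.items()}

context = E["brain"]  # pretend hidden state after reading "brain"
probs = softmax(logits(context))

# Perception: surprisal of an observed word under the model.
surprise = -math.log(probs["computes"])

# Production: choose the next word from the SAME distribution,
# computed with the SAME weights.
next_word = max(probs, key=probs.get)

print(f"surprisal of 'computes' = {surprise:.2f}; produce -> {next_word!r}")
```

The point of the sketch is only that nothing new is added to go from recognizing language to generating it; one set of computations does both.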