LLMs are, of course, limited to generating text that plausibly follows from the prompt. That is to say, the output closely resembles a proper report or analysis, but it may include fake references and other "hallucinations" that make the resemblance more convincing.
The key is that they run on massive networks of linked artificial neurons (ANs), each with multiple inputs from other ANs and multiple outputs to other ANs. Training the network involves assigning weights (strengths) to the links between ANs, doing a run, evaluating the result, and adjusting the weights.
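A minimal sketch of that run-evaluate-adjust loop, using a tiny two-layer network on a toy task (learning XOR) rather than anything LLM-scale; the layer sizes, learning rate, and variable names here are illustrative assumptions, not taken from any real system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR inputs and targets (illustrative only).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights ("strengths") of the links between layers of artificial neurons.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Do a run (forward pass through the network).
    h = sigmoid(X @ W1)      # hidden-layer activations
    out = sigmoid(h @ W2)    # network output

    # Evaluate the result (mean squared error against the targets).
    loss = np.mean((out - y) ** 2)

    # Adjust the weights (propagate the error back through the links).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X)
    W1 -= lr * X.T @ d_h / len(X)

print("final loss:", loss)
print("predictions:", out.round(2).ravel())
```

Real LLMs differ enormously in scale and architecture, but the cycle is the same: run the network, measure how wrong the output is, and nudge the link weights to make it less wrong next time.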
This is roughly how the human brain works, and it was the inspiration for the breakthrough of large artificial neural networks (large ANNs). Connections between neurons (synapses) are adjusted organically through learning from experience (and from other sources, such as teachers and books).
So when a progressive and a MAGA supporter look at the same current events and arrive at different conclusions, neither can truly explain how they got there. They can rationalize the conclusion by picking and choosing certain aspects. Deliberate, rational direction of thinking can exert a great deal of influence on the process, but much of it is shaped by emotional experience.
The research field is called "explainable AI", but how a model reaches a given output is basically unknowable at present, which is why it is an important research topic. The thing is, human thinking is also in large part unexplainable. We know the general mechanism but can't detail how a particular conclusion was arrived at.
Just as we treat humans as skin-encapsulated egos, we are likely going to have to treat much of AI as a "black box".