text comprehension involves a double movement between local, micro-units of meaning (I have already defined what these are in source code) and the broader, theoretical macrostructure of the text.
the macrostructure depends on signaling elements (headlines, subtitles, initial sentences)
A discourse is coherent only if its sentences (tokens) and propositions (functions) are connected, and if those propositions are organized globally at the macrostructure level.
In particular, they postulate that some propositions are principally important (i.e. they are recalled throughout text comprehension in order to understand what is going on), and that this connection happens through coreference: expressions that refer to the same thing keep propositions tied to one another.
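a toy sketch of that coreference idea, in the spirit of the paper's predicate-argument notation: propositions are tuples, and two propositions count as connected when they share an argument (referent). the function names and the example propositions are my own inventions, not the paper's.

```python
# toy model: a proposition is (predicate, *arguments);
# coherence here = each proposition shares a referent with an earlier one.

def share_argument(p, q):
    """two propositions are connected if any argument (referent) overlaps."""
    _, *args_p = p
    _, *args_q = q
    return bool(set(args_p) & set(args_q))

def is_coherent(propositions):
    """locally coherent: every proposition after the first links back
    to at least one earlier proposition via a shared argument."""
    for i, p in enumerate(propositions[1:], start=1):
        if not any(share_argument(p, q) for q in propositions[:i]):
            return False
    return True

text_base = [
    ("turned-off", "MARY", "ALARM"),  # Mary turned off the alarm
    ("got-up", "MARY"),               # she got up (MARY recurs)
    ("rang", "PHONE"),                # the phone rang: no shared referent
]

print(is_coherent(text_base[:2]))  # True: MARY ties the two together
print(is_coherent(text_base))      # False: PHONE links to nothing earlier
```

this is of course only the local half of the story; the model's macrostructure operates on top of chains like these.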
Inference is the interplay between the implicit and the explicit text (“An actual discourse, therefore, normally expresses what may be called an implicit text base. An explicit text base, then, is a theoretical construct featuring also those propositions necessary to establish formal coherence.”)
the schematic structures of discourse (e.g. story, argument, interview) have equivalents in programming, to some extent (functional paradigm, MVC, object-oriented, etc.). However, they can also represent idiosyncratic personal processing goals (reader response? it’s the filter that decides which propositions are relevant to the general model)
they then specify text, proposition, and concept as distinct elements at play in text comprehension. what are the equivalent elements in programming?
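one purely speculative sketch of a mapping (my own guess, not the paper's): text as surface form, proposition as predicate plus arguments, concept as the referent behind a name. all identifiers below are toy choices.

```python
# two different surface texts (forms) expressing the same underlying content
text_a = "total = price + tax"
text_b = "tax plus price gives total"

# both reduce to one proposition: a predicate with ordered arguments
proposition = ("SUM", "PRICE", "TAX", "TOTAL")

# concepts are the referents the arguments point to, independent of wording
concepts = set(proposition[1:])
print(concepts)  # {'PRICE', 'TAX', 'TOTAL'}
```

the point of the toy: the two texts differ, the proposition is one, and the concepts survive any rewording.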
the rest of the piece is a very detailed description of how that model actually works (somewhat tedious)