There might be a parallel between simplification and entropy.
cf. Entropy Generation and Human Aging:
The Constructal Design Principle, proposed by Bejan et al., shows how optimal geometric forms for fluid and heat flow dissipate energy at a rate that scales with the ¾ power of their size, and predicts why naturally occurring structures (tree branches, river deltas, vascularized tissue, lightning) recur: they follow an entropy-minimization principle.
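Not Bejan's derivation, just a toy illustration of what a ¾-power law means in practice: if dissipation scales as size^(3/4), the exponent can be read off as the slope of a log-log plot. The sizes and units here are invented.

```python
import math

# Hypothetical structure sizes spanning several orders of magnitude.
sizes = [10 ** k for k in range(1, 7)]

# Assumed 3/4-power dissipation law (arbitrary units).
dissipation = [s ** 0.75 for s in sizes]

# On log-log axes a power law is a straight line; its slope is the exponent.
log_x = [math.log(s) for s in sizes]
log_y = [math.log(d) for d in dissipation]
slope = (log_y[-1] - log_y[0]) / (log_x[-1] - log_x[0])
print(round(slope, 2))  # → 0.75
```

This is also how the exponent is estimated empirically: fit a line to log(dissipation) against log(size) and read the slope.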
Batato et al. proposed that entropy generation in the human body can be divided into three stages: during growth (childhood), the rate of entropy generation decreases; in a healthy adult, it approaches zero (questionable, since at steady state dS/dt can be zero but the entropy production σ cannot); and during old age, until death, the rate of entropy generation remains positive.
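The three stages can be sketched as a piecewise function; the shapes, breakpoints, and constants below are invented for illustration, not taken from Batato et al. — the only constraints kept are the ones stated above (decreasing during growth, a small positive plateau in adulthood, positive in old age).

```python
def entropy_generation_rate(age_years: float) -> float:
    """Hypothetical sigma(t) in arbitrary units (all numbers invented)."""
    if age_years < 20:
        # Growth: the rate of entropy generation decreases.
        return 1.0 - 0.045 * age_years
    elif age_years < 60:
        # Healthy adulthood: near-zero plateau, but strictly positive.
        return 0.1
    else:
        # Old age, until death: the rate stays positive.
        return 0.1 + 0.002 * (age_years - 60)

for age in (5, 30, 80):
    print(age, round(entropy_generation_rate(age), 3))
```

Note that the plateau is kept above zero on purpose: a living steady state can hold dS/dt at zero only by exporting the entropy it continuously produces.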
If we watched a sped-up film of a form developing, we would easily discern the latent form within it (entelechy, really) pressing outward into actualization; we would sense it within, and then we would see the inner pressure finally unfold and die away, leaving the completed form without internal energy. Then we would watch decay and disorder begin. (Think of a rose bud developing.) An internal force, a plan, unfolds energetically, then reaches equilibrium and stasis; then the force dwindles, becomes feeble, and the completed form is forever at the mercy of the external forces which the entelechy formerly pressed outward against so effectively. One form of energy (inner growth) has waned, and forces moving toward disorder now prevail.
Young organisms often display the negative entropy of learning, structuring new knowledge into indexes and fractals. (Note that neural networks are indexes.)
Organisms approaching “senile death”, on the other hand, have lost the capacity to form new indexes and fractals, either through a deficiency in neurogenesis, or because they have not learned how to resolve inner conflicts, so wars break out between new and old structures.
Consequently, there are two kinds of simplification: one in which information is indexed, matching the added entropy of new information with the negative entropy of structure, and another in which information is discarded to fit the existing structures (cf. confirmation bias).
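The contrast can be made concrete with an invented toy example: "indexing" keeps every observation but adds structure on top, while "reduction" throws away whatever fails to match the structure already in place. The observations and the prefix test below are hypothetical.

```python
from collections import defaultdict

observations = ["red apple", "green apple", "red car", "blue sky"]

def simplify_by_indexing(items):
    """Add structure (an index keyed by the first word) without losing information."""
    index = defaultdict(list)
    for item in items:
        key, _, rest = item.partition(" ")
        index[key].append(rest)
    return dict(index)

def simplify_by_reduction(items, expected_prefix="red"):
    """Keep only items matching the existing structure (cf. confirmation bias)."""
    return [item for item in items if item.startswith(expected_prefix)]

# All four observations survive indexing, reorganized under three keys.
print(simplify_by_indexing(observations))

# Reduction silently drops everything that does not confirm the prefix.
print(simplify_by_reduction(observations))  # → ['red apple', 'red car']
```

In both cases the description gets shorter; the difference is whether the entropy of the new information is absorbed by structure or simply deleted.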