"Why are parameterizations important in numerical models?"
Models are just that: they're models. We cannot currently hope to explicitly and accurately represent all of the spatial and temporal scales of a problem. Instead, we focus on the spatial and temporal scales that matter and make an informed decision about what to do at the small scales.

In the prototypical problem of shock formation (à la Burgers' equation), small scales are forced into existence by the large scales. This transfer of energy to small scales, called the "forward cascade", poses a challenge for numerical models, which can become unstable when too much energy piles up at the smallest resolvable scale. The figure below shows what happens when you don't remove that energy from the grid scale (the red line). The blue line shows the same shock formation, except that the numerical algorithm now includes a "filter" that turns on when too much energy appears at the grid scale (it is called "adaptive" because of this conditional action). The filter removes the oscillations, the so-called "grid-scale noise", draining enough energy to maintain a stable numerical integration. The filter used here is a "roll-off" filter, meaning that it damps marginally resolved modes much more strongly than well-resolved modes.

This Adaptive Roll-off Filter (ARF) is a parameterization for shock formation. The idea of parameterizing the interactions between resolved and unresolved flows is applied in just about every fluid dynamics model out there, including those used in climate prediction studies.
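To make the idea concrete, here is a minimal sketch of an adaptive roll-off filter in NumPy, not the actual SELF implementation. The exponential filter shape, the cutoff fraction, the filter order, and the tail-energy threshold are all illustrative choices; the filter acts in Fourier space on a periodic 1-D field.

```python
import numpy as np

def rolloff_weights(nmodes, cutoff_frac=0.65, order=8):
    """Roll-off weights: ~1 for well-resolved modes, decaying sharply
    toward the grid scale. Parameter values here are illustrative."""
    k = np.arange(nmodes, dtype=float)
    kc = cutoff_frac * (nmodes - 1)           # cutoff wavenumber
    w = np.ones(nmodes)
    hi = k > kc
    w[hi] = np.exp(-36.0 * ((k[hi] - kc) / ((nmodes - 1) - kc)) ** order)
    return w

def adaptive_rolloff_filter(u, tail_frac=0.2, energy_thresh=1e-6):
    """Apply the roll-off filter only when the fraction of energy held by
    the highest (marginally resolved) modes exceeds a threshold --
    this conditional action is the 'adaptive' part."""
    uhat = np.fft.rfft(u)
    nm = uhat.size
    tail = slice(int((1.0 - tail_frac) * nm), nm)
    tail_energy = np.sum(np.abs(uhat[tail]) ** 2) / np.sum(np.abs(uhat) ** 2)
    if tail_energy > energy_thresh:
        uhat *= rolloff_weights(nm)
    return np.fft.irfft(uhat, n=u.size)

# Demo: a smooth field contaminated with grid-scale (two-grid-point) noise.
N = 128
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
clean = np.sin(x)
noisy = clean + 0.3 * (-1.0) ** np.arange(N)  # pure Nyquist-mode oscillation
filtered = adaptive_rolloff_filter(noisy)
```

Running this removes the Nyquist-mode noise almost entirely while leaving the well-resolved sine wave untouched; applying the filter to the clean field does nothing, because the tail energy stays below the threshold.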
The high-end SELF module "Burgers1D" is used with the driver program in the highend/burgers1D/Testing subdirectory to generate the results discussed here. Check out the SELF code if you'd like to play around with the roll-off filter in a 1-D shock formation problem.
Comments, questions, criticisms? Let's talk science!