The Web is full of contradictions. An animation toy (Flash), essentially a diversion, the antithesis of work, became the driving force behind Adobe's push to gain space on the user desktop. First under Macromedia and now under Adobe, that one-time toy holds pole position in the Rich Internet Application market as a strong, solid product.
Now, perhaps inspired by animation strategies, the practice of lazy instantiation (also called delayed instantiation, or lazy loading) seems pretty ubiquitous. It is built into the Flex framework and into ORM engines, and it is a well-worn design pattern for data access. In 3D animation, the computer, or the artist, need only draw the things which are seen. There is no need to puzzle out what the other side of a building looks like if you never have to render it, and no need to draw the little short guy, because the stout man is always directly in front of him. And so it goes. If animators had to render every surface, producing an animated sequence would take many times as long as it does, and would consume far more processor time.
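The render-only-what-is-seen idea maps directly onto code. Here is a minimal sketch of lazy instantiation in Python; the class name, path, and the string standing in for pixel data are all invented for illustration:

```python
class LazyImage:
    """Defers an expensive render until the image is actually viewed."""

    def __init__(self, path):
        self.path = path
        self._pixels = None  # nothing rendered yet

    @property
    def pixels(self):
        # Render only on first access: the "other side of the building"
        # is never drawn if nobody ever looks at it.
        if self._pixels is None:
            self._pixels = f"<pixel data for {self.path}>"
        return self._pixels


img = LazyImage("teapot.png")
# No work has been done yet; the cost is paid only when .pixels is read.
data = img.pixels
```

The same shape shows up in ORM engines: the object exists immediately, but its expensive payload is faulted in on first touch.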
For data access, lazy loading plays a similar role: it minimizes hits to the database, boosting performance overall and raising capacity. On the other side of the coin, however, is the bleeding edge of Web 2.0, not content with merely being fast but wanting instead to peer into the future: prefetching patterns are being used to power typeahead controls and to speed access to certain data sets. The proper positioning of that data depends on how it is used, which will vary from population to population. So precisely balancing the predictive client-side cache load against timely garbage collection, while still reaping the yields of lazy loading for other objects, requires statistical analysis of the user community and, to some degree, specialized analysis of the individual's data-usage patterns.
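The tension between prefetching and garbage collection can be sketched as a small client-side cache. Everything here is an assumption for illustration: the `fetch` callback stands in for a backend request, `_predict` is a deliberately dumb placeholder for the statistical engine the text imagines, and LRU eviction plays the role of "timely garbage collection":

```python
from collections import OrderedDict


class PrefetchCache:
    """Tiny client-side cache: prefetch likely-next keys, evict LRU."""

    def __init__(self, fetch, capacity=8):
        self.fetch = fetch          # hypothetical backend call
        self.capacity = capacity
        self.cache = OrderedDict()  # insertion order doubles as LRU order

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)  # mark as recently used
            return self.cache[key]
        value = self._load(key)
        # Speculatively fetch continuations: for a typeahead control,
        # the next keystroke usually extends the current prefix.
        for nxt in self._predict(key):
            if nxt not in self.cache:
                self._load(nxt)
        return value

    def _load(self, key):
        value = self.fetch(key)
        self.cache[key] = value
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # "garbage collect" the oldest
        return value

    def _predict(self, key):
        # Placeholder prediction: pretend users extend the prefix with
        # a couple of common letters. A real engine would mine the usage
        # statistics of the community, or of the individual.
        return [key + c for c in "ae"]
```

A hit costs nothing; a miss pays for itself plus a couple of speculative loads, which is exactly the trade the text describes: spend bandwidth now to look fast later, and let eviction keep the cache honest.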
“Is this the type of guy who is only going to really look at the first four books in his book search?” If so, we can fetch the first four and fake the rest with a proxy. But is this same guy almost guaranteed to view every page with a “special offer” on it? If so, some of that data may do well to be prefetched, or staged, for him.
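The first-four-books-plus-proxy idea might look like the sketch below. The `search_backend` callable is hypothetical, standing in for whatever paged search service the application talks to; the result set fetches one page eagerly and faults in further pages only if the user actually scrolls past them:

```python
class BookResultSet:
    """First page fetched eagerly; the rest hides behind a lazy proxy."""

    PAGE = 4  # "the first four books"

    def __init__(self, query, search_backend):
        self.query = query
        self.backend = search_backend  # hypothetical: (query, start, count) -> list
        self.loaded = search_backend(query, 0, self.PAGE)  # eager first page
        self.backend_calls = 1

    def __getitem__(self, i):
        # Fault in additional pages only when the user scrolls that far.
        while i >= len(self.loaded):
            start = len(self.loaded)
            self.loaded += self.backend(self.query, start, self.PAGE)
            self.backend_calls += 1
        return self.loaded[i]
```

If the statistical guess is right and the user never looks past item four, the backend is hit exactly once; if the guess is wrong, the proxy quietly pays the extra round trips.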
So, do we run headfirst into the first wave of Delayed Prediction engines? More likely, we deal as best we can with the business problems at hand, always keeping an eye out for the low-hanging fruit. And some software house, destined to be bought by a larger company and then swallowed by a mega-corporation, some plucky little entrepreneur, will see huge gains by implementing a liquid GUI, with predictive fetching, lazy loading, statistical throttling, and enterprise messaging, as the framework for the company’s killer app. We shall see.