Sunday, February 29, 2004

Products and Frameworks

I have been thinking about this for some time, and the recent discussions at TSS got me going again. For a start, I belong to that camp (I guess it is extinct) that would prefer Hurd over Linux (I wish Linus had waited a while). But over time I have realized that every domain, whether it is operating systems, application servers, or some other product category, moves from monolithic systems to framework-based systems. This process takes its own time. The reason is that before a particular domain starts to grow, people do not have a complete understanding of what the framework for that domain should look like. As the monolithic products hit the road, people start seeing the cracks. As the implementations grow, the vendors understand the domain better and can develop a framework that fits it. But by that time these monolithic applications have grown so big that vendors have no incentive to rewrite their products on top of the framework and make the customer's life easier. There are, of course, some products out there which are the result of research and are therefore framework-based from the start.

At the moment, products like J2EE are at a point where the scope of the framework is being redefined. I will try to summarize what is going on and how things may proceed.

J2EE

So far J2EE has grown as a framework that specifies the lifecycle of the application inside the container, plus a set of services that may/should be made available by the container to the application (most of the time by reusing the existing specification for that particular domain, like Directory, JMS, etc.). They did a great job at that. But as people started putting applications together, and vendors started developing products to match these specifications, they realized that a lot of the time people need to be able to configure the container itself for their application to work, and an application framework alone is not good enough. At the same time, the commercial products have more to offer in terms of services than what the specification requires, and people would like to use those extras too. So what is really needed is an application server framework (I wish somebody would develop something similar for C - OpenGroup, are you listening?).

In parallel, people fed up with the complexity and cost of application servers, or looking for a lighter, more flexible, J2EE-independent application framework, started building frameworks for plain Java applications. Frameworks like Avalon (my favourite - why do I always love the stuff that most people do not care about?), PicoContainer, and Spring were the result of such requirements. Besides that, a lot of framework-based products were being developed to simplify the life of Java developers: Struts and WebWork (and similar web frameworks) for the front end, Hibernate for the back end, and so on. On top of that, advances and growing maturity in AOP and metadata attribute concepts and implementations were enticing people to use them in their applications.

Now people are looking to combine these various components to build J2EE applications. But the J2EE framework was never meant to address the problems people now want to solve. So how should we proceed from here? Basically, the next-generation framework will have to be a framework for application servers instead of for applications. What would such a framework look like?
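Before answering that, it may help to picture what these lightweight containers actually do for a component. Here is a minimal sketch of the kind of lifecycle contract they impose; the interface and method names are mine, for illustration, and not the actual Avalon (or PicoContainer/Spring) API:

    // Hypothetical lifecycle contract in the spirit of Avalon: the container
    // drives every component through these phases. Names are illustrative,
    // not the actual Avalon (or PicoContainer/Spring) API.
    public interface ServiceComponent {

        // Phase 1: the container hands the component its configuration.
        void configure(java.util.Properties config) throws Exception;

        // Phase 2: acquire resources, start worker threads, etc.
        void start() throws Exception;

        // Phase 3: release everything when the container shuts down.
        void stop() throws Exception;
    }

The component never bootstraps itself; the container drives it through these phases in a fixed order. That inversion of control is the common thread running through Avalon, PicoContainer, and Spring.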
Basically, it may look something like Avalon ;-). The idea is that each of the services - Servlet/JSP front-end hosting, JMS/asynchronous messaging, JNDI/directory and discovery, IIOP/sockets for synchronous messaging and RPC, a scheduler for time management, transactions, cache/replication/clustering, EJB for Java applications with business logic, a JDBC database manager, security services - is itself a service in the application server. Any of these services can be used by other services or by the application. So, for example, the database manager service may use the cache service to provide better performance (I sketch this below).

Now, some of these components/services, like EJB and the front-end containers, can themselves be containers which host the applications written by developers. These containers can be standard JSP containers or enhanced ones like Struts or WebWork, or they may support AOP or whatever other proprietary thing people want them to support. But it is important to define the lifecycle of these providers, and especially their management/configuration interface (maybe JMX is good enough). That would give users a standard way to configure these containers for their applications, without bothering with proprietary files like weblogic.xml.

At the same time, J2EE should get out of the business of defining which services are part of the specification. Any service that follows a Java specification should be allowed to be part of J2EE, as long as that specification is defined through the JCP. This would let vendors innovate and respond quickly to market requirements instead of waiting for J2EE to bless them. So if tomorrow a vendor sees that a rules engine is in demand, they should be able to ship one without breaking J2EE requirements.

Another important aspect of the system is the enhancement of these containers themselves by application developers. With AOP showing the way, it may be prudent to design the specifications for generic containers that are extensible through various methods: configuration files, AOP, or a proprietary mechanism. I think that if we can lead J2EE along this path we will have a much more flexible system.

Some may ask how the application server companies will make money in such a system. I am not sure we should worry about it. The basic application server vendors can continue to make money by shipping a complete product that provides a default implementation of all the services, because products will be built that depend on other services; even if you replace one component with a new product, users will still need all the other services to function well. So I do not see vendors being threatened by this system, and at the same time it will allow the experts in particular fields to develop components that can be integrated into the system without building proprietary wrappers around them. This matters most for services like cache, transaction management, and security, which cut through all the other services and the framework itself.

For this to work, another important piece is the JCP. When defining the services/APIs, the JCP should take the management aspect into account and develop a schema for it. Each specification should assume that service providers will implement it and will need to expose a JMX interface that lets customers tune the service at initialization or at runtime.
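Coming back to the database-manager example above, here is a rough sketch of that service-of-services idea. To be clear, ServiceRegistry, CacheService, InMemoryCache, and DatabaseManager are names I made up for illustration, not any vendor's API:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch: every facility in the server is itself a service
    // that other services can look up. All the names here are invented.
    interface CacheService {
        Object get(String key);
        void put(String key, Object value);
    }

    class InMemoryCache implements CacheService {
        private final Map<String, Object> entries = new ConcurrentHashMap<>();
        public Object get(String key) { return entries.get(key); }
        public void put(String key, Object value) { entries.put(key, value); }
    }

    class ServiceRegistry {
        private final Map<String, Object> services = new ConcurrentHashMap<>();
        void register(String name, Object service) { services.put(name, service); }
        Object lookup(String name) { return services.get(name); }
    }

    // The database-manager service consults the cache service before going
    // to the database - one server service reusing another.
    class DatabaseManager {
        private final CacheService cache;

        DatabaseManager(ServiceRegistry registry) {
            this.cache = (CacheService) registry.lookup("cache");
        }

        Object loadRow(String key) {
            Object row = cache.get(key);
            if (row == null) {
                row = queryDatabase(key); // stub standing in for real JDBC work
                cache.put(key, row);
            }
            return row;
        }

        private Object queryDatabase(String key) { return "row-for-" + key; }
    }

Note that you could swap InMemoryCache for a clustered cache and DatabaseManager would never notice, which is exactly why the cut-through services (cache, transactions, security) have the most to gain from this arrangement.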
Even though some of the existing specifications do take management into account, this information is missing from most of the others, and the result is chaos when these systems hit the street.
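To show what I mean, here is roughly what exposing such a tuning knob through a standard JMX MBean looks like. The CacheService and its attributes are made up for illustration; only the JMX plumbing (the MBean naming convention, ManagementFactory, registerMBean) is the standard API:

    // CacheServiceMBean.java - the management interface. The JMX
    // standard-MBean convention requires it to be named
    // <implementation class> + "MBean". The attributes are invented.
    public interface CacheServiceMBean {
        int getMaxEntries();
        void setMaxEntries(int maxEntries); // tunable at runtime from a console
        long getHitCount();                 // read-only statistic
    }

    // CacheService.java - the service itself, with illustrative attributes.
    public class CacheService implements CacheServiceMBean {
        private volatile int maxEntries = 1000;
        private final java.util.concurrent.atomic.AtomicLong hits =
                new java.util.concurrent.atomic.AtomicLong();

        public int getMaxEntries() { return maxEntries; }
        public void setMaxEntries(int maxEntries) { this.maxEntries = maxEntries; }
        public long getHitCount() { return hits.get(); }
    }

    // Bootstrap.java - register the service with the platform MBean server.
    import java.lang.management.ManagementFactory;
    import javax.management.ObjectName;

    public class Bootstrap {
        public static void main(String[] args) throws Exception {
            ManagementFactory.getPlatformMBeanServer().registerMBean(
                    new CacheService(),
                    new ObjectName("appserver:service=Cache"));
            // The service is now visible to any JMX console (jconsole, etc.)
            // for inspection and runtime tuning.
        }
    }

The JCP's job, in the scheme I am describing, would be to standardize the shape of interfaces like CacheServiceMBean for each service, so that "how do I tune it" stops being a per-vendor adventure.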

Friday, February 06, 2004

User Pain Lifecycle and an approach to solving the problem

Basically, why does a company buy a product even after building it in-house? I can think of some technical reasons (I am sure there are a lot of non-technical ones) -
  1. Vendor has more subject matter expertise – A simple idea: if the vendor has designed and developed a product, then the vendor would have designed the solution as an environment-independent framework model which addresses most of the use cases out of the box and, at the same time, can be extended to cover the rest.
  2. Vendor will accumulate more SME over time – As time progresses, the vendor integrates the product into more diverse environments and has to enhance the system for different use cases, so when the company eventually runs into those use cases, the vendor will already have solutions.
  3. Vendor has dedicated resources and can spend more time, money, and effort to make sure that the product works.
But given my limited experience with new products, it seems that most of the time none of this is true.

Pain Life Cycle

Typically, most companies start with a "pain", and then someone comes up with an idea to solve the problem. So a small system is built to relieve the pain, and it slowly starts getting accepted and enhanced. As time progresses, the small product grows into a large product which fulfills most of the internal users' needs.

Depending on how good the architecture and the coding team were, the IT department ends up with either a fine product or a blob of code that works but that nobody understands.

While this was going on, somebody (the Vendor) noticed that this requirement exists in a lot of places (the start of the hype) and started building a product to address client needs (note that this may not apply to the few companies started by people from an academic background). Marketing/Sales wants to get the first few versions of the product to market as soon as possible, resulting in a faulty architecture, sloppy code, and limited testing. This product has very basic functionality, is defective, and is architecturally weak.

At this point the media and analysts have started to pick up on the hype and have attached words like "paradigm shift" and "next generation" to these requirements. At the same time, the company is in deep pain from managing the in-house blob of code.

The company buys the product after some evaluations and pilots, with no idea how the requirements are going to change over time.

So the company ends up with a product that does not provide all the functionality the in-house product provided, does not integrate well into the company's environment, is defective, unstable, and slow, and is not extensible.

Why do companies do that?

I don't know.

Another approach

But what if the company took a different approach -

  1. Consortium of vertical-industry users - Form a consortium of IT professionals who will come up with a set of use cases common to all the members. Instead of vendors developing fuzzy use cases and even fuzzier standards around them, each vertical segment should have its requirements laid out. The consortium may have its own lab, or donated labs, where products can be verified to be requirement-compliant.
  2. Selectively "Open-design" their internal product - Basically this is based on the idea that the products developed internally are superior to the first few versions of vendor products. So in order to cut down the time and provide vendors with roadmap, the customers can open-design(could not think of any other word) the high level design and development information which can be made either public or provided only to legitimate members. This will help vendors and other open-source developments to get a handle on "what client wants".
  3. Share experiences with the consortium - Another important idea: sharing experiences ensures that knowledge is not wasted and people can learn from the mistakes of others.
  4. Vendor interface - Force vendors to implement the common features required by most of the members of the consortium, and make them follow a framework architecture.

Most of these thoughts are not new, and I guess people have just not seen the benefits of sharing outweigh the benefits of keeping their internal information secret. But until we have a legal memory-erasing device or a single giant enterprise, people will continue to change jobs and take these "internal secrets" to competitors. So why not do the sharing in a formal way, especially when it will help companies improve their core functionality instead of being obsessed with IT?