I would like to design a very-high-availability application (never take the server down, roll out features without restarts, etc.) with both client (probably a C# GUI) and server components (Java, C++, Perl).
Following advice I got (from minimize-code-maximize-data.html and Yegge), I would like to make most of the logic dynamically readable from a database, so that all configuration (including all GUI configuration, text translations, business rules, as well as the data itself) resides on the server in a database, rather than in code that requires a restart to be loaded into executable memory.
I would like to be able to customize any aspect of the application without restarting either the client or the server, and have the application reflect changes with as short a lag as possible (dynamic class loading, etc.).
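To make the "reflect changes without restart" part concrete, here is a minimal sketch of the shape I have in mind on the server side. The class name, the polling trigger, and the string-map rule format are all hypothetical; the point is only that request threads read an atomically swapped snapshot, so a fresh copy loaded from the database takes effect with no restart:

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: business rules live behind an AtomicReference and are
// refreshed from the database by a background poller (or a change
// notification). Readers always see one consistent snapshot; swapping in a
// new snapshot is the "deployment".
public class RuleStore {
    private final AtomicReference<Map<String, String>> rules =
            new AtomicReference<>(Map.of());

    // Called with a fresh snapshot read from the database.
    public void reload(Map<String, String> freshSnapshot) {
        rules.set(Map.copyOf(freshSnapshot)); // atomic swap, no locks for readers
    }

    // Request threads read the current snapshot without blocking.
    public String rule(String key, String fallback) {
        return rules.get().getOrDefault(key, fallback);
    }

    public static void main(String[] args) {
        RuleStore store = new RuleStore();
        System.out.println(store.rule("max.login.attempts", "3")); // default: 3
        store.reload(Map.of("max.login.attempts", "5"));           // "deploy" without restart
        System.out.println(store.rule("max.login.attempts", "3")); // now 5
    }
}
```

The lag between a database change and its visibility is then just the polling or notification interval, which is the kind of knob I would like to understand the limits of.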
What are the performance and other limitations of designing such a 'never kill' system? Has anybody managed to create such an application? What were the main lessons learned? When is this not cost-effective, so that a more traditional build, release, QA, couple-of-hours-downtime approach is required?
Something not all too different to consider is building a small, heavily static core around a script interpreter, like Rhino:
http://www.mozilla.org/rhino/ScriptingJava.html
That way all the logic and data can be put into reloadable scripts, and the only core part of the program is the script runner and its shell-like host.
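The shape of that core can be sketched in a few lines. This is a toy, not a real Rhino embedding: to stay self-contained it uses a trivial one-line "add <n>" rule format in place of JavaScript, and the file name and `compile` helper are made up. With real Rhino you would instead hand the file contents to `org.mozilla.javascript.Context` and get back a callable; the reload loop is the same either way:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.function.Function;

// Minimal sketch of the "static core + reloadable logic" idea. Only this core
// is compiled in; the logic file can be edited and re-read while the process
// keeps running. A real system would swap this stand-in compiler for Rhino.
public class ScriptCore {
    // Stand-in for the interpreter: "compiles" a script of the form
    // "add <n>" into a function. Rhino would do this for real JavaScript.
    static Function<Integer, Integer> compile(String script) {
        int n = Integer.parseInt(script.trim().split("\\s+")[1]);
        return x -> x + n;
    }

    public static void main(String[] args) throws Exception {
        Path logic = Files.createTempFile("logic", ".rule");

        Files.writeString(logic, "add 10");
        Function<Integer, Integer> f = compile(Files.readString(logic));
        System.out.println(f.apply(5)); // 15

        // "Hot deploy": rewrite the script and recompile -- the core never stops.
        Files.writeString(logic, "add 100");
        f = compile(Files.readString(logic));
        System.out.println(f.apply(5)); // 105
    }
}
```

Everything interesting lives in the script file; the core only knows how to load, compile, and call it.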
This definitely costs performance; I think that is a given.
If I remember correctly, Yegge posted something similar on his blog once, so if you get to talk to him again, you might ask him about it.