A Document and Application Server Framework for Perl
Presented at the O'Reilly Perl Conference 5, July 23-27 2001
This paper describes an ongoing Open Source project to design and develop a generic, configurable, extensible, scalable and above all useful architecture for a document, information and application server framework, written in and for Perl. It discusses the rationale for building such a system and draws inspiration from the various other application servers and related software components currently available. It attempts to identify a common architecture that might permit closer integration of these different systems and lead to a convergence of features. It presents Camelot, a prototype implementation of an extensible application server framework, and describes the design philosophy and general architecture underlying it. In particular, the architecture aims to encourage a clear separation of concerns between data, application logic and presentation components, thereby promoting modularity, re-usability and ease of customisation of content delivery services and other server-bound applications.
- Application Server Frameworks
- Fundamental Components and Facilities
- Functional Requirements
- Camelot Architecture
- XML Configuration
- Service Invocation
- Summary and Conclusions
In very general terms an application server is any kind of programming framework which provides reusable services around which applications can be built. A web application server will typically provide facilities for authentication ("Who are you?"), authorisation ("Are you supposed to be here?"), session management ("Have you been here before?"), database abstraction ("Which of these products are you interested in?"), presentation ("How do you want to view this?") and so on. Applications and sub-application components can be written into a framework, reusing the underlying services and thereby reducing development time and component complexity. This allows the author to concentrate on the application specific task at hand and worry less about the wealth of peripheral detail that might otherwise require attention. Ideally, the practice of writing components to a common framework should also make them more generic, portable and reusable.
This paper investigates application server frameworks and attempts to identify the core features, functionality and other requirements of a truly generic and extensible implementation. It introduces Camelot, a prototype framework written in Perl which has been used to expound some of these ideas, act as a platform for investigative research and provide an implementation test bed. It will describe the key facilities and features of Camelot, showing examples of use and demonstrating server/service configuration from XML files. It will illustrate how the underlying architecture promotes a generic approach to building adaptable application servers into which reusable and highly customisable components can be built. The final section will present a summary of the project, drawing conclusions from the work undertaken to date and suggesting possible future directions.
In this paper we will make a subtle but important distinction between an application server and an application server framework. It is this: an application server framework is considered to be a generic structure around which specific application server instances can be built. An application server instance is then the deployment of a specialised arena for hosting application components, one which is tailored to a particular environment or modus operandi.
In most, if not all cases, existing application servers and the underlying frameworks around which they are built are designed specifically for use via the World Wide Web (i.e. Internet/intranet), and are typically bound to a particular web server, operating system or platform. They may also be dependent on specific underlying technologies (e.g. XML, SQL, etc) or software (e.g. Apache modules, particular template engine, database abstraction layer, etc) or in some other way specific to a particular application area or environment. By our definition, we would generally consider these to be application servers but not necessarily application server frameworks (or at best, limited ones).
Examples of specific application servers and/or services include template driven web content delivery systems (e.g. Apache::Template, Apache::PageKit), XML delivery and presentation systems (e.g. AxKit), content management systems (e.g. Iaido) and more general web application servers (e.g. OpenInteract). More comprehensive and mature systems like Zope (written in, and primarily for Python) typically offer a range of these different service capabilities.
To the author's best knowledge at the time of writing, the only existing Perl solution which implements a truly generic application server framework is AO. This implements a Perl version of the Java Servlet API and is partially based on Tomcat, the Apache servlet engine. Conceptually, it implements a generic servlet engine which in theory can operate independently of platform, web server, presentation engine, and so on. At present, it is implemented only as an Apache/mod_perl module using HTML::Mason as a presentation engine, but the architecture provides the relevant hooks for customisation of these and other aspects. Readers interested in deploying a functional and practical Perl-based application server framework in the immediate future (which rules out Camelot) would be well advised to investigate AO further. AxKit is also worthy of consideration as a very powerful XML content delivery system and application server, as is OpenInteract, another recent release of note which provides a more general framework for web based application development.
These various application servers, frameworks and other general solutions are typically built around, or are themselves manifestations of a number of different fundamental elements. In some cases these are related to specific software components (e.g. web server, database, etc.) and in other cases, describe more general facilities or aspects (e.g. content management, session management, logging, etc.). These include, but are not limited to, the following items:
A web server lies at the heart of a web application server, effectively providing the "transport layer" by handling incoming HTTP requests and delivering files or dispatching handlers to generate content in response. The combination of Apache and mod_perl in particular provides a powerful, flexible and efficient platform for developing web applications and a base on which numerous other higher level tools have been built. By itself, the mod_perl solution is fairly low-level requiring reasonable understanding of the specifics of the request phases, including authentication, authorisation, configuration management, and so on.
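By way of illustration, a minimal mod_perl (1.x) content handler might look like the following sketch; the module name and content are, of course, hypothetical.

```perl
# A minimal mod_perl 1.x content handler, illustrating the low-level
# interface described above.  It would be configured in httpd.conf with:
#   <Location /hello>
#     SetHandler perl-script
#     PerlHandler My::Hello
#   </Location>
package My::Hello;
use strict;
use Apache::Constants qw( OK );

sub handler {
    my $r = shift;                    # the Apache request object
    $r->send_http_header('text/html');
    $r->print("<html><body>Hello from mod_perl</body></html>");
    return OK;                        # tell Apache the request succeeded
}

1;
```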
There are numerous tools, most of which appear to be written in Perl and available from CPAN, which offer some kind of template processing facility. The Template Toolkit, the author's own small contribution to the state of the art, provides a general purpose template language for generating and manipulating content. It promotes a clear separation of concerns between data (Perl data), application (Perl code) and presentation elements (templates), encouraging a structured approach to reuse for the sake of simple but flexible construction and ease of subsequent maintenance.
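A short example of the Template Toolkit in use; the template name, include path and data shown here are purely illustrative.

```perl
# Processing a template with the Template Toolkit: the data is supplied
# as a plain Perl data structure, entirely separate from the template.
use strict;
use Template;

my $tt = Template->new({ INCLUDE_PATH => '/usr/local/templates' });

my $vars = {
    title    => 'Product List',
    products => [ { name => 'Widget', price => 9.99 } ],
};

# process() reads the template, fills in the variables and sends the
# result to STDOUT (or to a file/scalar if a third argument is given)
$tt->process('products.tt2', $vars)
    || die $tt->error();
```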
Apart from a few other enlightened template modules which also promote a clear separation between data and presentation (e.g. HTML::Template), the majority of modules adopt the approach of embedding Perl programming fragments directly into template documents. Text::Template is one such example, an elegantly simple module which incorporates the full programming power of Perl into text templates. Embperl is a more comprehensive system adopting this approach and tailoring it especially to the web application environment. HTML::Mason is yet another powerful and highly regarded tool which allows the construction of complex web applications by combination of smaller components which integrate Perl code and presentation elements.
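For comparison, a small Text::Template example showing Perl code embedded directly in the template text (the template content is invented for illustration): anything between the { } delimiters is evaluated as Perl and replaced by its result.

```perl
# Embedding Perl fragments directly in a template with Text::Template.
use strict;
use Text::Template;

my $template = Text::Template->new(
    TYPE   => 'STRING',
    SOURCE => 'Dear {$name}, you owe {sprintf("%.2f", $amount)} pounds.',
);

# fill_in() evaluates the embedded fragments with the given variables
print $template->fill_in(HASH => { name => 'Arthur', amount => 10 });
```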
However powerful this approach may be, it is widely regarded as being limited as to the extent and ease with which it allows larger systems to be adapted, extended and generally managed. Without a clear separation of concerns between application, data and presentation, it is not easy or even possible to change one of these aspects without concern for the others. This is generally not an issue when the approach is used for small document systems, applications or tasks, but can become a problem when used in more complex scenarios. It is possible to use this approach to build large systems that are well structured but it generally requires discipline, experience and a good appreciation of the underlying design principles.
XML is a semantic markup language. Unlike HTML which for the most part adorns plain text with special tags to indicate font size, colour, page placement and numerous other presentation details, XML is a generalised markup language which adorns plain text with special tags to indicate what something is, not what it looks like. In its most simple form, XML can be thought of as a flexible content and data interchange markup language which permits a clear separation of data/content from presentation style (and thus it also falls into the previous category). Furthermore, the innately extensible nature of XML means that it can also be adapted and adopted to almost any application area imaginable (however appropriate or inappropriate that might be - it is possible, for example, to use XML as a programming language in its own right, but that doesn't necessarily mean it's a good idea).
There are dozens of Perl modules available for processing XML using numerous different techniques. The XML::Parser module provides low level access to Expat, while XML::DOM, XML::XPath and XML::Grove, to name just three, provide facilities for higher-level access to XML content. Other modules such as XML::XSLT, XML::Sablotron and XML::XPathScript, provide transformation techniques for applying style sheets or otherwise processing XML documents into different formats. Moving higher still into the realm of application code, the XSP concept allows XML "tag libraries" to embed into documents well-defined calls to reusable application components. At the top of the evolutionary ladder for XML processing in Perl, AxKit provides a full-blown, XML content delivery system and application server environment built around Apache/mod_perl.
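A brief example of the higher-level access offered by XML::XPath; the document content is invented for illustration.

```perl
# Locating elements in an XML document with an XPath expression.
use strict;
use XML::XPath;

my $xml = <<'EOF';
<zoo>
  <animal id="foo123"><name>Wombat</name></animal>
  <animal id="bar456"><name>Emu</name></animal>
</zoo>
EOF

my $xp = XML::XPath->new( xml => $xml );

# findnodes() returns the elements matching the expression
foreach my $node ( $xp->findnodes('/zoo/animal/name') ) {
    print $node->string_value(), "\n";
}
```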
Database abstraction is typically found at three levels. At the highest level of the application, tools like Tangram and SPOPS act as intermediary layers, marshaling data between the application code (e.g. Perl object classes) and persistent storage media (e.g. relational database tables). In doing so, they simplify a program's interface to an underlying data source and hide the specific details of implementation, thereby reducing application complexity and increasing portability.
At a slightly lower level, modules like DBIx::Recordset and Alzabo provide a generalised abstraction of a database implementation. Instead of directly writing SQL queries to access a database, calls are made to the intermediary layer to perform fundamental record-oriented operations like insert, retrieve, update and delete. The abstraction hides the specific details of the SQL queries generated, thereby promoting greater conceptual simplicity and portability across the different flavours of SQL implemented by database vendors, and also to non-SQL systems.
At the lowest level, the DBI module and related DBD drivers provide an abstraction of the environments specific to particular database implementations. Its methods provide a consistent interface for connecting to, querying, retrieving data from, and otherwise controlling an underlying database system. DBI is the de facto standard module at this level.
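A typical DBI interaction might look like the following sketch; the connection details and credentials are placeholders.

```perl
# Fetching a row via the DBI, the lowest level of database abstraction.
use strict;
use DBI;

my ($user, $password) = ('camelot', 'secret');    # placeholder credentials

my $dbh = DBI->connect('dbi:mysql:camelot', $user, $password,
                       { RaiseError => 1 });

# placeholders ('?') keep the SQL independent of the particular values
my $sth = $dbh->prepare('SELECT * FROM products WHERE id = ?');
$sth->execute('foo123');

my $row = $sth->fetchrow_hashref();   # first matching row as a hash ref

$sth->finish();
$dbh->disconnect();
```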
Content management as a general concept relates to the process of adding, updating, deleting and otherwise maintaining content delivered by a web or application server. It typically embodies the concepts of content ownership, version control, preview facilities, application of presentation styles or handlers specific to content type, and so on.
If the first lesson learnt from the explosion of the World Wide Web into our collective consciousness was that "content is king", then the second was surely that creating and managing content is one of, if not the most costly and time consuming element of running a web site. Thus, a truly useful and effective content management system is widely regarded as a highly desirable item.
In the realm of the web server, security falls into the processes of authentication and authorisation. HTTP provides a basic authentication facility which permits a user to identify him or herself to a web server by means of a user ID and password. HTTPS is an implementation of HTTP over the Secure Sockets Layer (SSL) and offers a more secure and robust approach to authentication. Authorisation is the process of validating that an authenticated user has access to a particular page or resource on a web server.
At a higher conceptual level, there may be numerous different authorisation strategies deployed. For example: hierarchical security levels, access control lists, access tokens (keys), and so on. It might also be the case that a particular user can access part of a system while playing one of a number of different roles, and thus is expecting to be presented with different views (e.g. guest user, administrator, editor, reviewer, etc).
HTTP is a stateless protocol, meaning that no information is retained from one request to the next. However, there exist numerous "bolt-on" techniques for maintaining state across requests using browser cookies, by encoding request parameters in the URL, and so on. These create the appearance of a continuous session throughout which the state of various data items can be maintained. The Apache::Session module provides an abstraction of such session data, allowing arbitrary data to be saved and retrieved from a variety of different backing stores (e.g. SQL database, plain file, Berkeley DB File, etc).
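A short example using Apache::Session::File, one of the backing stores mentioned above; the directory paths are illustrative.

```perl
# Persisting session state with Apache::Session: the session is
# accessed as an ordinary hash tied to a backing store.
use strict;
use Apache::Session::File;

my %session;
tie %session, 'Apache::Session::File', undef, {
    Directory     => '/tmp/sessions',
    LockDirectory => '/tmp/sessions/locks',
};

my $id = $session{_session_id};      # generated id, e.g. sent as a cookie
$session{basket} = [ 'foo123' ];     # arbitrary data, saved automatically

untie %session;                      # flush the session to the store
```

On a subsequent request, passing the saved id in place of undef re-attaches the same session data.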
Caching strategies can be applied at various different levels to achieve significant improvements in real and perceived system performance. At the outermost level, entire web pages can be cached if there is a deterministic way to assert that the output content remains consistent for a given set of input parameters. This implies that for a given URL and optionally, a set of CGI parameters, the same page content will always be returned and that there will be no side-effects caused by processing or otherwise generating that page. A counter example would be a handler for the submission of form data which causes a record to be added to a database.
On a similar basis, smaller sub-page components or services can also be cached according to the same heuristics. At lower levels still, database abstraction layers or other persistent storage frameworks may cache raw data to avoid reading/writing to or from a relatively slow secondary data store. Similarly, an XML document may be parsed into and then cached in Document Object Model (DOM) form, avoiding the need to re-perform this relatively costly parsing operation for subsequent requests that utilise it.
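The page-level strategy can be sketched in a few lines. The function and structure here are hypothetical, intended only to show how a cache key might be derived from a URL and its parameters; as noted above, this is only safe for pages with no side effects.

```perl
# A naive illustration of whole-page caching (hypothetical code).
my %page_cache;

sub cached_page {
    my ($url, $params, $generator) = @_;

    # derive a deterministic key from the URL and sorted CGI parameters
    my $key = join '&', $url, map { "$_=$params->{$_}" } sort keys %$params;

    # generate and store the page only on a cache miss
    $page_cache{$key} = $generator->($url, $params)
        unless exists $page_cache{$key};

    return $page_cache{$key};
}
```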
To subvert the words of Thomas Edison, building an application server is 1% inspiration and 99% debugging. Good error handling, logging and general debugging facilities are therefore essential.
Having identified some of the key components and fundamental elements we shall now consider some of the core functionalities that they collectively implement. The following items identify just some of the particular application areas that might be requirements for a document or application server system. Many are loosely defined, possibly overlapping or integrating aspects of other items, but hopefully serve to crystallise particular features or typical usage areas that a generic system should be able to support.
In the simplest case, we may want to serve up regular, static HTML pages that require no other processing. However, as with many of these examples, we may wish to apply peripheral services like adding standard headers and footers, apply a caching or security strategy, and so on.
Delivering pages that have been constructed by processing source templates (e.g. Template Toolkit, HTML::Template, etc). Similar to the XML transformation process described above, but likely to be less formal and possibly interacting with various data sources other than XML (e.g. SQL data, Perl program code, etc).
A variation of the previous categories, this would typically involve a single template being used to present records from an SQL database or other data store. One template is thus used to generate a number of virtual pages. The process may be a direct application of the data into a template (e.g. using the Template Toolkit with data provided externally or accessed directly via the DBI plugin) or by first transforming it into another format to utilise an established transformation path (e.g. using DBIx::XML_RDB to convert SQL result data into XML for transformation via XSL, or some other method).
With the appropriate security model in place, this would entail the provision of a facility to allow database content to be added or updated by end users, external applications or remote servers. A simple example would be the provision of an HTML form for adding/updating a database record. A more advanced scenario might have an application sending data to a server via an XML document for central collation, storage and reporting.
Identified earlier as a general aspect, this can also be defined in more concrete terms relating to specific functionality. In this sense, content management represents a facility which allows an end user to upload a new page, template, or sub-page template component to the server (i.e. anything that the server might later serve out). Again, this requires an adequate security model and the definition of an appropriate ownership relation between users and content.
The provision of services through which the server itself can be managed, allowing new components and server elements to be easily installed or updated.
An extension of data management, this describes the more general case of multiple servers, applications or agents interacting directly with each other. An example of this might have an organisation running multiple servers in different physical locations which should cross-transmit articles posted to a bulletin board application (e.g. similar to the USENET news transmission model).
This describes the situation in which a number of discrete steps are required to perform a particular task, possibly over an extended period of time, almost certainly across multiple requests. An example of this might include an online purchasing facility through which goods are selected, followed by checkout (listing selected goods and total price) and then payment. Another example is the request to be added to a mailing list which typically causes a confirmation email to be sent, and a reply expected before the user is subscribed.
As well as delivering pages online and running interactive services, it should also be possible to use these various techniques to generate pages offline in batch mode. For example, if a database table is only updated once a day/night and contains a limited number of records, then it may be preferable to generate and store static HTML pages for each record, rather than construct dynamic virtual pages online.
The World Wide Web is not the only application area for the client/server model. We may want to run a local document server which formats or generates content as PostScript for sending direct to a printer. Alternately, we might want to run a server entirely within another application, thereby providing a generalised method for building complex content from command line programs.
Ideally, the server should provide the facility for communicating and interacting with other programming languages, platforms or software. If we need to run a Java application or fetch data from a database running on a remote Windows machine, then it would be highly desirable if the server could handle the specific details of this.
Given this daunting array of requirements, the question arises as to whether any single document or application server architecture can provide a suitably generic framework into which these many different services can be built. If there is such a solution, then can it be made general enough to support such diversity, without having to exist at such a low level that it offers no reusable functionality or other clear benefits over existing solutions?
The rest of this paper will focus on the architecture and implementation of Camelot, a prototype system which we hope will answer this question one way or the other. It is, at present, a highly experimental research project through which different architectures and ideas can be tested to evaluate their relative strengths and weaknesses. At the time of writing, a basic framework exists for building open document and application services, but it is far from complete and of limited use for real world applications.
It can be argued that in very general terms, most computer programs can be broken down into three constituent phases: input, processing and output. In these enlightened days of design patterns, the Model/View/Controller (MVC) pattern offers a comparable decomposition, one that can be applied to almost any application. The model can be thought of as representing the data of a system, that which is originally provided as input. The controller represents the processing phase whereby the underlying model (input data) is manipulated or changed. The view represents the output, where the model (data) is presented in a particular format.
The MVC pattern is a more powerful paradigm for thinking about program construction than shown in this very simplistic example because it explicitly decouples the three core elements in both space and time. An underlying data model may simultaneously present multiple views and have any number of controllers that may modify it over an extended period of time. It should be possible to reuse controllers, views and models, integrating them in numerous different ways to achieve a high degree of flexibility and independence.
This is one of the fundamental concepts underlying the Camelot architecture. It promotes a clear separation of concerns between data, presentation and application logic elements. To represent these aspects, Camelot defines the following entities:
Data comes from all kinds of places: text files, XML files, SQL databases, CGI parameters, global configuration parameters, and so on. It may be stored in various different formats and require all manner of different tools and techniques to retrieve and massage it into an appropriate format for use. But ultimately, it's just data.
Camelot treats these various different data sources as "resources". A resource is a piece of data, or more accurately, some kind of facility which provides access to particular pieces of data. Before you can use a resource you must acquire it. When you're done with it, you should release it.
A typical example of a resource is represented by the individual records in a database table. To acquire the resource, you fetch a particular row or rows from the table. When you're finished with it you free the memory used, notify the database, call a method on a statement handle, or do whatever your particular database system requires you to do to say you're finished. Most of the time, you can trust Perl or one of your database abstraction layers to do it for you.
The Camelot::Resource module implements a base class from which any resource objects are derived. To illustrate the previous example, the Camelot::Resource::DBI::Table::Row module provides access to a row in a table via DBI.
my $resource = Camelot::Resource::DBI::Table::Row->new(
    dsn   => 'dbi:mysql:camelot',
    table => 'products',
    key   => 'id',
);
To acquire an instance of this resource, the acquire() method is called, passing in any relevant parameters to uniquely identify the resource. In this case, it is a product identifier.
my $product = $resource->acquire('foo123');
This particular resource will translate the above request into a SQL query of the form:
SELECT * FROM products WHERE id = 'foo123'
It then submits this to the underlying DBI connection and calls fetchrow_hashref() to return the first row selected.
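Internally, such a resource's acquire() method might amount to little more than a thin wrapper around DBI. The following is an illustrative sketch rather than Camelot's actual implementation.

```perl
# A hypothetical sketch of acquire() implemented in terms of DBI --
# not Camelot's actual code.  It assumes the constructor has stored
# a connected database handle and the table/key names in $self.
sub acquire {
    my ($self, $id) = @_;

    my $sth = $self->{dbh}->prepare(
        "SELECT * FROM $self->{table} WHERE $self->{key} = ?"
    );
    $sth->execute($id);

    return $sth->fetchrow_hashref();  # first matching row as a hash ref
}
```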
In practice, acquired resources may be very simple data structures (e.g. hash array of data), complex objects (e.g. security tokens, session objects), data interface objects (e.g. database handle or abstraction layer), content data (e.g. XML DOM), and so on. By providing a uniform access method to these disparate data types through the acquire/release interface of a resource, a high degree of conceptual clarity can be achieved. Furthermore, it allows potentially complex data management techniques (e.g. caching) to be automatically applied to resources in a consistent manner, regardless of the underlying data type or facility.
Actors represent self-contained units of application logic. They perform actions which operate on data, usually in the form of resources that have been previously acquired. Unlike a resource which is considered to be a "read-only" piece of data, actors may have side effects that change the state of a system.
An example of an actor would be an object responsible for adding a record to a database. It might define that single task as a number of separate actions: validating the input data, performing calculations to provide additional field data, constructing a query and submitting it to a database handle. The elements of data that it operates on, and possibly even the database handle, would be provided as previously acquired resources.
In other cases, actors might be used to implement common libraries of functionality (e.g. CGI), manage broader aspects of the system (e.g. content or system management) or encode application specific functionality. In implementation terms, an actor is simply an object and an action is a method. When Camelot invokes an action it calls the method passing a reference to the Camelot::Request object and any other configuration parameters specified (more on that shortly).
A presenter is something which generates views. In the general case, a presenter is an implementation of, or a wrapper around another module which performs template processing (e.g. Template Toolkit), a transformation process (e.g. XSL) or some other mechanism for generating output. The different views that a presenter can present are effectively implemented as templates or style sheets that control the output process, determining what gets output and how.
Presenters are derived from the Camelot::Presenter class and implement a simple interface for presenting views via the present() method. As with resources, having a common interface ensures that Camelot can treat one presenter much the same as any other. The method is passed a reference to a hash array containing currently defined data items. This will typically include the various resources that have been acquired by a service and/or additional data items that have been provided or manipulated by actors.
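As a sketch (and not necessarily Camelot's actual implementation), a presenter wrapping the Template Toolkit might look like this; the method signature and configuration layout are assumptions for illustration.

```perl
# A hypothetical presenter subclass delegating to the Template Toolkit.
package Camelot::Presenter::Template;
use strict;
use base 'Camelot::Presenter';
use Template;

sub present {
    my ($self, $view, $data) = @_;    # $data: hash ref of current items
    my $output = '';

    # create the Template object on demand, reusing it thereafter
    $self->{tt} ||= Template->new($self->{config});

    # process the named template (the "view") with the data items
    $self->{tt}->process($view, $data, \$output)
        || die $self->{tt}->error();

    return $output;
}

1;
```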
The rest of the Camelot architecture is effectively a wrapper around and support framework for these items. The following additional entities are defined:
A service is a self-contained operational unit which combines one or more of the above items. It provides a container for them and defines a workflow as a series of actions to be taken when the service is invoked. Typically, a service will acquire some resources, perform some actions and then generate some output by presenting one or more views. Finally, it will automatically release any resources that have previously been acquired. In the event of all or part of a service failing to run as expected, custom error handling facilities may be specified which the framework will automatically invoke.
A service can be something little or something big. It may require no input data, perform no fundamental operations, or generate no output, but presumably it does at least one of the above. Or it may do something very complex, requiring many external resources, performing multiple operations, calling on several other services, and producing output from a number of different views. Camelot allows small services to be constructed that perform useful, reusable tasks which are then built up into larger services in different combinations.
For example, one service may be responsible for generating a menu, another for creating a view of a product from an SQL database record and an XML document, a third for managing a shopping cart, and a fourth for combining the output of all these other views into a single HTML page for sending back to the client.
As with resources, the consistent interface to Camelot services provides an opportunity for output to be automatically cached. Although this only applies to services that have no side effects, the design allows a complex service to be broken down into smaller services (which themselves become more reusable), to which different caching strategies can be applied.
A server is the overall framework in which services are provided. It includes functionality for mapping requests to services and overseeing the processing of these services. It also provides peripheral features like logging and error reporting.
A client communicates with the Camelot server via an interface module. Different interfaces may be provided for particular operating environments (e.g. Apache/mod_perl, CGI script, TCP/IP socket, GUI, command line, etc). The interface handles the specific details related to accepting a client request from the environment (e.g. an HTTP request received via Apache/mod_perl, notification from a GUI control, etc) and returning an appropriate response. This is the thin abstraction layer that prevents Camelot from being limited to just a web-based application server.
The interface module through which a client connects to a server constructs a request object to encapsulate the details of the request, store resources that are acquired by services, and also manage one or more output buffers that different services might generate. In some cases, requests may be thin wrappers around other objects (e.g. Apache::Request objects within the Apache/mod_perl environment). The request maintains a reference to the interface that created it so that output and/or status messages can be dispatched back to the client by the appropriate channel and in the correct format.
Camelot comprises a collection of Perl modules which represent the above components and can be interconnected within a Perl script as required. Far more useful, however, is the ability to specify a Camelot configuration using XML. Much, if not all of the tedious overhead involved in configuring an application server can be reduced to the tedious overhead of updating an XML configuration file. The key difference is that the latter can be automated far more easily as there are many well defined techniques for manipulating XML documents.
The following fragments illustrate how different aspects of a Camelot system might be configured.
This example defines a resource to retrieve records from an SQL database table via DBI. The 'name' attribute specifies a unique name for the resource within the enclosing service and 'class' specifies the implementing class (this is implicitly prefixed by 'Camelot::Resource::'; 'module' can be used in place of 'class' to avoid such a prefix being added). Additional parameters enclosed within the <resource>...</resource> block are passed to the resource constructor.
<resource name="zoo" class="DBI::Table::Row">
    <dsn>dbi:mysql:camelot</dsn>
    <table>animals</table>
    <key>id</key>
</resource>
This fragment defines an actor to implement some application logic. The 'name' attribute specifies a unique name for the actor within the enclosing service and 'module' specifies the implementing class (no prefix added, unlike 'class', which for actors would add the 'Camelot::Actor::' prefix). One additional constructor argument is also provided.
<actor name="keeper" module="My::Zoo::Keeper">
    <brush>soft</brush>
</actor>
This fragment defines a presenter named 'tt2' which uses the Template Toolkit via the 'Template' module.
<presenter name="tt2" class="Template">
    <include_path>/usr/local/camelot/zoo</include_path>
    <pre_process>header.tt2</pre_process>
    <post_process>footer.tt2</post_process>
</presenter>
To pull it all together, we can now define a service named 'animal' which acquires an animal record from the 'zoo' resource (DBI table) using the value of the current 'animal_name' item as an argument, and stores the returned data in the 'animal' item. This is then passed to the 'keeper' actor which performs some arbitrary processing on it, identified here as the action 'groom'. Finally the data is presented via the 'animal.tt2' template as a view.
<service name="animal">
    <acquire resource="zoo" name="animal" item="animal_name"/>
    <invoke actor="keeper" action="groom" item="animal"/>
    <present presenter="tt2" view="animal.tt2"/>
</service>
The server and interface modules provide the general mechanism for a client to invoke Camelot services. The simplest example, by virtue of having the closest direct mapping to the Camelot architecture, would be invoking a service via a web server.
Here we can assume that there is a direct 1:1 mapping between the URL
path and the Camelot
'animal' service defined above. In
fact, this is the only supported mechanism for service mapping at the
time of writing, but the facility exists to provide custom mapping rules
and/or modules and will be further extended at a later date.
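For example, with a direct URL-to-service mapping in place, the 'animal' service defined earlier might be invoked via a URL such as the following (the hostname and path prefix here are assumptions for the sake of illustration):

```
http://www.example.com/animal?animal_name=camel
```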
Any data values provided are prepared and stored in the request in a simple hash array. Initially, this will contain the variable 'animal_name' set to the value 'camel', as specified by the CGI parameter 'animal_name=camel' at the end of the URL. As resources are acquired they are added to the data store as items identified by the 'name' attribute of the corresponding directive:
<acquire resource="zoo" name="animal" item="animal_name"/>
In this example, the 'item' attribute indicates that the value of the existing data item 'animal_name' (i.e. 'camel') should be passed as an identifier to acquire the 'zoo' resource, and the returned record should be added to the data store as the 'animal' item.
Presenters are passed a reference to the data store when asked to present a view. This then defines the template variables or other source data from which they can generate a specific view. The 'animal.tt2' template used to create the final view in the earlier example can safely assume that the 'animal' template variable contains valid animal data which can be displayed via simple template directives. For example:
[% INCLUDE html/header title="Animal Zoo: $animal.name" %]
<h1>[% animal.name %]</h1>
We like the [% animal.name %] because:
<ul>
[% FOREACH v = animal.virtue %]
   <li>[% v %]
[% END %]
</ul>
[% INCLUDE html/footer %]
By default, view output is appended to the request output buffer. A 'name' attribute can instead be provided to have the output put back into the data store as a named item. This allows both input data and output text to be easily pipelined through any number of different operations or services:
<service name="animal">
    ...
    <present presenter="tt2" view="animal.tt2" name="animal"/>
    <present presenter="tt2" view="menu.tt2"   name="menu"/>
    <present presenter="tt2" view="cart.tt2"   name="cart"/>
    ...
    <present presenter="tt2" view="mainpage.tt2"/>
</service>
In this example, the final view template, 'mainpage.tt2', can access the 'menu', 'animal' and 'cart' items like any other variables, integrating the output generated by these individual views into a complete page. The following HTML template fragment shows a trivial example of how this might be achieved:
<table>
  <tr>
    <td> [% menu %] </td>
    <td> [% animal %] </td>
  </tr>
  <tr>
    <td colspan=2> [% cart %] </td>
  </tr>
</table>
Just as a service can acquire resources, invoke actions and present
views, it can also request other services. As with views, the default
behaviour is to append any output generated by a service to the current
output buffer. Otherwise, a
'name' attribute may be
provided to capture the output into a variable.
<service name="index">
    ...
    <request service="main_menu"    name="menu"/>
    <request service="show_product" name="product"/>
    ...
    <present presenter="tt2" view="index.tt2"/>
</service>
Services can also be nested hierarchically. In this next example, the 'animal' service is modified to contain two sub-services, 'show' and 'edit':
<service name="animal">
    <acquire resource="zoo" name="animal" item="animal_name"/>
    <service name="show">
        <present presenter="tt2" view="animal/show.tt2"/>
    </service>
    <service name="edit">
        <present presenter="tt2" view="animal/edit.tt2"/>
    </service>
</service>
The request is invoked from the outermost service inwards, correctly performing any workflow directives for the outer service before entering the inner service. Thus the server now implements two virtual URLs which might be invoked as:
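Assuming the same hypothetical hostname as before, the two nested services would map onto URL paths formed from the concatenated service names:

```
http://www.example.com/animal/show?animal_name=camel
http://www.example.com/animal/edit?animal_name=camel
```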
These are then executed as if written:
<service name="animal/show">
    <acquire resource="zoo" name="animal" item="animal_name"/>
    <present presenter="tt2" view="animal/show.tt2"/>
</service>

<service name="animal/edit">
    <acquire resource="zoo" name="animal" item="animal_name"/>
    <present presenter="tt2" view="animal/edit.tt2"/>
</service>
Resources acquired by a service are released automatically when the service is complete, a fact which holds true for nested services and services called from within other services. Thus, a service can be considered an atomic action which either fully succeeds or fails gracefully. This allows complex service interactions and inter-dependencies to be built which should always perform in a well defined manner according to the original intention of the system architect.
Given the self-contained nature of a service, it is entirely possible to define service objects which implement their own fully custom behaviours. A service could interact with a remote system, import data from an external source, execute code in another programming language, and so on. As long as it implements the consistent and simple interface that Camelot expects then there is no limitation on the kind of underlying operation that it might perform.
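A fully custom service might be sketched along the following lines. The package layout and the request() entry point are assumptions about the interface Camelot expects, made for the sake of illustration rather than taken from its actual definition:

```perl
# Hypothetical sketch of a custom service class; the method
# names are assumed, not part of the documented Camelot API.
package My::Remote::Service;

sub new {
    my ($class, %args) = @_;
    return bless { %args }, $class;
}

# Assumed entry point: receive the request object, perform some
# arbitrary underlying operation (here, fetching data from a
# remote system), and append the result to the output buffer.
sub request {
    my ($self, $request) = @_;
    my $data = $self->fetch_remote_data($request);
    $request->output($data);
    return 1;    # report success so resources are released normally
}

1;
```

Because the service honours the same interface as any other, it can be nested, pipelined and configured in XML exactly like the built-in services shown earlier.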
The general architecture of Camelot offers a number of important benefits. Firstly, it promotes a clear separation of concerns between data, application logic and presentation elements. This allows these different items to be developed and maintained in isolation, typically making them smaller and simpler and thereby providing less complex code in which bugs can hide. It also increases the likelihood that these focussed components will be more generic and reusable than their hopelessly intertwined counterparts.
Secondly, the facility to define layered services allows combinations of these different core components to be easily interconnected as required. Small, reusable foundation services can be defined on which larger, more complex services can be built. Thus, a system architect can adopt a fine, medium and/or coarse grained approach to building services, whatever is most appropriate to the task at hand.
Thirdly, the central configuration repository promotes a very high level view of an application server, allowing the original implementor and subsequent maintainers to see, understand and modify the big picture without worrying about too much of the specific detail. The abstraction of complex details behind simple resource, actor and presenter interfaces makes this possible.
Finally, this abstraction has a further benefit in allowing the system itself to be more intelligent in applying caching or other data management strategies to its components. There are numerous cross-cutting aspects of the system such as caching, error handling, logging, security (or more generally, role playing), and so on, that should be defined as configurable and/or customisable parts of the framework itself. Not only does this further reduce the amount of "housekeeping" code that individual components must implement, but it also allows an entire strategy for error logging (for example) to be replaced across an entire server, or subset of services, by changing an XML configuration definition in just one place.
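One might imagine such a server-wide logging strategy being expressed with a single configuration element along these lines; the element and attribute names here are invented for illustration and are not part of the current prototype:

```xml
<!-- Hypothetical: swap the error-logging strategy for the
     whole server by editing this one definition. -->
<logger name="errors" class="File">
    <file>/var/log/camelot/error.log</file>
</logger>
```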
Camelot exists in the form of a simple, but working prototype written in a thousand or so lines of Perl code. This has been built as more of an experimental platform to expound some of the ideas presented in this paper, than as a working server ready for deployment in the real world. In particular, some of the peripheral or more complex features like error handling and data caching are yet to be implemented in anything other than their current skeletal form. Furthermore, there are a number of functional requirements identified here that are not explicitly catered for, although it is anticipated that the basic architecture will allow them to be incorporated in the future as a natural by-product of further development. It is quite likely that significant design, architecture and/or implementation changes will also be made as current weaknesses are identified, new ideas are formulated and better solutions to existing problems are found.
Apart from the development of the core framework, there is also a significant amount of work required in writing or adapting code to deploy resources, actors and presenters. In many cases, this should involve the relatively simple process of writing thin wrappers around existing modules or adapting them to implement the required interfaces. Nevertheless, the number of different components that could or should be adapted to such a framework is overwhelming for the (part-time) efforts of an individual.