The days of writing NLMs are all but gone. Those beasts had a cryptic API, confusing documentation and scarce support from Novell. But starting with NetWare 4, Novell has opened its API to developers through the NetWare Developer Kit. Using the NDK, developers can code in ActiveX, C/C++, Java and various scripting languages. NetWare 5 has full support for JavaScript, NetBasic 6.0, Novell Script, PERL 5, REXX and ScriptEase.



Universal Component System

Novell’s UCS (Universal Component System) exposes NetWare services to scripting engines regardless of the underlying scripting language. UCS is a scripting-language interpreter that lets programmers develop scripting applications using various components implemented as JavaBeans, Java classes, CORBA (Common Object Request Broker Architecture) objects, UCX (Universal Component Extension) objects (C or C++ objects for NetWare/UCS), or even ActiveX and OLE Automation objects running on a remote Win32 machine. All access is achieved through a single homogeneous interface.

The system acts as a conversion layer between the programming languages and the reusable components (see “The Universal Component System,” page 139).

An example of a UCS-supported script is the snippet of PERL code at right. This code logs a user into an NDS tree under a particular context and lists the available objects. Many similar scripts are available on Novell’s NDK Web site.

Novell Script for NetWare

Your first stop for NetWare scripting is Novell’s own Novell Script for NetWare. The most noticeable feature is its complete compatibility with Microsoft’s VBScript. It supports true software component semantics, allowing the use of functions, properties and events much as you would use ActiveX components on Win32. Novell Script replaces the NMX architecture of NetBasic 6.0 with UCS.

Because Novell Script is both BASIC- and VBScript-compatible, you can leverage your BASIC, VBScript and Visual Basic programming skills while using the power of NetWare services. Unlike its Microsoft Windows counterpart, Novell Script seamlessly integrates with Java classes and nonvisual JavaBean software components.


PERL

PERL has its roots in Unix. However, Novell has ported PERL to NetWare, which is interesting given that the PERL community had ported it to almost every other platform with minimal help from vendors. PERL’s process-, file- and text-manipulation facilities make it ideal for tasks involving prototyping, software tools, database access, graphical programming, networking and system management.

NetWare 5 supports extended PERL syntax. Extensions may be developed in C and linked into your PERL scripts, making any component appear as merely another PERL function. You also can code your main functionality in C and then invoke PERL code on the fly. Novell has a special version of PERL that is UCS-enabled, so you can use any UCS object from PERL just as you would through the native NetBasic as shown in the UCS-supported script at lower left.


REXX

Ease of use was a primary goal in the design of REXX, making it usable by both the professional programmer and the casual user in need of a quick fix. Everything in REXX is handled as a text string, obviating data typing and conversions. REXX is used to execute and monitor programs, and it can take specified actions based on the output of the programs it’s monitoring. REXXware, developed by Simware, is an implementation of REXX for NetWare. On NetWare, REXXware’s features include clib calls for specific NetWare functions, a compiler, a scheduler, NDS library support and events support. REXXware also ships with a complete NDS administration application for helpdesk environments.

Rapid Application Development

As mentioned, UCS lets programmers use scripting languages to invoke external components to perform various OS functions. Novell takes this concept one step further with ready-to-use components that expose the majority, if not all, of the services available to NetWare programmers. These components come in two flavors: ActiveX components for use within COM (Component Object Model)-capable calling environments such as ASP (Active Server Pages), Visual Studio and WSH (Windows Scripting Host); and Beans for Novell Services. As the name implies, the latter are JavaBeans that can be used on the server to abstract significant NetWare networking services and data sources.

Win32 and Scripting

Almost any scripting language available for Unix is available for Win32 in native form: no OS middleware, no fuss; just install and use. Some of the more common languages available for Win32 are PERL, Python, Tcl/Tk and most Unix shells. If you are serious about scripting on Win32, you should become intimately familiar with WSH. Until recently, the only native scripting language available for Microsoft OSes was the command prompt and its associated scripting environment, which is limited at best. However, Microsoft recently changed that with WSH, a native environment that supports an assortment of scripting languages, much as UCS does on NetWare. WSH on its own is not a scripting language but a platform that enables the use of various scripting engines. The two engines available are VBScript and JavaScript. Microsoft expects other software companies to provide ActiveX scripting engines for other languages, such as PERL, Python, REXX and Tcl.

WSH controls ActiveX scripting engines, much as Microsoft Internet Explorer (IE) does on the client side and Internet Information Server (IIS) does on the server side for ASP pages. However, WSH has a much smaller footprint than either IE or IIS, making it ideal for running systems-administration scripts. Version 1.0 of WSH uses the script file’s extension to look up the appropriate scripting engine in the registry, then feeds the script to that engine (see “WSH 1.0 at Work,” below).

For example, a script file called rotate_logs.vbs would be run with the VBScript engine, and delete_logs.js would be executed using JavaScript. Running scripts under WSH 1.0 was of limited use, however, because no object model was available: For security reasons, neither the scripts nor the script engines could access operating system resources. Pressure from the WSH newsgroup opened Microsoft’s eyes to the weaknesses of WSH 1.0, and WSH 2.0 was introduced to rectify these shortcomings.
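The extension-to-engine dispatch described above can be sketched in a few lines. This is a minimal illustration of the idea only, not WSH’s actual registry lookup; the engine names are simply those mentioned in the text.

```python
# Sketch of WSH-style dispatch: pick a scripting engine based on a
# script file's extension. WSH 1.0 does this via the registry; here a
# plain dictionary stands in for that lookup.
import os

ENGINES = {
    ".vbs": "VBScript",
    ".js": "JavaScript",
}

def pick_engine(script_path):
    _, ext = os.path.splitext(script_path)
    try:
        return ENGINES[ext.lower()]
    except KeyError:
        raise ValueError("no scripting engine registered for %r" % ext)

print(pick_engine("rotate_logs.vbs"))  # VBScript
print(pick_engine("delete_logs.js"))   # JavaScript
```

The lowercasing of the extension mirrors the case-insensitive file naming of Win32.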

Administering Windows

ADSI (Active Directory Service Interface) abstracts the capabilities of various directory services from different network vendors to present a single set of directory-service interfaces for managing network resources. Administrators and developers can use ADSI to manage the resources in a directory service, no matter which network environment contains the resources. ADSI lets administrators automate common tasks such as adding users and groups, managing printers and setting permissions on network resources. Using ADSI, an administrator can develop easy-to-understand scripts to manage user accounts without relying on custom C++ or VB objects (see “Sample Add-User Script” below).

PERL, Python and Tcl for NT

PERL 5 for Win32 offers all the features available to the Unix programmer, plus it has specific functions to manipulate the Windows NT event log and to query the system registry. Additionally, modules allow for manipulation of user accounts and groups. Recent builds of PERL 5 require no registry entries, and their DLLs don’t have to be installed under the Windows folder, so they can easily be shared from a network drive. The latest stable version of PERL is 5.6. Larry Wall, the creator of PERL, recently announced his intention to develop PERL 6 from scratch.

Another possibility for writing your administration scripts is to use Tcl/Tk. Tcl was created by John Ousterhout and counts more than half a million programmers among its users. Tk is a popular GUI toolkit that lets programmers create graphical programs very quickly. Tcl/Tk 8.3 is the latest stable release of the product and is available for a plethora of platforms; its installation on Win32 is a breeze. The latest downloads, along with useful Tcl extensions developed by fellow programmers, are available on the Tcl Web site.

I found Python easier to learn than PERL. Available for many platforms, Python is seeing its following grow rapidly. Installation on Win32 is a snap, and documentation is abundant. You can run it easily from the command line, just as you can PERL and Tcl. I do wait for the day, however, when all three languages can be plugged into WSH, simplifying the work of Win32 administrators. The Python Web site offers more information, such as the fact that Python is named not after the snake but after the British comedy troupe Monty Python.


When I was young, my parents put labels on items around the house as a tool to help me learn to read. For example, they’d attach a card to our table that read “table” in big letters. I’d see a familiar object and an unfamiliar word and make an association, thereby establishing what the word meant.

For the most part, XML does just the reverse. The idea is to use familiar labels on unfamiliar objects, establishing what the objects mean. XML provides neither the meaning nor the set of familiar labels. It’s simply a standard way of attaching labels.

The fact that XML doesn’t provide the set of labels is the key to its extensibility. XML takes advantage of a lesson learned from its predecessor, the Standard Generalized Markup Language (SGML): No single set of labels will suit all applications. XML, which like SGML describes the structure and content of documents, doesn’t provide the meaning; this is what people mean when they say that XML doesn’t have any semantics. However, XML does actually have some semantics: It defines what characters such as “<” mean in the context of an XML document. XML just doesn’t tell you the meaning of the labels: the elements and attributes.

Elements and attributes

People often take for granted that the meaning of elements and attributes isn’t inherently expressed in an XML document. They see a “price” element and suggest that it’s obviously a price. But this is obvious only because we’ve learned what the word “price” means (perhaps because our parents labeled things around the house for us when we were young). Jon Bosak of Sun Microsystems, who pioneered the XML specifications produced by the World Wide Web Consortium (W3C), frequently demonstrates this by showing an XML book catalog where the elements and attributes are in Japanese.

To a non-Japanese speaker, the elements and attributes are meaningless. The only reason we have any notion of the meaning of an element is that its name is familiar to us.
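This is easy to demonstrate with any XML parser. In the sketch below (using Python’s xml.etree.ElementTree, with invented, romanized element names standing in for Bosak’s Japanese catalog), the document parses perfectly well even though its labels carry no meaning for the machine:

```python
# The parser checks only well-formedness; the element names are
# opaque labels, and nothing in the document says what they "mean".
import xml.etree.ElementTree as ET

doc = "<katarogu><hon><daimei>XML Handbook</daimei></hon></katarogu>"
root = ET.fromstring(doc)

print(root.tag)                      # katarogu
print(root.find("hon/daimei").text)  # XML Handbook
```

Any meaning the output has exists only in the reader’s head, or in the code written to process these particular names.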


So how does a computer know the meaning of an element in an XML document, or at least what to do with that element? The key is familiarity. There’s a commonly held misconception that a schema is the magic piece that tells us what the elements and attributes mean. This simply isn’t true. A schema, first and foremost, gives you a set of element and attribute names along with constraints on where they can be used in a document and what they can contain. In terms of document creation, a schema provides a blueprint (e.g., when you create an “invoice,” it must contain a “date”). In terms of data interchange, a schema provides a contract (for example, when you send an “invoice,” it must contain a “date”). The schema doesn’t indicate to a machine what either “invoice” or “date” mean.
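The contract idea can be sketched as a purely structural check. The element names below are hypothetical, and the code knows nothing about what an “invoice” or a “date” actually is:

```python
# Minimal sketch of "schema as contract": verify that an invoice
# document contains a date element, without attaching any meaning
# to either name.
import xml.etree.ElementTree as ET

def satisfies_contract(xml_text):
    root = ET.fromstring(xml_text)
    return root.tag == "invoice" and root.find("date") is not None

good = "<invoice><date>1999-11-30</date><total>42.00</total></invoice>"
bad = "<invoice><total>42.00</total></invoice>"

print(satisfies_contract(good))  # True
print(satisfies_contract(bad))   # False
```

A real schema language expresses such constraints declaratively, but the point stands: the check is about structure, not meaning.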

The meaning of an element is generally conveyed in prose definitions accompanying the schema. In practice, the meaning is determined by the code that processes XML documents following a particular schema.

Some schema languages–and the W3C XML Schema language in particular–introduce object-oriented notions of extension of types. This can be used simply to reuse common syntactic constraints, but is far more powerful in providing the possibility for semantic relationships to be expressed in a schema. It’s possible in some schema languages, for example, to indicate that an “author” is a “person.” This doesn’t actually give the meaning of “author.” Ultimately, the machine needs to know what to do with the base types in order to take advantage of the semantic relationships expressed.

One way to improve interoperability is to move away from the development of monolithic schemas that attempt to cover all possibilities, even to move away from schemas that define entire documents. Instead the focus should be on more modular schemas that can be plugged together to describe a particular class of documents. Such modularity could extend to the individual type. For example, the ISBN Agency could define a schema with a single type, ISBN, and the U.S. Postal Service could define another with a single type, USZipCode. Either type could then be referenced wherever an ISBN or U.S. ZIP code is needed within another schema.

The challenge with reuse is that different people have different views of the world and will want to model a schema differently. But this is no less true with monolithic schemas than atomic ones. Small schema pieces have a far better chance for reuse, especially in cases such as the two noted above, where a particular body is the obvious one to define the types.

The notion of small, reusable schema pieces highlights the role of registries and repositories. The work of the Organization for the Advancement of Structured Information Standards (OASIS) in this regard is particularly significant, and plays a large part in achieving interoperability.

XSLT and transformation

One technology often cited in discussions about interoperability is transformation. The idea is if you encounter an XML document following a schema you don’t understand, you can transform it to one that you do understand using something such as Extensible Stylesheet Language Transformations (XSLT). Note that XSLT shouldn’t be thought of as necessarily the best tool for this job. XSLT was initially designed for transforming structured data into a presentational form. Although XSLT is useful as a general transformation language, don’t overlook alternatives, such as writing specialized code.
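As a sketch of the “specialized code” alternative, the following maps one vocabulary onto another with ordinary Python. Both tag sets are invented for illustration:

```python
# Hand-written vocabulary translation: read a document in one
# (hypothetical) schema and emit the equivalent document in another.
import xml.etree.ElementTree as ET

def translate(source_xml):
    src = ET.fromstring(source_xml)
    dst = ET.Element("book")
    ET.SubElement(dst, "title").text = src.findtext("name")
    ET.SubElement(dst, "price").text = src.findtext("cost")
    return ET.tostring(dst, encoding="unicode")

result = translate("<item><name>XML Handbook</name><cost>39.95</cost></item>")
print(result)  # <book><title>XML Handbook</title><price>39.95</price></book>
```

An XSLT stylesheet could express the same mapping declaratively; which approach is better depends on how complex the mapping is and who has to maintain it.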

Also, translating one vocabulary to another isn’t always trivial. There isn’t always a clear mapping between elements: The structure may be made explicit in one schema, while it may be left implicit in another. Certain element types in one schema may have no correspondence in another.

The whole transformation issue is made significantly simpler if schemas derive from common libraries of base types, another reason for the development and publication of such types.

Where to from here?

The issues and challenges raised here shouldn’t detract from the fact that, with XML, you have a multi-purpose, internationalized foundation for data formats. Interoperation is made significantly easier just by the common syntax, and there are few arguments for not making information available in XML. One thing you shouldn’t overlook is the widespread interest in XML, which is facilitating increased understanding of XML by a large group of people. This greatly improves the ability to maintain XML-based systems in the future.

James Tauber is the director of XML technology at Bowstreet, where he’s responsible for educating developers, customers, partners, and employees about XML. In 1996, Tauber joined the World Wide Web Consortium (W3C) working group that helped produce the XML specification. More recently, Tauber was principal editor of the Directory Services Markup Language (DSML), an XML schema for representing directory contents and structure for business-to-business Internet commerce. Tauber serves on the W3C’s XML Core Working Group and Extensible Stylesheet Language (XSL) Working Group, and he helped author the specification for Canonical XML (for use in digital signatures). He created FOP (Formatting Object to PDF Translator), the first implementation of the Extensible Stylesheet Language that formats XML documents for print. He’s also writing a reference book for XML developers and maintains XML and XSL information Web sites.


The American Institute of CPAs has joined with the Big Five and several technology companies to develop Internet-based program code that promises to revolutionize financial reporting.

The XFRML Consortium is developing “XML-based financial reporting markup language,” also known as XFRML, a markup language that formats financial reports for transport across the Internet and viewing on browser-equipped computers. The tool is a derivation, or “specification,” of the Extensible Markup Language, which is fast supplanting HTML as the language for Web-based documents.

The group’s members expect their language to evolve into “the digital language of business,” by first establishing itself as the business community’s standard for preparing financial reports, and publishing, exchanging and analyzing the reports’ information over the Internet. At the same time, it will make the delivery of that information more rapid and less expensive than ever dreamed of, they say.

While HTML has allowed for financial statements to be posted on and downloaded from the Net, XFRML will allow investors, lenders and others to download very detailed information from the statements, such as depreciation expenses for a specific time period. It will also allow for the downloading of detailed groupings of information, such as all of a certain year’s depreciation expenses reported by all companies in the same industry.

“Accounting is the language of business, and XFRML will make it easier to share information expressed in that language by permitting computer applications to understand our vocabulary,” said AICPA chief executive Barry Melancon.

Moreover, the consortium’s members say that XFRML will enable the accounting industry and financial officers to realize the Internet’s full potential. “Everyone’s talking about information moving seven times faster on the Internet, and now we have the capability to make our financial data take advantage of that speed,” said Wayne Harding, vice president of Great Plains Software, a prominent consortium member.

While the consortium expects to have a functional language available by next March, at which time it may kick off a nationwide marketing campaign, it has developed a prototype that is being used by Great Plains to present its 1998 financial results on its Web site.

The statements initially targeted for use with XFRML include balance sheets, income statements, statements of equity, statements of cash flow, notes to the financial statements and the accountant’s report.

The remaining consortium members are: financial report software developer FrX Corp.; Internet-based financial information distributors Inc.; Interleaf Inc., a developer of XML-based technology; and the Woodburn Group, a CPA/tech consulting firm based in Minneapolis.

Along with providing easier access to financial information, XFRML theoretically will also provide for easier delivery by formatting information so that it automatically meets regulators’ reporting requirements. The consortium says that XFRML means that information entered once can be “rendered” as a printed financial statement, a Web site document or a specialized report.

The consortium expects XFRML to ultimately become the delivery language for many reports that are the bread and butter of many CPA firms, including IRS tax returns and the Securities and Exchange Commission’s 10-Qs and 10-Ks.

CPAs themselves are expected to be key players in getting the business community, particularly their publicly traded clients, to adopt XFRML. A formal strategy has not yet been decided, but the Big Five will likely set a model that smaller CPA firms will follow in leading clients to make XFRML the standard for all businesses.

As lead member of the consortium, the AICPA owns the XFRML license. Its information technology group leader, Louis Matherne, has indicated that the institute will issue individual licenses for free, in the interests of getting XFRML adopted on a wide scale.

The consortium will be meeting in the coming months to further define development of the language and to hash out plans for its delivery. Still to be determined are the CPAs’ exact roles in making the technology known to their clients and the sectors to whom the message will be targeted.

The consortium is the latest of dozens of industry groups around the world that are developing Internet-based delivery languages specific to their industries by building on XML.

While HTML tells browsers how to display type and images, XML has taken things a step further by also describing the nature of the content and indexing it so that users can retrieve very specific information. Industry-specific applications of XML drill down even further by indexing and allowing for the retrieval of details; in XFRML, that means details from within financial statements.
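The kind of drill-down described above can be sketched with a few lines of Python. The element names here are invented for illustration; the real XFRML tag set was still being defined at the time of writing:

```python
# Retrieving one specific figure from a tagged financial statement.
# The markup and values below are hypothetical.
import xml.etree.ElementTree as ET

statement = """
<financials company="Example Corp" year="1998">
  <incomeStatement>
    <depreciationExpense period="Q4">125000</depreciationExpense>
  </incomeStatement>
</financials>
"""

root = ET.fromstring(statement)
expense = root.find(".//depreciationExpense[@period='Q4']")
print(expense.text)  # 125000
```

With HTML, the same number would be buried in presentation markup; the tags here make it directly addressable.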

Other vertical players with XML initiatives include the insurance industry, which is creating XML standards for policy information, and the Newspaper Association of America, which is creating standards for classified advertising data.

One major hurdle and fear for the XFRML Consortium is the potential for other players to develop an XML specification for financial reports. If competing financial reporting specifications are created, it could splinter business users and ultimately dilute the power of the initiative.

“What will make this work is everyone using the same specification,” said Matherne. “The real power of this will be how it’s used – [as well as] the broad acceptance of XFRML as a common language.”

PricewaterhouseCoopers, an XFRML Consortium member, is already working with investment banker JP Morgan on creating an XML specification related to the investment industry. However, that is not a direct threat to XFRML, according to consortium members.

One apparent drawback to the XFRML Consortium’s efforts has been its inability to attract vendors of enterprise business software applications. For example, enterprise vendor SAP has already established itself as a leader in developing vertical XML specifications.

Matherne said that he hopes that the enterprise vendors, such as SAP and Oracle, will connect with XFRML as it gains more momentum. “The enterprise vendors are not a population we are ignoring by any stretch,” he said.

Consortium member Microsoft operates a forum for XML development initiatives, known as BizTalk, which already includes most of the enterprise software community. Microsoft accounting market manager Christy Reichhelm heartily praised the XFRML effort and said that it will likely become better known to the enterprise vendors through BizTalk.

“It’s a great move on the AICPA’s part, saying we can make some real changes in the industry and make life easier for a lot of people,” she said.

Fellow consortium member FrX Software, the Denver-based report writer technology developer, held off on its Internet development efforts until XML became more established. A soon-to-be-released update of its Visual Financial Reporting software product features the ability for reports generated from general ledgers to automatically format in XML for delivery over the Internet.

“We skipped HTML and waited for XML, and now XFRML is taking things even further and is better for business,” said Robert Blake, an FrX product manager. “A lot of people are waiting for XFRML to get a full green light because it opens the door for lots more.”

What The Heck Was DAML?


It may seem that the last thing the world needs is another Web standard, but there is always room for an intelligent addition. A new language known as DAML addresses an important, unmet need: making Web sites understandable to programs and nontraditional browsing devices.

DARPA (Defense Advanced Research Projects Agency) Agent Markup Language is a step toward what Tim Berners-Lee, the creator of the World Wide Web, calls a “semantic Web,” where agents, search engines and other programs can read DAML markup to decipher meaning, rather than just content, on a Web site. A semantic Web also lets agents utilize all the data on all Web pages, allowing them to gain knowledge from one site and apply it to logical mappings on other sites.

Enhanced searching of this type would require a lot of groundwork: Once DAML is available, authors at individual sites would have to add DAML to their pages to describe the content.

Jim Hendler, a University of Maryland professor who is one of DAML’s creators, told PC Week Labs that before long the Web will involve many machine-to-machine connections with the help of semantic languages such as DAML. Hendler works on DAML for DARPA, a research and development organization for the Department of Defense.

Although DAML is still in an early stage, Hendler has begun working with Berners-Lee and the World Wide Web Consortium to make sure that DAML fits with the W3C’s plans for a semantic Web, which would be based primarily on RDF (Resource Description Framework), the W3C’s metadata technology for adding machine-readable data to the Web. Hendler said he expects to have a working draft of DAML available by the summer.

Basis in XML bodes well

Like RDF, DAML is based on XML (Extensible Markup Language), which should help it integrate with other Web technologies. A site developer would use DAML in much the same way that HTML metatags are used, describing content on a page using markup that is generally invisible to site visitors. A critical difference is that DAML markup would be easily understandable to DAML-enabled user agents and programs, whereas most metatags are proprietary and have no contextual meaning for general search applications.

A DAML-enabled agent doing a search for an expert on XML (see chart, right) might combine information from diverse sites to produce a result that would be missed by an agent or program in use today. On one page, the agent might discover the existence of an advanced university course on Web development that covers XML in detail. On another page, it might find the course number for this site. Then, on a third page, it might discover the name of the professor teaching the course. Although none of the sites individually has enough information to lead the agent to the professor’s name, the combination of the content described in DAML yields the enhanced search result.
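The three-page scenario can be sketched as a join over facts collected from separate pages. The page contents and names below are invented for illustration; a real DAML agent would extract such facts from markup rather than hand-built dictionaries:

```python
# No single "page" names the XML expert, but joining the facts
# across pages does, which is the essence of the scenario above.
pages = [
    {"course_topic": "XML", "course_name": "Advanced Web Development"},
    {"course_name": "Advanced Web Development", "course_number": "CMSC 498"},
    {"course_number": "CMSC 498", "professor": "J. Hendler"},
]

def find_expert(topic):
    name = next(p["course_name"] for p in pages
                if p.get("course_topic") == topic)
    number = next(p["course_number"] for p in pages
                  if p.get("course_name") == name and "course_number" in p)
    return next(p["professor"] for p in pages
                if p.get("course_number") == number and "professor" in p)

print(find_expert("XML"))  # J. Hendler
```

The hard part in practice is not the join itself but agreeing on the shared vocabulary that makes facts from different sites joinable, which is exactly what DAML aims to provide.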

Hendler said he also sees tremendous possibilities for DAML as an enabler for future devices hooked to the Web. For example, he described a scenario in which an appliance would be able to respond to an electronic recall or defect notice about one of its parts. A query initiated by the parts manufacturer would detect the part data in DAML on the appliance’s embedded Web connection.

Another area of use for DAML would be the ability to do queries that can cut across the varied and confusing jargon used by different industries. This would be especially useful in military queries, where, for example, the same jet could be listed as an F-14 or a Tomcat.

Some of the biggest barriers to technologies such as DAML are the lack of tools to create it and the dearth of user agents that understand it. An experienced coder would be able to insert DAML markup through an editor, but Hendler said the technology will reach its potential more quickly if page-creation tools simplify its use. An effective tool would step users through the process of describing the content on their pages, then insert the proper DAML automatically.

One advantage DAML may have over other emerging Web technologies is the involvement of DARPA, which has been instrumental in the creation of the Internet and many Internet technologies. Hendler expects DARPA to make all the basic tool sets for DAML available and to encourage the participation of the Web development community. Also, DARPA will offer support to those willing to create tools and user agents that enable DAML.

The Python Strikes!

Hot on the heels of my Comdex/Chicago session, at which Python architect Guido van Rossum spoke, version 2.1 of Python shipped, as did Programming Python, 2nd Edition, by Mark Lutz. Of course, Programming Python, 2nd Edition doesn’t cover Python 2.1, only Python 2.0, but that’s life in book publishing when the book’s subject progresses on Internet time.



Python 2.1

I have to say that I’m fascinated by the evolution of this language. Guido and his team are managing to serve a variety of constituencies without introducing incompatibilities.

For instance, as described in the What’s New document, Python 2.1 adds support for the following:

PEP 227: Nested Scopes
PEP 236: __Future__ Directives
PEP 207: Rich Comparisons
PEP 230: Warning Framework
PEP 229: New Build System
PEP 205: Weak References
PEP 232: Function Attributes
PEP 235: Case-Insensitive Platforms and Import
PEP 217: Interactive Display Hook
PEP 208: New Coercion Model
PEP 241: Metadata in Python Packages

In the list above, the PEP numbers refer to Python Enhancement Proposals, Python’s community process for evolving the language. Python 2.1 is the first release that was steered by the PEP process. PEP appears to be a good thing, although no one could mistake it for a democratic process: Guido still pretty much decides what makes it into the language and what doesn’t.

That isn’t a criticism, by the way; every emerging programming language needs a Czar to keep it coherent. ANSI/ISO committees are more useful later in the language’s life cycle, some would say much later.

Nested Scopes, Future Directives And Warnings

Three enhancements (PEPs 227, 230 and 236) need to be discussed together. Python’s design has historically made functional programming in the language rather awkward. I use the term functional programming here in the specific sense expressed in the comp.lang.functional FAQ: “Functional programming is a style of programming that emphasizes the evaluation of expressions, rather than execution of commands.”

For instance, Scheme is considered to be a functional language, as are Haskell and ML; C++ is considered an imperative language. That isn’t to say that you can’t do functional programming in C++, or that you can’t do imperative programming in Scheme. As we old-timers say, you can write Fortran in any language.

The point here is that, until now, Python had only three namespaces in which to resolve names: local, global and built-in. Functional programming makes heavy use of lambda expressions and nested functions, which really require inner functions to be able to see the local variables of outer functions. That capability is what has been added.

You may be wondering about lambda expressions. The idea and the term go all the way back to Lisp. In Python, the lambda expression yields an unnamed function that evaluates a single expression, and is often used for callback functions. Getting back to nested scopes: because adding nested scopes will break some existing code, Python 2.1 does not activate the feature by default. To write code that uses nested scopes, you need to activate it explicitly with a __future__ directive: from __future__ import nested_scopes.
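For example, under nested scopes an inner function can refer to a local variable of the function that encloses it. The sketch below runs as-is on versions where the behavior is the default; under Python 2.1 it needs the __future__ directive first:

```python
# With nested scopes, "n" inside add() resolves to the local variable
# of the enclosing make_adder() call, giving Python real closures.
def make_adder(n):
    def add(x):
        return x + n  # looked up in the enclosing scope
    return add

add5 = make_adder(5)
print(add5(10))  # 15
```

Without nested scopes, the reference to n would fail (or would have to be smuggled in via a default argument, the classic pre-2.1 workaround).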

In Python 2.2, nested scopes will be the default behavior. To warn people who have code that may break in the future, the current compiler also has a new warning framework (useful for other things as well), and will issue warnings for code that is incompatible with the new behavior. When the new behavior is turned on, the warnings become full syntax errors.

Anyway, with the new behavior, Python becomes a dandy functional programming language. It already had first-class functions, an exec statement, and an eval() function; the addition of nested scoping gives it some of the flavor of Algol or Scheme.
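A small taste of that functional flavor, using only features the column mentions (first-class functions, lambda expressions and eval()):

```python
# Functional style in Python: functions are values, so they can be
# built, combined and passed around like any other data.
compose = lambda f, g: lambda x: f(g(x))

double = lambda x: x * 2
increment = lambda x: x + 1

double_then_increment = compose(increment, double)
print(double_then_increment(5))      # 11
print(list(map(double, [1, 2, 3])))  # [2, 4, 6]
print(eval("double(21)"))            # 42
```

With nested scopes, compose works exactly as a Scheme programmer would expect: the inner lambda remembers the f and g it was created with.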