



— 2


Integrated Architecture


Applications depend on their operating system for a number of services, such as access to memory, disk storage, and the network. They can also depend on one another for services, as in the case of Systems Management Server, which uses SQL Server to maintain its data. With the exception of the elder member of the BackOffice family (Microsoft Mail Server), the components are designed to run under only one operating system (Windows NT). This may be a problem for sites that want to use BackOffice tools but have standardized on another operating system, such as one of the many variants of UNIX. However, it does have the advantage of enabling the BackOffice component developers to optimize their products for this single operating system and use its best features (such as Windows NT services) to provide better functionality.

This chapter presents this basic architecture for your consideration. This is one of the few computer environments where once you understand the basics of the operating system, you actually have a good understanding of how the various components are designed. In most cases, you have to learn the design philosophies, administrative philosophies, and user interface philosophies of a number of different vendors to run an application server. BackOffice simplifies this learning task, because some taskmaster at Microsoft has forced all the programmers (many probably against their will) to use a common design architecture. This lack of artistic creativity on the part of the hundreds of programmers who worked on BackOffice gives you a great advantage when you actually have to use the products and just want to know where the menu option or button to stop the background processes is located.

This chapter begins with an overview of the generic Windows NT architecture and then adds a discussion of how BackOffice fits into this architecture. However, I wanted to add a few more topics that go beyond the pure technical, behind-the-scenes features of Windows NT to cover some of the other component technologies that come into play when working with BackOffice applications. First, you look at the Windows interface. This may be obvious to some of you, but not everyone has spent years in the PC environment. This is actually a good technology to be familiar with, because Microsoft follows its own style conventions when designing its products. The next discussion covers some advanced technologies that Microsoft uses behind the scenes to enable applications to work together. These technologies are built into the operating system itself or are available as add-ons from the Microsoft World Wide Web site. The next three sections explore the security, monitoring, and administration tools provided in Windows NT. Rounding off this chapter are sections on the application programming interfaces that programmers can use to interface local applications with Windows NT services and a closing discussion on other forms of integration.

Overview of the Integrated Architecture


When Microsoft chose to label its new server and workstation operating system Windows NT, it ran the risk that many people would think it was just another upgraded version of the DOS/Windows product family. The good news is that while it can still run most DOS and Windows applications, it is a completely new operating system when you look under the hood. Although it was quite a challenge for the development team to write almost everything from scratch, this approach has a number of advantages and was probably the only way that certain design goals could be achieved.

What is the Windows NT operating system? The following are some of its highlights:



The 32-bit architecture provides a vast improvement in stability and multiprocessing capabilities.


Microsoft’s Design Goals


Before going on with a discussion of the Windows NT architecture, it is interesting to consider Microsoft’s stated design goals when it built this operating system. This was really Microsoft’s second attempt at a server operating system. In the early days, Microsoft partnered with IBM on the earlier versions of OS/2 (before the 2.0 release of OS/2). For a number of reasons, the two decided to go their separate ways; IBM kept the OS/2 name, and Microsoft chose to sell its product under the new name of Windows NT. The stated goals for this operating system were the following:

When you look at these goals, you find that they are echoed in the design of the other BackOffice products, not just Windows NT itself. This ensures that the other BackOffice servers will stay in synch with the operating system itself. It also extends their marketability, especially in multinational companies.

The Windows NT Architecture


Discussions of computer architectures can get into some really nasty details about the protocols used to signal between the various subsystems and other such stuff. My goal here is not to teach you enough information that you could rewrite the operating system from scratch given a good C++ compiler. Instead, my focus is to provide you with just enough information that you feel comfortable with the operating system, its services, and how the various BackOffice components interface with the architecture to make a working system. If you are interested in the details of the operating system, there are a number of technical papers on the Web, on the Microsoft TechNet CD-ROM, and also in print in the Windows NT Resource Kit from Microsoft.

Another question is whether this discussion covers Windows NT 3.51 or Windows NT 4.0. The basic answer is "yes." You learn about the architecture of Windows NT 3.51 and then some of the upgrades in the NT 4.0 product. Actually, if you understand the basics of NT 3.51, you have a really good handle on the architecture under NT 4.0. There were a number of technology upgrades; however, they tend to be things such as a change to the graphical user interface (to look more like the Windows 95 interface) and greatly improved utilities rather than fundamental changes to the underlying architecture. The major improvement is the addition of several high-performance graphics technologies that enable applications to go through a thinner layer of processing and access the capabilities of high-performance graphics cards more directly.

A simple starting point for this discussion is the three-layer model shown in Figure 2.1. At the top of this hierarchy are the applications with which we are all familiar. They are the user mode applications that include the BackOffice components, our word processors, and almost everything else that we interact with directly. Beneath this are the kernel mode services. The kernel of an operating system is its central processing logic and can be thought of as the heart of the operating system. The final layer is the hardware on which Windows NT is implemented. This is an important component that Windows NT has to adapt to, because it must support a number of different hardware platforms.

FIGURE 2.1. Three layers of the Windows NT architecture.

User Mode Applications


Let's look at each of these layers in a little more detail, starting with the user mode applications, because they are probably the simplest and most intuitive ones to deal with. Figure 2.2 shows a more detailed drawing of the user mode applications of most interest for these purposes. The system has at its heart the Win32 subsystem. Recall that Windows NT is a 32-bit operating system (as opposed to Windows 3.1, which uses smaller, 16-bit addresses). The early Intel chips (the 8086 and 80286) supported only 16-bit words, so operating systems had difficulty referencing large amounts of memory and using more complex instructions. However, the newer chips, such as the Intel 80486 and Pentium processors, support 32-bit instructions and memory referencing, which enables them to run a more complex operating system effectively. The Win32 subsystem is responsible for executing all the 32-bit applications that are on the market today.

FIGURE 2.2. User mode applications.

There are actually more 32-bit applications than you might imagine. Windows 95 is a (mostly) 32-bit operating system that executes 32-bit code. I have had relatively good luck getting Windows 95 applications to work well under Windows NT. Because this is a BackOffice book, I also have to mention that BackOffice is a 32-bit application suite that is specifically designed to take advantage of the new capabilities (for example, use of common memory areas for speed in such applications as Exchange Server and SQL Server).

The next component in the user mode application area is the Virtual DOS Machine (VDM) shown in Figure 2.3. You may also hear a reference to Windows on Win32 (WOW) when talking about this technology; WOW is the layer that runs 16-bit Windows applications inside a VDM. A lot of Windows NT references do not even show this component on the basic architecture chart. Perhaps they want to stress the sexier components in the architecture, such as 32-bit processing or the capability of running OS/2 applications. I have run across a number of good old 16-bit DOS and Windows applications that are still out there, however, and form important components in some users’ architectures. Therefore, let's look at the VDM here.

FIGURE 2.3. Windows NT’s virtual DOS machine.

The basic concept is simple. You simulate a 16-bit DOS environment within the Windows NT operating system. This is nice because it isolates the relatively ill-behaved DOS applications to a separate space, thereby preventing them from getting near the more sensitive parts of the operating system. Remember, DOS was a relatively simple operating system, which did not provide a lot of services to the application developers. Therefore, these creative folks made up their own services and worked hard on tricks to increase performance. However, problems arose when multiple application and driver vendors tried to use the same tricks and they conflicted with one another (often crashing the operating system).

This explains why not all Windows 3.1 applications run under Windows NT. There are some vendors who had a job to do and did things that were outside the boundaries of the operating system. The Windows NT designers could not simulate ways to handle all these possible tricks. Therefore, although well-behaved Windows 3.1 applications will run without problems, other 16-bit applications may not run at all. Your job is to know which ones work (or perhaps just upgrade your application suite to all newer 32-bit applications).

The next set of applications in the user mode that are of interest to us are the OS/2 and POSIX subsystems. The OS/2 subsystem is designed to run applications written for the OS/2 operating system. Remember, Windows NT started out from the same project work as OS/2, and compatibility was a logical choice at the time. Today, however, there are many more Win32 applications on the market than there are OS/2 applications. The POSIX subsystem derives from the UNIX world. POSIX is the United States government standard for interaction with an operating system (both programmatic and command-line) and is based on UNIX. It is required in many government contracts and can be a selling point if you are supporting one of these contracts. Anyway, this subsystem enables applications that are written to the POSIX standard to interact with the Windows NT operating system.

The final subsystem of interest in the user mode is the security subsystem. You learn more details of the Windows NT security model in Chapter 3, "Security Environment." The key point to take from this discussion is that the security subsystem exists in the user mode, far away from the kernel and operating system internals. It interacts with the security components in the operating system in a controlled (and secure) manner to control user access to information via the logon process.

Kernel Mode Components


The heart of the Windows NT operating system is the kernel and other components that provide the basic operating system services to the users. Figure 2.4 shows these basic components. The isolation of these key system processes from general user processes enables Windows NT to achieve a higher degree of stability than its predecessors. Of course, if you have a problem with any of these processes, your system is in deep trouble and you get the dreaded blue screen (which indicates a system crash; I have always found it to be associated with an incomplete installation of Windows NT or a failure in key hardware components such as the system memory).

FIGURE 2.4. Kernel mode components.

The system services component is a layer that provides the connection between the user mode subsystems and the other components in the kernel mode. Its main function is to route requests and responses correctly. It is usually drawn to be a relatively thin box on drawings to show that it is not a complicated layer and that it does not provide a lot of overhead to the applications.

The next component of interest is the Input/Output (I/O) Manager, whose components are shown in Figure 2.5. At the lowest level of the I/O Manager, you have a series of device drivers that function to take a standard series of operations (an interface to the higher-level drivers) and turn them into the detailed set of signals that it takes to get a particular device (for example, a CR-563 CD-ROM drive) to do the job. This is an excellent design feature, because you can isolate the details of how a particular device works in a single software component that has a defined interface to the rest of the operating system. Above the device drivers, you have network drivers (specific to a networking environment), file systems (which control how the data is stored on the device), and cache managers (because most peripheral devices are much slower than the processors to which they are attached).

FIGURE 2.5. Components of the Input/Output Manager.

Another important concept is the input/output queue. In a multitasking system, a number of applications could want to access a particular device (for example, your C: drive) at the same time. You need a mechanism to control who gets the device first and track which jobs are waiting to get access. That is the purpose of the queue (a word rarely used in American English, except in computer circles). The I/O Manager is also responsible for maintaining these queues. Actually, when you get to Chapter 12, "Windows NT Performance Tuning," you will find that the length of this queue is a good overall measure of how well the system is keeping up with its I/O demands.

The final subject that needs to be discussed related to the I/O Manager is the difference between synchronous and asynchronous I/O (see Figure 2.6). Quite simply, synchronous I/O means that the application makes a request for an input/output operation and then waits for the result before proceeding further. This is necessary when you are loading the next statement that you plan to execute off a disk drive (you cannot execute it until after you load it). There are other types of operations where you do not need to wait around for the operation to complete. Suppose, for example, that you want to save some partial results of your calculations to disk. Why wait for this operation to complete? Instead, you can just tell the operating system’s I/O Manager what to write and then go on about your business, trusting that the I/O Manager will get the job done for you. This is referred to as asynchronous I/O. It is used a lot by the operating system and by applications that have been designed in an environment where every ounce of performance is worth the effort.
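To make the distinction concrete, here is a minimal Win32 sketch of asynchronous (overlapped) I/O in C. It is an illustration only, with an assumed file name (results.dat) and minimal error handling, not code from any BackOffice component.

#include <windows.h>

int main(void)
{
    /* Open a file with the overlapped flag so that writes can
       proceed asynchronously instead of blocking the caller. */
    HANDLE hFile = CreateFile("results.dat", GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_FLAG_OVERLAPPED, NULL);
    if (hFile == INVALID_HANDLE_VALUE) return 1;

    char buffer[] = "partial results of the calculation";
    OVERLAPPED ov = {0};
    ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL); /* signaled when done */

    /* Start the write; ERROR_IO_PENDING means the request has been
       queued by the I/O Manager and we can go about our business. */
    if (!WriteFile(hFile, buffer, sizeof(buffer), NULL, &ov) &&
        GetLastError() != ERROR_IO_PENDING)
        return 1;

    /* ... do other useful work here while the transfer completes ... */

    /* Eventually collect the result (TRUE = wait if still pending). */
    DWORD written;
    GetOverlappedResult(hFile, &ov, &written, TRUE);

    CloseHandle(ov.hEvent);
    CloseHandle(hFile);
    return 0;
}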

The next component in the Windows NT Executive is the Process Manager. This is the component that creates and destroys the various processes that may be running on your system. It can be thought of as the key component that enables Windows NT to be a multitasking operating system. Typically, applications consist of a single process, although there are applications that actually start up multiple processes that coordinate their activities to get the job done. A key Windows NT design feature is that a process has its own physical memory areas and other system resources that are separated from those of all other processes. This helps to prevent a single poorly written application from taking down the entire system (I definitely like this idea).

FIGURE 2.6. Synchronous and asynchronous I/O.

Having multiple processes is a good start for an operating system, but you can do a little better, especially on computers where you have multiple central processing units installed. Many computer tasks can be broken down into components that can function in either a parallel (each doing its own work) or serial (one task’s output is the other task’s input) fashion. It can be thought of as an assembly line within an application. Anyway, if your application can be broken down into this series of tasks, it might be nice to assign different tasks to different central processing units to get the job done more rapidly. In the Windows NT (actually, Win32) environment, these little tasks are referred to as threads. Win32 application processes can be written to spawn multiple threads. If there are multiple processors available, threads can be assigned (scheduled) to processors as needed to complete the tasks at hand.
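As a rough sketch of how this looks to a Win32 programmer (an illustration, not production code), the following C program spawns several worker threads and lets the kernel schedule them onto whatever processors are available:

#include <windows.h>
#include <stdio.h>

/* Each thread handles its own slice of the overall job. */
DWORD WINAPI Worker(LPVOID param)
{
    printf("thread %d doing its share of the work\n", (int)(INT_PTR)param);
    return 0;
}

int main(void)
{
    HANDLE threads[4];
    DWORD tid;
    int i;

    /* Spawn four threads; on a multiprocessor machine they can
       run in parallel. */
    for (i = 0; i < 4; i++)
        threads[i] = CreateThread(NULL, 0, Worker,
                                  (LPVOID)(INT_PTR)i, 0, &tid);

    /* Wait for all of them to finish before exiting. */
    WaitForMultipleObjects(4, threads, TRUE, INFINITE);
    for (i = 0; i < 4; i++)
        CloseHandle(threads[i]);
    return 0;
}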

The next component in the Executive is the Local Procedure Call Facility. It is interesting to note that Microsoft has set up a client-server architecture within the Windows NT operating system. The user applications and the Windows NT environment subsystems communicate with one another through a set of standardized messages. Most of these messages are generated when an application calls an API. The Local Procedure Call facility serves as the traffic cop for these messages.

The Local Procedure Call facility is especially interesting for its client-server nature. Many applications are going to a client-server architecture to improve performance by leveling the processing load and enabling individual components to become more specialized. The use of this architecture within the operating system offers the advantages of client-server computing environments, which are based on multiple computers on some form of network. It also minimizes the chief disadvantage of client-server architectures: the network traffic bottlenecks are not noticeable, because communications take place over the high-speed internal buses and memory structures within one computer.

The next component is the Object Manager. Object-oriented technology had just started to come into its own when Windows NT was under construction. Therefore, you will see some object-oriented concepts creeping into the operating system along with its associated terminology. An object can be thought of as a combination of information and actions that can be associated with that object. For example, if you are working on medical records for a patient, you might consider the patient object to be a collection of data (his blood pressure since being admitted, his classification by his doctors, and even whether he has insurance and a good record for paying his bills). Associated with that collection of data is a series of actions that you could take on the patient—discharge him, take out his gall bladder, and so forth. Depending on the state of the patient, some actions may not be applicable (for example, you would not want to perform a heart transplant action if the doctors have diagnosed a broken toe). Anyway, those are the crude basics behind objects. With this understanding (perhaps you already knew far more than I do about the subject), I can discuss Windows NT objects and how the Object Manager deals with them.

Objects within Windows NT are things such as processes, threads, ports, files, directories, and a number of other things associated with the internal functioning of the operating system. You can have multiple instances of an object (you may have the Windows NT performance monitoring tools open in three different windows on your desktop monitoring three different sets of parameters). Object Manager’s job is to keep track of what objects are out there, how the operating system refers to them, when they are being used, and so forth. Just looking at the process example alone, you can see how important it is to keep track of what is going on with your operating system and how to control access to those resources.

Next on the list of Windows NT Executive components is the Virtual Memory Manager. Virtual memory is a concept that is common to most modern operating systems. It comes from the fact that random access memory (RAM) is relatively expensive and disk storage space is relatively cheap (I never would have believed--even a few short years ago--that I could own a home computer with several gigabytes of disk storage). Operating systems have real trouble when they run out of places to put data, instructions, and other critical processing information (they usually crash). Because it is unlikely that you will have the money to buy so much RAM that you could never run out, operating systems have designated certain places on their disk drives to store overflow information that cannot fit into RAM (see Figure 2.7).

This wonderful convenience comes with a price, of course. Central processing units access instructions and data that are located in memory, not directly on disk drives. Therefore, you have to develop a process that transfers overflow information from memory to disk and then brings it back again when it is needed for processing. Transfers to and from disk drives are many orders of magnitude slower than the transfers between RAM and the central processing unit. Therefore, if your system is swapping information to and from virtual memory, your performance usually degrades substantially. Operating system vendors have been studying this problem for some time and do have a few tricks that they implement to minimize this performance impact. They track which components are frequently used (for example, the instructions within a given subroutine that is being iterated through for a long-running application) and those that are not. When it comes time to transfer information to virtual memory, the less frequently used information is transferred to disk. (These tricks help somewhat, but in the performance tuning section, I will get on my soapbox about how you have to work extremely hard to minimize swapping if you want to get the optimal performance out of your system.)

FIGURE 2.7. Basic concepts behind virtual memory.
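If you want to see the split between physical memory and the page file (the disk-based backing store for virtual memory) on your own system, a few lines of Win32 C will report it. This is a minimal sketch using the standard GlobalMemoryStatus call:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Ask the Virtual Memory Manager's bookkeeping for the totals. */
    MEMORYSTATUS ms;
    ms.dwLength = sizeof(ms);
    GlobalMemoryStatus(&ms);

    printf("physical memory: %lu KB total, %lu KB free\n",
           (unsigned long)(ms.dwTotalPhys / 1024),
           (unsigned long)(ms.dwAvailPhys / 1024));
    printf("page file:       %lu KB total, %lu KB free\n",
           (unsigned long)(ms.dwTotalPageFile / 1024),
           (unsigned long)(ms.dwAvailPageFile / 1024));
    return 0;
}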

The last item in the Windows NT Executive is the Security Reference Monitor. Its position at the end of this list does not mean that security is a trivial consideration in an operating system or that Windows NT is in any way lax on security. In fact, it is one of the few commercial operating systems that has been rated at the C2 level by the United States government. You can get a publication known as the "orange book" to see what all this means, but you get the general idea—it has a much higher security rating than the vast majority of operating systems that are out there on the market today.

Where does the Security Reference Monitor fit in to this security picture? Basically, it uses its privileged position within the Windows NT Executive to ensure that privileges are correct before access is given to operating system resources such as disk files or printers. It receives the request for validation of user ID and password when the user first logs on and checks this against the list of valid domain or server accounts. Then it sends a notification to the requesting process as to whether this login is permitted or not. In domains, it also sends a security key to the requesting workstation that allows this workstation to access resources without going through the server. When it comes time to access a resource, the request is routed through the Security Reference Monitor on the computer that owns the resource. The request is validated by comparing the privilege assignments for that resource with the security key associated with that user ID.

The Security Reference Monitor also takes the results of its privilege authentication efforts to generate information for the auditing records kept by the operating system. These audit trails enable administrators and security types to see what has been happening from an access point of view to look for warning signals (like hundreds of invalid login attempts spaced only seconds apart, which is probably a sign of hacking) and also see general trends (how many users are logging in at a given hour of the day).

Working through the Windows NT kernel components shown in Figure 2.4, you come to the kernel itself. It is the heart of the Windows NT operating system and provides most of the central coordination involved with keeping the computing work going. You can think of it as the component that tells the central processing unit which tasks to perform. It is also the component that is designed to split work between multiple processors, if your computer has them. The basic units of assignment for processing are the threads that were discussed earlier.

You can see some of the influences of the VMS architecture in Windows NT when you look at the 32 priority levels that are available for threads under NT. The really high-priority items are reserved for real-time events, which are typically tied to a fixed schedule (such as a timer for a critical system process going off). The lower-priority (variable) items still need to get completed, but only when there are no high-priority threads waiting for service. The kernel is always in memory and can run on multiple processors.

There are two basic types of objects that the kernel deals with. The first type (dispatcher objects) is used to cause actions and to keep all activities of the system flowing in the proper order. The second type (control objects) is used to control the kernel itself, but it does not cause dispatches. Key examples of this second type are the interrupts that are used by devices to indicate that they are ready for service.

Before you leave the kernel, remember this: the best type of kernel is the one that you never see. In older flavors of UNIX, you often had to adjust kernel settings to improve the performance of your system. This was very risky. On most operating systems, if the kernel is not adjusted properly, you are in real trouble. You often get the whole system crashing at seemingly random intervals and other confusing behavior. The good news about Windows NT is that Microsoft has put in algorithms that let the kernel tune itself for the most part. When everything is working properly and your system is just churning away, life is good.

The final component is the Hardware Abstraction Layer. This is a really neat concept. One of the greatest blessings of modern computer architectures is that you have such a wide range of choices to find components that meet your needs. The bad news is that there are so many choices, selecting the right products and implementing them correctly can be a nightmare. There are dozens of different PC motherboards out there and that covers only the Intel-based architectures. It does not even begin to deal with the details of DEC Alpha, MIPS, PowerPC, or the other architectures.

The Hardware Abstraction Layer is designed to keep the rest of the operating system manageable. It takes the unique requirements of a given type of hardware system and translates them into a standard set of interfaces to the other components in the operating system. In that sense, it is similar to the device drivers in the I/O Manager. Examples of problems that it deals with are how to send signals to the various processors in multiprocessor systems. The Hardware Abstraction Layer is called by various device drivers to deal with the wide variety of I/O architectures available (SCSI, PCI, IDE, and so forth). It does this while trying to keep itself to a relatively thin layer, thus improving performance of the operating system.

This concludes my brief introduction to the key components of the Windows NT operating system that operate in the kernel mode area. They are truly the guts of the operating system, and you will be using them every time you access NT. The good news is that they function fairly well without your intervention. It is interesting to study their highly modular design to understand how the Windows NT operating system adapts to a wide range of hardware platforms.

Hardware Components


Unlike many other computer system vendors, Microsoft does not supply most of the hardware along with its operating systems. Yes, it does dabble in mice, keyboards, and a few things like that, but for the most part, you integrate hardware systems from many hundreds of vendors. The Intel world offers the widest range of vendors, and this makes the price and performance competition most intense in this environment. The other hardware environments tend to be more limited in terms of the number of vendors offering products, which reduces the hardware integration options. Magazines and the Internet are the best places to keep up with the latest advances in this technology.

There are a few additional points regarding the hardware architecture for Windows NT. The first point—that I cannot stress enough—is that you have to look at the hardware compatibility list for Windows NT before you buy the products. Windows NT developers have never tried to build drivers for every imaginable piece of hardware. Not all the hardware vendors who are looking to build cheap, home PC products have devoted the effort that it takes to build NT drivers; instead, they focused on Windows and Windows 95 drivers. You must also be very explicit. You need to check that the model X external disk drive from vendor Y is on the list. It does not help if there are disk drives from vendor Y on the list unless it is a model X external disk drive.

A second point is that you should really ensure that your hardware is completely set up before you try to install Windows NT. Windows 95 is a great tool to use to ensure that your configuration is correct. It has been designed to be easy for the very casual home user. There are a large number of Wizards, which seek out hardware configuration information and work to resolve conflicts. The classic examples of this are the interrupts and input/output memory addresses for add-in cards on an Intel-based computer. This can be quite tricky, because there are only 16 interrupts and you can use almost all of them if you have a PC with a sound card, CD-ROM drive, and network card. (I have one machine that uses all the interrupts; there are none available for future expansion.) Windows NT, on the other hand, is typically installed on higher-end workstations. Although you can resolve problems with Windows NT installed, the relatively long boot and reinstallation process with Windows NT can make this a slow process (if something goes terribly awry). Therefore, unless you have bought an integrated hardware platform that has all the conflicts already resolved and the interrupt/memory/version information documented for all the peripherals, you might want to solve these problems with a simpler operating system first and then install Windows NT on top of this configuration.



You can save a lot of time when installing the operating system if you ensure that the hardware is set up correctly and free of any conflicts (try loading Windows 95 on the machine to help debug hardware conflicts).

The final point is that your hardware configuration strongly affects your performance. This may seem obvious to most people, but I want to emphasize it here in the context of a BackOffice server. Most PC hardware has been designed for simple desktop computers. Its performance is good enough to keep up with the needs of your average users running their word processors. These products were not designed for a large PC server that is running a mail server with dozens of users attached to it, along with a World Wide Web server and dozens of other users who are just sharing files with one another via the server. Therefore, you have to be more concerned with getting higher-performance components in the PC market when you are picking out a server. Most workstation configurations will actually work as a server (assuming you have enough memory and disk space), but they will soon overload if you have more than a few users. What is enough? That is the subject of performance monitoring and tuning: Chapters 4, "Monitoring Environment," and 12, "Windows NT Performance Tuning."

NT 4.0 Versus NT 3.51


How has the architecture changed for Windows NT 4.0 over that of Windows NT 3.51? The folks at Microsoft put a lot of effort into version 4.0 and it looks radically different. However, as a tribute to the original architectural design, the architecture did not change for this release very much. Instead, Microsoft made modifications to a few of the subsystems and implemented a new graphical user interface and its associated application programming interfaces.

The biggest and most noticeable change is that Windows NT 4.0 now sports the graphical user interface first introduced in Windows 95. Figure 2.8 shows the traditional graphical user interface that Windows users have grown accustomed to over the years. You deal with the operating system through a shell known as the Program Manager. There are a number of common tools to manipulate files (File Manager) and perform system configuration functions (Control Panel). The applications run inside windows (areas on the screen with borders and control buttons), which are customized to the needs of the application.

Next up is the Windows 95 and Windows NT 4.0 graphical user interface, which is shown in Figure 2.9. First, why change something that has sold millions of copies? The folks at Microsoft were quite careful about this process and got user interface experts to run experiments on people to see what functions of the graphical user interface were most intuitive and productive for the users. A few of the things that they implemented based on this study were the desktop, new tools for finding files, and increased use of the right mouse button and property pages.

FIGURE 2.8. Traditional Windows graphical user interface.

FIGURE 2.9. The Windows 95 and Windows NT 4.0 graphical user interface.

The desktop is probably the first thing you notice when entering Windows NT 4.0. Instead of being trapped inside the Program Manager application for processing support, you have a desktop that consumes the entire screen. You can access pretty much any application by creating a shortcut to that application and placing it on the desktop. It makes it easy to customize your environment to your personal taste. Each user ID also has its own environmental settings so that users can determine the look and feel of their environments.

The file location tools are also a big change in the NT 4.0 interface. You have My Computer, Network Neighborhood, and Explorer to work with as opposed to File Manager (which is also there in case you want to stick to stone knives and bear clubs). These tools enable you to navigate files located on your network as easily as you can find them on your local hard disk drive. You have a number of options for display and navigation. There is a heavy emphasis on drag-and-drop for copying or moving files. (I found that after learning and using these new tools, I have real trouble going back to File Manager on those servers that I work with that are still running NT 3.51.)

There is also increased use of the right mouse button and property pages. The right mouse button under Windows NT 3.51 is a rarely used feature, even though most users sit with fingers poised over both of the buttons on a traditional two-button PC mouse. The graphical user interface designers concluded that it was easier to teach people to use the second button on a mouse to call up menus than it was to have them move the mouse to the top of the screen and then find a function in a pull-down menu. Therefore, they implemented pop-up menus that are activated by right-clicking on an object (such as a desktop shortcut or a file). The most used feature from this pop-up menu is the properties page (see Figure 2.10 for an example). This is a series of one or more dialog boxes that have tabs at the top or side to help you locate a particular piece of information. Instead of looking for a setup or configuration utility located somewhere in the same directory as your application or actually running the application and looking for options or settings menu selections, you point your mouse at the application, right-click, select Properties from the pop-up menu and then set the properties to suit your needs. It takes some getting used to but, once again, after learning this new environment, I hate going back to the old one.

FIGURE 2.10. Properties pages.

A few of the Windows NT utilities and services have changed their names and functions. Highlights of these changes include the following:

In conclusion, even though the two versions of the operating system look radically different, they are architecturally the same. There have been substantial changes, especially to the user interface functionality, but the modular nature of the architecture accommodates these changes well. By now, you should have an appreciation for the basis of the operating system. If you are searching for more details, check out the Windows NT Resource Kit or articles on the Microsoft TechNet CD-ROM or Microsoft Web page (www.microsoft.com).

BackOffice Architecture


As stated earlier, BackOffice is designed for the Windows NT environment. Therefore, it is safe to say that the BackOffice architecture is the Windows NT architecture. However, that is not the complete picture. How do the BackOffice components interface with Windows NT to get their jobs done? The simple answer is that they are Win32 applications (the 32-bit Windows application programming interface was used to build these applications). Most of these applications are implemented as services with multiple threads.

Let's take a moment to look at the concept of a service. The standard definition goes something like "services provide system functions." That never seemed to be enough for me. A better definition is that a service is a specific type of application that is started and run by the operating system. Whereas the normal applications that you run are terminated when you log off the Windows NT computer, services are started and stopped by a Control Panel utility and run whether anyone is logged in to the system or not. They can be configured so that they start up automatically upon system startup. You can even write batch files that start and stop services as needed, and programs can control services through an API, as shown in the sketch that follows. The key is that services depend only on the operating system, rather than any single user logged in to the system.
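To give a flavor of that programmatic control, here is a minimal C sketch that asks the Service Control Manager to start a service by name. MyMailService is a hypothetical service name used only for illustration, not a real BackOffice component.

#include <windows.h>

/* Start a service by name through the Service Control Manager.
   The name passed in (for example, "MyMailService") is hypothetical. */
int StartServiceByName(LPCSTR name)
{
    SC_HANDLE scm = OpenSCManager(NULL, NULL, SC_MANAGER_CONNECT);
    SC_HANDLE svc;
    int ok = 0;

    if (!scm) return 0;
    svc = OpenService(scm, name, SERVICE_START);
    if (svc) {
        ok = StartService(svc, 0, NULL); /* nonzero on success */
        CloseServiceHandle(svc);
    }
    CloseServiceHandle(scm);
    return ok;
}

/* Usage: StartServiceByName("MyMailService"); a matching routine
   using ControlService() with SERVICE_CONTROL_STOP stops it. */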

This is very useful for most BackOffice components. You would hate to have to leave your workstation logged in to a special privileged account all the time, especially if your server is located in your office environment as opposed to a protected data center. You want your mail server or Web server operating around the clock ready to service user requests as they come in. Windows NT is not like the mainframe environment where you might shut down time sharing operations after normal business hours to begin batch processing. You can access it even when you are working into the wee hours of the morning.

Another important feature is that most services are implemented so that they can be accessed through application programming interfaces. You want a means for your client workstations to access the central mail server or Web server to transfer information. This also is important for client-server database communications. It is especially important to Microsoft’s goal of having an extensible architecture where BackOffice components can work together and users can build their local applications so that they can interface to BackOffice.

It is important to emphasize that BackOffice is dependent on the Windows NT architecture. In the DOS world, the operating system did not do all that much for you other than provide a way of executing applications. Operating systems such as Windows NT provide advanced services, such as network connectivity, printer drivers, and even fonts that your application can access. Therefore, BackOffice components enable you to focus on the job at hand (for example, processing electronic mail messages) rather than worrying about the low-level services. Microsoft tends to make its programmers use the operating system’s services more than many other companies who think that they know better ways to get the job done. Therefore, you need to be aware of the services provided in the Windows NT operating system to fully control the BackOffice suite.

The Windows Interface


Another fundamental standard that defines the BackOffice environment is the Windows interface. Although it is not always observed by every vendor out there, it is followed well enough to enable you to pick up the operation of almost all Windows applications in a relatively short period of time. There are a couple of levels to this interface to look at in this chapter:

The programmatic interface that applications use to access operating system services

The graphical user interface standards that govern how applications present themselves to users

The first function, providing the basic interface of applications to the operating system, has been around since the earliest operating systems. The key here is that there is a wider range of services provided by the operating system. This application programming interface is quite extensive and growing. There is also another important set of interfaces in the BackOffice architecture: those of the BackOffice components themselves. This can become quite a challenge when you consider the number of new APIs that are coming out; some months, several new APIs are released. This is good, because software components are beginning to work together more and use one another for services, thereby making each application simpler. However, it can be a lot to keep straight as a programmer.

Some of the new technologies that Microsoft is coming out with create other interfaces between applications and the operating system. One that is particularly interesting is the Component Object Model (COM), which includes Object Linking and Embedding (OLE). There are a lot of technical details to these technologies, but the key to focus on is that they provide ways in which you can access specific functions within another application if that application has been written to enable this to happen. You can, for example, call up a fully functional spreadsheet in the middle of your word processing document. It is not just there for display purposes. You can actually alter the data and formulas to produce the desired results. You will soon see some of this technology in the basic user interface, wherein Web browsing technology will be incorporated into a window in the Explorer tools.
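As a bare-bones sketch of what this looks like to a C programmer, the following fragment initializes the COM library and creates a component by its class ID. The CLSID value shown is a made-up placeholder (a real component's registered class ID would go there), and a real build also needs the system GUID definitions linked in.

#include <windows.h>
#include <ole2.h>

int main(void)
{
    /* Hypothetical class ID standing in for a registered component. */
    CLSID clsid = { 0x12345678, 0x0000, 0x0000,
                    { 0xC0, 0, 0, 0, 0, 0, 0, 0x46 } };
    IUnknown *pUnk = NULL;
    HRESULT hr;

    if (FAILED(CoInitialize(NULL))) return 1; /* start COM for this thread */

    /* Ask COM to create the object and return its IUnknown interface. */
    hr = CoCreateInstance(&clsid, NULL, CLSCTX_INPROC_SERVER,
                          &IID_IUnknown, (void **)&pUnk);
    if (SUCCEEDED(hr)) {
        /* QueryInterface here for whatever interfaces the
           component actually exposes, then use them. */
        pUnk->lpVtbl->Release(pUnk);
    }

    CoUninitialize();
    return 0;
}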

Another key that you should be aware of in the Windows NT environment is the use of common components by multiple applications. The primary means for doing this is the dynamic link library (DLL). Here, you have a stored series of functions that any application can call. The DLL files are located either in your current working directory or in your file search path. A common example is that in a subdirectory of the Windows NT main directory (\winnt\system32), you find a number of application and operating system DLLs, and this directory is in your search path by default.
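Here is a minimal sketch of the run-time side of this mechanism; mathlib.dll and AddNumbers are hypothetical names used purely for illustration:

#include <windows.h>
#include <stdio.h>

typedef int (*ADDFUNC)(int, int);

int main(void)
{
    /* LoadLibrary searches the working directory and the system
       search path (including \winnt\system32) for the DLL. */
    HINSTANCE hLib = LoadLibrary("mathlib.dll");
    ADDFUNC add;

    if (!hLib) return 1;

    /* Look up a function exported by the library by name. */
    add = (ADDFUNC)GetProcAddress(hLib, "AddNumbers");
    if (add)
        printf("2 + 3 = %d\n", add(2, 3));

    FreeLibrary(hLib); /* release the library when finished */
    return 0;
}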

This DLL concept sounds nice—and in most cases it is. It saves programmers from having to code every function in every application from scratch. They can call DLLs that have been tested and are therefore fairly reliable. The disadvantage to this concept is that a number of vendors feel that it is necessary to have updated versions of commonly used Windows DLLs to make their applications work better. This can sometimes cause problems with other applications that cannot work with these updated DLLs. You should keep an eye out for this when applications start acting "funny" soon after you install a new application.

The final point is that there are graphical user interface (GUI) standards for the Windows interface. The Windows API is a collection of basic, low-level functions that are supported by the operating system (such things as windows that are fixed size, windows that can be resized, and so forth). The individual developers could assemble these components in any number of ways to build an application. Although this might be a great victory for individual creativity, it would be difficult for users who had to adapt to a different way of thinking when working in each new application.

To solve this problem, Microsoft has published official graphical user interface standards. For example, if you have a print function for your application and use a pull-down menu system, the print command is normally located under the File pull-down menu. Help is also an option on pull-down menus and is located to the far right. These and hundreds of other little standards help users find things quickly and are an important part of the application environment. The good news is that Microsoft has worked hard in recent product releases to ensure compliance with standards, which makes your job easier in administration and user support.

The complication in this scheme is that the graphical user interface standards for Windows 95 and Windows NT 4.0 have changed in response to the changes in the basic navigation algorithms that are used. It is not as bad as you might think. You may click on folders and now have shortcuts on your desktop to access applications. You also will probably start using your right mouse button to call up properties pages for applications. Once you get used to these new concepts, you will probably even like the new interface better (most people that I have met do after they have had time to get used to it). The good news is that many of the application GUI standards remain the same. For example, Print is still located under the File menu in applications. Microsoft publishes an updated GUI standards guide for the Windows 95 interface (if you have access to the Internet, look under the Microsoft Press section of the www.microsoft.com Web page for the latest version numbers, prices, and so forth).

This section was just a brief introduction to some of the important concepts of the Windows interface and how it relates to the BackOffice applications. Purists may argue that architectures should only include discussions of the main functional components along with detailed specifications of the message formats used to communicate between the components. Perhaps that would be good for a Windows NT operating system internals book. However, this book is focused on BackOffice, the way you access the BackOffice tools, and how those tools relate to the operating system. You should be feeling pretty comfortable with basic operating system concepts by now and ready to take on a few of the newer technologies used to make Windows NT and BackOffice work even better than before.

Advanced Windows Technologies


The good news is that the folks at Microsoft rarely rest on their laurels when it comes to their technologies. It can be a real challenge to keep up with all the APIs, technologies, and add-ons that they keep throwing at us month after month. This section provides a brief introduction to some of the technologies that BackOffice is (or will soon be) using that have recently been introduced as add-ons or new components to Windows NT.

There are entire books devoted to each of the technologies that are emerging in the Windows environment. This section focuses on three key items that may be of interest to general BackOffice users. The chapters in Part VIII, "Integrating BackOffice," cover these and other technologies in more detail. The three technologies important to discuss in this architectural section are the following:

The Distributed Component Object Model (DCOM)

ActiveX components

The Web-integrated desktop (code-named Nashville)

Let's start with DCOM. Recall the previous discussion about the basics of COM/OLE and how it could be used to enable applications to access functions within other applications. This provides a good way to get functionality without having to write the application yourself. The next logical extension of this concept is to apply its principles across the local area network, as shown in Figure 2.11. Under this scenario, you could access application functions from an application located on the server. Why implement another standard when there are already methods of accessing client-server data? The answer lies in the fact that the DCOM architecture enables you to access a wide range of functions within a wide range of applications. The existing standard protocols are designed mostly for information transfer, not for using functions from applications on other machines.

FIGURE 2.11. Basic COM and DCOM Concepts.

ActiveX technologies also enhance the modular nature of the operating system and serve to help simplify your locally written applications. Basically, ActiveX applications are modules that you can download from the network and call to perform specified functions. Examples of functions that can be performed by ActiveX components include playing MPEG videos or viewing a document prepared in Adobe Acrobat format. These can be much more than simple functions. In the case of the Acrobat viewer, it is a relatively complex view-and-print environment for authored documents (including text, graphics, and other information).

There is one particularly interesting concept for ActiveX components as they apply to Web pages. Imagine that you are on a Web page that has content that requires a specific ActiveX component in order to view it, as shown in Figure 2.12. The HTML code can contain the unique ID number of the ActiveX component that is needed. Your browser will check to see whether you have that component, and if not, you can download it and then run it. There are technologies involved with these components to validate that you got the correct component (for example, that it is not something that calls itself the Acrobat viewer but is actually a virus written by some hacker). This technology is still in its early days and is focused on Web uses, but it could well be extended to any number of operating system needs. Only time will tell.

FIGURE 2.12. ActiveX components.

The final technology is generally classified under the development code name from Microsoft of Nashville. It is a revision to the Windows 95 and Windows NT 4.0 user interface that integrates Web browsing directly into the desktop environment. Microsoft is still working out the final details, but the prototypes that I have seen make accessing information located on a Web site (either intranet or Internet) as easy from the desktop as it is to access local hard disk drives or shared network drives on current networks. It also enables you to use the current browser and search interfaces to move around between information sources.

Integrated Security


It's time to look a little closer at the integrated security of the Windows NT operating system. Chapter 3 covers security in more detail, but for now let's check the following points related to security:

Security is built into the operating system itself, not added on

A single login gives access to all resources in a domain

Security is applied at the operating system level to all sharable resources

Security can extend to the individual file and directory level under NTFS

Applications can query the security system through APIs

BackOffice components make full use of the Windows NT security model

The first key feature of Windows NT security is that it is part of the operating system itself. In many environments, such as mainframes, you purchase add-on packages from IBM or other vendors that perform the security checking beyond the very rudimentary features of the operating system. The good news is that this provides some competition in the market. The bad news is that because it is an add-on, there are many more ways to get around the security system, and the security systems have to be more complex to deal with these holes in the operating system. There are interfaces into the Windows NT security system that enable you to use third-party security packages or even make one of your own (in your spare time). However, because the checking is integral to all operating system operations, it is much harder to bypass.

The next good feature of Windows NT security is that it is based on a single login model. I have worked in environments where you had to set up users with logins on the Novell servers, each of the UNIX servers, and on the local computers (for password protection on the screen savers). Although all the user IDs were the same (for example, jgreene), the passwords had to be synchronized manually by the users (that is, using passwd on UNIX, using control panel to alter the local screen saver password, and running the Novell password utilities). These accounts had different expiration times, so it became easy for users to forget to change some of the passwords, forget which ones were which, and then have to call the system administrators to get their accounts straightened out. There were also always the loud complaints about having to reenter a password every time a user moved to a new server or logged in to a new database.

In Windows NT networks, you basically get a single login ID for access to all resources when you are set up in a domain (more on domains versus workgroups in the next chapter). You log in once (when you log in to your Windows NT or 95 workstation) and then continue to access resources on other machines in the domain based on the privileges that are associated with this one account. The operating systems coordinate, behind the scenes, who you are and what your access rights are. In this fashion, you only have one account to maintain and one password to remember.

Another good feature of Windows NT security is the fact that it is applied at the operating system level to all sharable resources. These resources include such things as directories and printers. As part of the process of telling Windows NT that you wish to make these resources available to other users on the network, you specify which users or groups have access rights. You can also specify, as in the case of shared directories, what level of access rights a user has. For example, I could declare that one person is limited to read-only access to a particular directory whereas another can read and write data to this directory.

Security can also be extended to the individual file and directory level if you have implemented the NT File System (NTFS) on your disk drives (covered in detail in Chapter 10, "Setting Up Windows NT"). The DOS operating system used a scheme for storing data on disk drives known as the File Allocation Table (FAT) system. This dates from the early days, long before the reliability and security concerns for computers were as high as they are today. This system worked fairly well, but it was designed in an era where you wanted to keep things simple to work on the less-powerful computers of the day.

When designing Windows NT, however, it was decided that a more complicated and powerful data storage scheme was needed for those users who wanted to implement features such as file level security. The design team came up with NTFS. Windows NT users have the option of choosing between FAT (which provides compatibility with DOS partitions on your PC) and NTFS (which offers advanced features such as file/folder level security). I tend to run servers using NTFS for security (people cannot reboot the server using a DOS disk and access data from an NTFS hard drive) and FAT for workstations that have both Windows 95 and Windows NT partitions.
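When it matters which file system a volume actually uses (for example, before relying on file-level security), a short Win32 call answers the question. This is a minimal sketch using GetVolumeInformation:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* File-level security is only available when the answer
       comes back "NTFS" rather than "FAT". */
    char fsName[32];

    if (GetVolumeInformation("C:\\", NULL, 0, NULL, NULL, NULL,
                             fsName, sizeof(fsName)))
        printf("C: is formatted as %s\n", fsName);
    return 0;
}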

Another good feature of Windows NT security is that it has APIs that enable you to query the security system from within your applications. The earliest PC-based applications had almost no security. If you could get to the PC, you had full access to its information. Early Windows-based PC applications tended to implement their own security scheme within the applications. Each one used its own user ID and password scheme or something similar. Some were obviously better than others.

Windows NT makes life a lot simpler for the users. Again, when the operating system does more for programmers, they have more time to focus on the business needs of the application, rather than focusing on building network drivers, print drivers, and security schemes into their applications. Because you have already gone through the validation process to get to the operating system, why not just ask the operating system who the individual is in some secure way (so that hackers cannot intervene)? You can then validate the user’s access privileges and get on with the main business of the application.
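A minimal sketch of asking the operating system who the individual is looks like the following; a real application would go on to check the returned account against its own privilege rules rather than prompting for yet another password:

#include <windows.h>
#include <lmcons.h>   /* for UNLEN */
#include <stdio.h>

int main(void)
{
    /* The account name comes from the already-validated NT logon,
       so the application does not need its own password scheme. */
    char user[UNLEN + 1];
    DWORD size = sizeof(user);

    if (GetUserName(user, &size))
        printf("operating system account: %s\n", user);
    return 0;
}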

BackOffice makes excellent use of Windows NT security. With the exception of Microsoft Mail Server, which dates from an earlier, simpler time, the components of BackOffice make calls to the operating system security system to see who the user is and then determine what their privileges are within the application. When additional information about the user is required, it is integrated with the basic Windows NT tools. For example, when you are running Microsoft Exchange Server, it requires more information about the user than the operating system collects (address information, signatures for messages, mail options, and so forth). When you create a user with User Manager on NT, you get a second screen that enables you to fill in the Exchange Server parameters immediately after you complete the basic operating system user information screen.

Integrated Monitoring


One of the keys to being able to manage a system is being able to see what is going on with that system. Windows NT provides a good set of monitoring tools for this purpose. These monitoring tools provide the following basic services:

Recording and viewing of significant system and application events (the Event Viewer)

Measurement of operating system and application performance (the Performance Monitor)

Chapter 4 covers these topics in more detail, but basically these operating system tools provide interfaces so that application information can be recorded in addition to the operating system information. For example, the Event Viewer tool, which is used to see events of interest on your system, can also record events from your locally written applications. This saves you from having to write your own log display tools, maintenance tools, and so forth.
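As a sketch of what this interface looks like from a locally written application, the following C fragment records an informational event that then shows up in Event Viewer. MyLocalApp is a hypothetical event source name; a polished application would also register a message file for its source in the registry.

#include <windows.h>

/* Write an application event into the NT event log. */
void LogAppEvent(LPCSTR message)
{
    HANDLE hLog = RegisterEventSource(NULL, "MyLocalApp");
    LPCSTR strings[1];

    if (!hLog) return;
    strings[0] = message;

    ReportEvent(hLog,
                EVENTLOG_INFORMATION_TYPE, /* severity */
                0, 0,                      /* category and event ID */
                NULL,                      /* no user SID */
                1, 0,                      /* one string, no raw data */
                strings, NULL);

    DeregisterEventSource(hLog);
}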

BackOffice products were designed to make heavy use of the integrated monitoring tools. Figure 2.13 shows the Event Viewer with some SQL Server and Exchange Server information recorded. This principle of integration applies to performance monitoring as well, which is useful because very few other applications come with the capability of seeing what is happening inside the application from a performance standpoint. You know that things are slow, but what is causing the problems? With the monitors built into BackOffice components that integrate with NT’s Performance Monitor tool, you can get the data you need to help solve problems.

FIGURE 2.13. BackOffice component using Event Viewer tool.

Integrated Administration


Recall the virtues of interfacing applications with the Windows NT security system pointed out earlier in this chapter. Recall also that the basic administration tools can be used to capture additional information needed by your applications, as is the case for Exchange Server. When you build extensions to BackOffice components locally (as is discussed in Part VIII of this book), you need to keep integrated administration in mind. Many programmers have the tendency to code things the way they always have. If they need additional information about a user for the application, they build a menu pick into the application menu system or provide a property page that has to be edited for that user. As an administrator, you may find life easier if the programmers instead take a little time to learn the techniques for interfacing with NT’s administrative environment, rather than building yet another administration application that the system administrator has to run after creating the user’s account.

Summary


This chapter provides a basic introduction to the Windows NT and BackOffice architectures. There are papers and entire books available on this subject. You can look in the Windows NT Resource Kit, TechNet CD, or visit the Microsoft Web page for more information. However, the goal here is to give you a basic overview of the operating system that is good enough for the BackOffice administrator and/or user. Sams Publishing offers another book in the Unleashed series that goes into more detail: Windows NT 4.0 Server Unleashed.

Here are the more important themes related to the architecture:

In the next several chapters, you go into a little more detail about the various major environments within BackOffice. Chapter 3 covers the security environment. As stated before, Chapter 4 discusses the monitoring environment. Finally, Chapter 5, "Administrative Environment," reviews administration in a little more detail.
