Part 3: Logical units
I concluded part 2 by saying that the business logic must be accessible from the outside world through some remote interface. What "outside world" really means remains to be defined in a future post. For now it shall suffice to say: accessible by a remote front end. This interface must reflect the whole functionality of the application. If it does, we meet the prerequisites I listed in part 2.
Applications need to be distributed.
If we have a remote user interface layer... Check.
Applications need to be properly layered.
You might frown upon this one. At this point I merely mean that the data access layer is properly separated from the business layer, which in turn is properly separated from the user interface layer. And the ui layer does not have direct access to the database layer (and vice versa). Nothing extraordinary, just well-behaved architecture. Whether this is something we see often is, of course, another story...
Functionality must be accessible individually.
What does this mean? How do you structure an application in blocks or parts or components or services in a way that makes sense? Fortunately it is not up to me to answer these questions. I am in the comfortable position to just demand that these building blocks be there. ;-) To identify them, you can take a look at the use cases of an app (as a first step). Send a message. Dial a number. Send an email. Such simple sentences describe what an actor does to achieve something with or within a system.
Android uses this concept as one of the main building blocks for its applications. An activity is an action (for example, play a song) combined with a particular user interface (what the user sees on screen while the song is playing). If an activity wants to initiate another action, it sends a so-called intent.
Another way to find individually accessible parts may be to look at business processes. Each step may be a candidate for a building block, especially if the step is reused among multiple business processes.
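Use-case-sized building blocks like the ones above could be sketched as narrow Java interfaces. The following is just an illustration; all names are invented, and the in-memory implementation only shows the shape such a component might take:

```java
// Hypothetical sketch: each use case ("send a message", "dial a number")
// becomes its own narrow service interface, so it can be accessed
// individually - and later exposed remotely on its own.
interface MessageService {
    String sendMessage(String recipient, String text);
}

interface DialerService {
    boolean dialNumber(String number);
}

public class UseCaseSketch {
    public static void main(String[] args) {
        // A trivial in-memory implementation, just to show the idea.
        MessageService messages = (recipient, text) ->
                "sent '" + text + "' to " + recipient;
        System.out.println(messages.sendMessage("alice", "hello"));
    }
}
```

The point of cutting along use cases is that each interface stays small enough to be understood, tested, and reused in several business processes.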
Let us stop here for now.
Imagine your enterprise application is well structured. It consists of several components that implement the business logic. They can be accessed through some remote interface. The ui layer is implemented as an individual program running on a pc. It could call the business components through some remote procedure call mechanism. Should it?
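To make the question concrete, here is a minimal sketch (plain Java, no real remoting; the names are invented) of a client calling a business component through an interface that a remoting mechanism could put a proxy behind:

```java
// The business component's public interface. In a real deployment a
// remoting framework (RMI, CORBA, ...) would provide a proxy for it;
// here a local implementation stands in for that proxy.
interface OrderService {
    double totalFor(String orderId);
}

public class RpcSketch {
    public static void main(String[] args) {
        // The client only sees the interface; whether the call crosses
        // the network is a detail of the implementation behind it.
        OrderService service = orderId -> "42".equals(orderId) ? 99.5 : 0.0;
        System.out.println(service.totalFor("42"));
    }
}
```

The client code would look the same whether the implementation is local or remote; that is exactly what makes the "should it?" question interesting.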
Part 2: A change for the worse?
In the first part we recalled that using full-fledged pcs for running the user interface of enterprise applications had become expensive. Each new application required a new front end on the client, which in turn reached its limits increasingly often. Keep in mind that at that time (the first half of the 2000s) there were no dual cores, no gigabytes of ram, no 64 bit systems, no gigabit Ethernet - at least not in the offices.
So, was the idea of building the user interfaces of enterprise applications with traditional client technologies bad? By no means. It offered tight integration with the client, for example accessing local hardware (printer, scanner, chip card reader, ...) or communicating with other apps. Today, Android developers take it for granted that they can utilize functionality of other apps simply by firing and consuming intents. In the early 2000s (and even before) that would have been possible, too. Typical Windows apps relied heavily on the Component Object Model, which exposed functionality of a program to other apps. Sadly, competing technologies relied on incompatible object models. Out of the box, it was impossible to have a Java Swing-based client app talk to, say, MS Office, and vice versa. The constraints imposed by the hardware have already been mentioned. As I wrote in the first part of this series, the solution seemed simple.
A web browser seemed like a reasonable execution environment for user interfaces. If the user interface is rendered by the browser, there is no need for an additional rollout when a new enterprise application is introduced. What the browser would render had to be prepared by the backend and then sent to the client. Hence, each transmission contained both data and display instructions. User input was sent back to the backend and processed. Early web frameworks produced user interfaces that could not compete with well-designed rich client applications: no validation of user input, bad usability, delays due to server roundtrips, ... Even a decade later some aspects still require ridiculous workarounds. For example, have you ever asked yourself why generally agreed-upon shortcuts (hotkeys) are not used in web-based apps?
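The "data plus display instructions" point can be illustrated with a tiny sketch (all names invented): instead of transmitting just the data, the backend now regenerates and transmits markup that embeds the data on every roundtrip:

```java
public class ServerRenderedSketch {
    // What a binary protocol between ui and business layer would carry:
    // just the data itself.
    static String data(String name) {
        return name;
    }

    // What an early web framework sends instead: the same data wrapped
    // in display instructions, rebuilt on every server roundtrip.
    static String html(String name) {
        return "<html><body><h1>Hello, " + name + "</h1>"
             + "<form action=\"/save\"><input name=\"user\"/></form>"
             + "</body></html>";
    }

    public static void main(String[] args) {
        // The payload still contains the data...
        System.out.println(html("Alice").contains("Alice"));
        // ...but is considerably larger than the data alone.
        System.out.println(html("Alice").length() > data("Alice").length());
    }
}
```
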
Anyway... This is not meant to be a rant against certain technologies. I am merely trying to set the stage for what I would like to call the mobile enterprise, that is, how organizations and their applications can embrace mobile devices. To do this, quite a few prerequisites must be met. A few of them are:
- Applications need to be distributed.
- Applications need to be properly layered.
- Functionality must be accessible individually.
If a physically distant client program is used as the user interface of an enterprise application, there MUST be a public interface. Whether it is well-written and thoughtfully designed remains to be seen, but at least it is there. My experience is that in typical web apps the separation between business logic and the ui layer is often fuzzy, if present at all. If everything melts into one single .war or .ear file, why bother with a costly separation of layers? Test yourself: what is a front controller, and what is a business delegate?
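For readers who hesitated at that question: a front controller is a single entry point that dispatches incoming requests to handlers, and a business delegate shields the ui layer from the details of looking up and calling the business service. A compact sketch, with all names invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Business delegate: the ui layer calls this instead of the remote
// service directly, so lookup and remoting details stay hidden here.
class GreetingDelegate {
    String greet(String user) {
        // In a real app this would locate and call the remote component.
        return "Hello, " + user;
    }
}

// Front controller: one entry point that maps request paths to handlers.
public class FrontControllerSketch {
    private final Map<String, Function<String, String>> routes = new HashMap<>();

    FrontControllerSketch() {
        GreetingDelegate delegate = new GreetingDelegate();
        routes.put("/greet", delegate::greet);
    }

    String handle(String path, String param) {
        return routes.getOrDefault(path, p -> "404").apply(param);
    }

    public static void main(String[] args) {
        FrontControllerSketch controller = new FrontControllerSketch();
        System.out.println(controller.handle("/greet", "Bob"));
        System.out.println(controller.handle("/missing", "Bob"));
    }
}
```

Both patterns exist precisely to keep the ui layer and the business logic apart - which is why their absence is a warning sign.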
The takeaway of this part: the need to properly structure an application and to establish well-defined interfaces is as urgent as ever. How this can be achieved shall be the topic of a future installment.
Whether the market wants another desktop-like system remains to be seen. Still, the Chromebooks have been quite a success. If Google is really planning to phase them out, Mountain View needs to make sure that the key advantages of Chrome OS are present in a future Android, too. Among others, these are...
- low maintenance costs
Part 1: Once upon a time
For a long time, the basic building blocks of enterprise applications were easy to choose: a programming language, a distributed component model infrastructure, a relational database management system and a ui library. The database often resided on a dedicated database server, an (app) server hosted the business logic, and the ui was put on the client. The client was usually a Windows-based pc, running apps written in C++, Java, Basic, Pascal, or any other language the developer saw fit, the only prerequisite being access to some graphical user interface toolkit. Conceptually, each enterprise application lived in its own world. Exchange of data with one of the few other applications was neither planned nor wanted. Why would department A share its information with department B?
And then came the problems.
Throughout the years, business processes became more complex. What once was done in one department, became a shared effort among several business units, requiring the use of several programs. Consequently, the users wanted the applications to cooperate with each other.
And then came the complaints.
Rolling out client software became expensive, time-consuming, prone to error. Building the user interface was said to be expensive, too. As was the necessity of frequently updating the hardware: more programs on the pc required more ram, bigger hard drives, faster cpus, networks with higher bandwidths. The solution seemed simple. If rolling out the ui is expensive, why roll it out at all? If upgrading the pc is expensive, why do an upgrade at all? The rise of the web brought a browser to every client (pc). Hence, wasn't it natural to use it as a runtime environment for the ui?
Let us stop here for a moment. As I have said at the beginning, enterprise applications used to be distributed: different layers ran on different pieces of hardware. Usually the ui layer (a program on a desktop pc) communicated with the business logic layer using some binary protocol, for example IIOP, RMI over IIOP or T3. The amount of data that needed to be transferred depended on the interface the business logic provided. If it was well designed, only small amounts of data had to be transmitted. And that data was just... data.
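The point about interface design and transfer volume can be sketched like this (all names invented): a coarse-grained interface bundles what the ui needs into one small transfer object, so a single call replaces several chatty, fine-grained ones:

```java
// Invented example: a small transfer object carrying exactly the data
// the ui layer needs - and nothing else.
class CustomerSummary {
    final String name;
    final int openOrders;

    CustomerSummary(String name, int openOrders) {
        this.name = name;
        this.openOrders = openOrders;
    }
}

interface CustomerFacade {
    // One remote call (one network roundtrip) instead of separate calls
    // like getName(), getOrders(), ...
    CustomerSummary summaryFor(String customerId);
}

public class CoarseGrainedSketch {
    public static void main(String[] args) {
        CustomerFacade facade = id -> new CustomerSummary("ACME", 3);
        CustomerSummary s = facade.summaryFor("c-1");
        System.out.println(s.name + ": " + s.openOrders + " open orders");
    }
}
```

With a binary protocol such as IIOP or T3, what goes over the wire here is just this small object - data, nothing more.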
As we shall see in the second installment, this was going to change...
// manage your api key at http://www.example.com
private static final String apiKey = "...";
I think we agree that hinting at where to manage the api key is sensible. Any developer maintaining this code may have to manage the key. But if we remove the comment, we need to pass the info elsewhere. I doubt that we should name a variable apiKeyCanBeManagedAtHttpWwwExampleCom. Should we?
My machine is a Surface 3 Pro. When connected to the so-called Type Cover, it is an ordinary Windows 10 PC. The Type Cover has a keyboard and a trackpad that controls mouse pointer movements. In this mode, of course, double clicks on tree views work flawlessly. Touch mode kicks in when the Type Cover is removed. You can still see and use the desktop, and you can still use all apps. There is no mouse pointer, however, so which object is accessed depends on where you touch the screen with your finger. Single taps work like single mouse clicks. Double taps work like double clicks. Well, or should. To see if Java or Swing have issues here, I ran a pre-compiled SwingSet2. Double taps work as expected. So, I then wrote a small program that uses both JavaFX and Swing. Here is the source. And this is what it looks like:
Tap detection works as expected, too. At least most of the time. Once in a while the double tap does not get delivered, though.
At that point, I decided to get the NetBeans sources and try to debug them. Building does take some time, but in the end I was able to debug NetBeans - in NetBeans. Guys, this is awesome. I decided to debug org.openide.explorer.view.TreeView. It attaches an instance of PopupSupport, which is registered through addMouseListener(). Everything is fine here. Debugging shows that mouseClicked() is correctly called twice when not in touch mode, but only once when touch mode is active. When or where the tap gets lost still needs to be investigated. As of today I would assume that NetBeans has nothing to do with this strange behaviour.
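To see how many clicks Swing actually reports, a listener only needs MouseEvent.getClickCount(). Here is a minimal, headless-friendly sketch; the component and listener are made up, but getClickCount() and the MouseEvent constructor are the real Swing/AWT API:

```java
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import javax.swing.JPanel;

public class ClickCountSketch {
    public static void main(String[] args) {
        JPanel panel = new JPanel();
        MouseAdapter listener = new MouseAdapter() {
            @Override
            public void mouseClicked(MouseEvent e) {
                // On a working double click (or double tap) this is
                // invoked twice, with click counts 1 and then 2.
                System.out.println("clickCount=" + e.getClickCount());
            }
        };
        panel.addMouseListener(listener);

        // Simulate what the toolkit should deliver for a double click.
        long now = System.currentTimeMillis();
        listener.mouseClicked(new MouseEvent(panel, MouseEvent.MOUSE_CLICKED,
                now, 0, 10, 10, 1, false));
        listener.mouseClicked(new MouseEvent(panel, MouseEvent.MOUSE_CLICKED,
                now + 50, 0, 10, 10, 2, false));
    }
}
```

In the touch-mode failure described above, the second invocation simply never arrives, which suggests the event is lost below the listener level.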
Today, dear reader, I am for once not writing to you about a technical topic. This is the last post in German. Not because I am closing my blog, but because I have decided to post in English from now on. The only reason for this is to hopefully reach an even larger audience. Please remain fond of Tommis Blog nonetheless. Thank you very much.