Santa and Contradictions

Perhaps you just happened to notice that my previous "proof" that Santa exists could be used for proving other things, too. For example, that Santa doesn’t exist. Or your children have already proved that they don’t need to go to bed.

OK, so what’s the catch? It seems possible to prove anything using this method, even contradictions. Is Mathematics inconsistent, after all? Well, no. It’s just that we’re not used to definitions causing contradictions. This is something that mathematicians realized at the beginning of the 20th century, when they investigated the foundations of mathematics.

For example, Bertrand Russell found a "paradox" when he postulated a set X containing exactly those sets that don’t contain themselves. This leads to a contradiction: Suppose X contains itself. Then, by the definition of X, X must be a set that doesn’t contain itself. Then suppose X doesn’t contain itself. Then X satisfies the defining property, so X contains itself! So both cases give us contradictions. The conclusion Russell didn’t draw from this (I think) is "So X isn’t a set". Just to be extreme, suppose that S is a set that both does contain 0 and does not contain 0. Anyone surprised that we get a contradiction from that? I guess not.

So, let’s check where S in the Santa example leads us. S is defined as "If S is true, then Santa exists". If S is to make sense, it must have a well-defined truth value: either true or false. Let’s check. Can S be true? Yes, that seems to be a logical possibility, at least when Santa exists: if S is true, then certainly Santa exists, which is exactly what S claims. But can S be false? Then the left-hand side of the implication described by S is false, which makes the implication itself true. Which means that S is true. But S was assumed to be false! So the definition of S forces S to be true, and Santa to exist!

But if we view definitions as equations, things make sense. The definition of S is really an equation, which has only one solution: "S is true". Other definitions have no solutions (like the set of all sets not containing themselves), and others might have several.
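We can make the "definition as equation" view concrete with a small sketch: treat S and "Santa exists" as boolean unknowns and enumerate all candidate solutions of the equation S = (S → Santa). (This toy check is my own illustration, not part of the original argument.)

```python
# The definition of S says: S is the sentence "S implies Santa".
# Viewed as an equation over truth values, a consistent assignment
# must satisfy S == (S -> Santa). Enumerate all four candidates.

from itertools import product

def implies(p, q):
    # Material implication: false only when p is true and q is false.
    return (not p) or q

solutions = [
    (s, santa)
    for s, santa in product([True, False], repeat=2)
    if s == implies(s, santa)
]

print(solutions)  # [(True, True)]: the only solution is S true, Santa exists
```

As the sentence claims, the equation admits exactly one solution: S is true, and Santa exists.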


Christmas is Coming

Just a few months before Christmas! But be prepared when your children start asking you whether Santa really exists or not. It’s not as easy to convince them as it once was. The solution to convincing today’s enlightened children is of course to be very rigorous. We need to prove to them that Santa really exists.

So, let’s be pretty formal, and assume that S is the sentence "If S is true, then Santa exists". That’s just a definition; nothing unusual going on. Seems that if we prove that S is true, then we’ll be done. But we’ll see. Now, the actual logical proof starts.

Suppose S is true. This is just an assumption.
By the definition of S, we can just replace S by its definition, and we get
"If S is true, then Santa exists" is true. Well, not much gained yet. Probably we’re just warming up. But we can in fact use the assumption "S is true" once more, together with that: by modus ponens, we get "Santa exists". Not bad! But this of course holds only because we assumed that S is true. So we’re not there yet. Let’s discharge the assumption and summarize what we got from it:
"If S is true, then Santa exists". OK, well, this is the same as what S itself says, and this time we derived it without any remaining assumptions. Finally something; we’ve proved S itself to be true!
But wait, if S is true, and "If S is true, then Santa exists" is also true, then obviously Santa exists. Done!
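The argument above can be written as a compact derivation sketch, assuming only the definition S := (S → Santa) and ordinary propositional logic:

```latex
\begin{align*}
&1.\ S                       && \text{assumption} \\
&2.\ S \to \mathrm{Santa}    && \text{from 1, by the definition of } S \\
&3.\ \mathrm{Santa}          && \text{modus ponens, 1 and 2} \\
&4.\ S \to \mathrm{Santa}    && \text{conditional proof, discharging assumption 1} \\
&5.\ S                       && \text{from 4, by the definition of } S \\
&6.\ \mathrm{Santa}          && \text{modus ponens, 5 and 4}
\end{align*}
```

Note that after step 4, nothing is assumed anymore: lines 4–6 stand on their own.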

So, just sit down together, the whole family, a few days before Christmas, carefully go through this proof, and you’ll have removed one uncertainty from the celebrations. You should also know that there are grownups who haven’t understood this fact yet.

This is my contribution for the people out there who still want to celebrate that old-fashioned Christmas!

(The proof is freely adapted from Boolos and Jeffrey, "Computability and Logic".)

The Common Sense Behind the ATAM

I thought I’d better say a few things about the Architecture Tradeoff Analysis Method, too. It’s really built upon common sense (for an architect, that is), even if I can’t really judge whether the building itself reaches way too far above the clouds. Is the full method overkill? For a small or even medium-sized organization, the answer is definitely yes. However, that doesn’t matter. The pieces of common sense behind it are good, and somewhat nontrivial. My idea is that anyone doing architectural work could benefit from those pieces, regardless of whether you run the actual method or not.

First, I’ll readily admit that I haven’t even understood the complete ATAM. I’ve only read and understood an old overview paper, and it seems that the method has evolved a lot since 1998, when it was written. Perhaps I’ll come back with corrections when I’ve read all about it (if I ever do). I promise to tell you only things that make sense to me, anyway! If you’re annoyed by this, just pretend that the paper has just been published! 😉

So, what is it all about? Actually, the abstract of the overview paper says a lot. Here it is:

This paper presents the Architecture Tradeoff Analysis Method (ATAM), a structured technique for understanding the tradeoffs inherent in the architectures of software intensive systems. This method was developed to provide a principled way to evaluate a software architecture’s fitness with respect to multiple competing quality attributes: modifiability, security, performance, availability, and so forth. These attributes interact—improving one often comes at the price of worsening one or more of the others—as is shown in the paper, and the method helps us to reason about architectural decisions that affect quality attribute interactions. The ATAM is a spiral model of design: one of postulating candidate architectures followed by analysis and risk mitigation, leading to refined architectures.

OK, that makes sense. If we add another server to increase availability, we increase the cost, and perhaps decrease security if we aren’t careful. Perhaps we have to co-locate lots of code in order to increase performance, thus making the architecture less modifiable. The ATAM is a method for making these tradeoffs explicit, and for getting, in a structured (iterative) way, to a software architecture that satisfies all the requirements on those properties.

It’s important to note that the ATAM itself does not include ways of assessing modifiability, performance, security and all that; sub-methods, such as the SAAM (or perhaps common sense), are used for evaluating those attributes. It’s really a "meta-method". But never mind; we’re not really interested in the formalities of the method itself now.

I suggest that we dive directly into the steps of the method; they aren’t that difficult to understand.

  1. Collect use cases that should be supported by the architecture, and requirements that the resulting system should satisfy.
  2. Construct a nice architecture based on what you got in the previous step.
  3. Analyze all the relevant properties (or attributes, as the terminology goes), such as modifiability, availability, performance and so on.
  4. If all the relevant properties are good enough, we’re done, and we can proceed to design and implementation! (But if you’re a bit curious, you could actually go on anyway, for a round.) Otherwise, we know that we need to modify the architecture in order to improve upon one or more attributes.
  5. Look at several (sensible) ways of modifying the architecture, and see how the properties of the architecture change. For example, adding a server might increase availability and cost. The properties that change significantly are noted as sensitivity points.
  6. Look at what you got in the previous step. Some of the changes you made to the architecture are likely to have affected more than one of the attributes, for example availability and cost when adding a server. Those changes (scenarios, perhaps?) are noted as tradeoff points. Those are the points where we have to be careful when changing our architecture. Perhaps the properties we have to improve upon are connected to lots of other attributes in this way?
  7. Now, we use the knowledge about the tradeoff points found in the previous step, and redesign the architecture so that we believe we’ve come closer to satisfying the requirements on the attributes. The tradeoff points simply serve as guides for us here. For example, if your company has no budget for new hardware, perhaps you have to find another way of meeting the availability requirements than adding another server.
  8. Go back to step 3.
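The loop in steps 3 through 8 can be sketched as toy code. Everything here is my own illustration: the attribute names, the scores, and the candidate changes are invented, and the ATAM itself prescribes no such scoring.

```python
# Step 3: analyze each attribute of the current architecture.
# Scores and thresholds (0-10) are made-up numbers for illustration.
attributes = {"modifiability": 7, "availability": 5, "performance": 8}
required = {"modifiability": 6, "availability": 7, "performance": 6}

# Step 4: are all requirements satisfied?
unsatisfied = [a for a in attributes if attributes[a] < required[a]]
print(unsatisfied)  # ['availability'] -- we need another iteration

# Step 5: candidate changes and how they would shift each attribute.
changes = {
    "add_server": {"availability": +2, "cost": +3},
    "inline_code": {"performance": +2, "modifiability": -2},
}

# Sensitivity points: (change, attribute) pairs with a significant effect.
sensitivity = [(c, a) for c, deltas in changes.items() for a in deltas]

# Step 6: tradeoff points are changes that affect more than one attribute.
tradeoffs = [c for c, deltas in changes.items() if len(deltas) > 1]
print(tradeoffs)  # ['add_server', 'inline_code'] -- handle these with care
```

Steps 7 and 8 would then pick among the changes, apply them, and jump back to the analysis at the top.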

OK, this really looks like common sense, all of it. Probably, if you’re an architect, this is what your brain is already doing, or at least something like it. But anyway, I think we can gain a lot from this kind of "formalized common sense". We can use it when it comes to communicating this kind of knowledge to others, and also if we want to check ourselves to make sure that we’re reasoning in a sound way (perhaps at times when you’re working a lot of overtime, and aren’t fully alert!). Sometimes our brains aren’t as accessible as we’d like them to be. 🙂

How to Find the “Right” Architecture, Part III

This is the last post in a series of posts highlighting the Software Architecture Analysis Method, SAAM. The two previous posts are here (part I) and here (part II); please read them first!

We’ve come far enough to have developed a relevant set of scenarios, and we’ve described the architecture at a level of detail that’s appropriate for the scenarios. Actually, we can have several candidate architectures at this step. Since the SAAM doesn’t give you a number like "this architecture got eight points out of ten", you should in fact compare at least two architectures to see which is better. But anyway, now it’s time to map the scenarios onto the architecture(s). How do we do that?

There are actually two kinds of scenarios, direct and indirect. The direct ones are already supported by the architecture (you probably constructed the architecture to support them), and the indirect ones are not, so they require modifications of the architecture in order to be supported. So, first, we decide which ones are of which kind.

Then, for the direct scenarios, we mark within the architecture which components and connections are used by each scenario. Now, if there are marks on lots of components and connections, we have low cohesion for that scenario. The functionality represented by the scenario is spread out over many parts of the architecture. Thus, we can now compare different architectures with respect to how well the functionality of each scenario is kept together. This mapping of direct scenarios to the architecture description should be done with all stakeholders present (see the previous post). They’ll learn a lot about the system!

But the interesting scenarios (for software architects) are the indirect scenarios. For each of those, we list the changes needed in the architecture, for example a change in a component, a change in a connection, or the addition of a new component or connection. If we’ve numbered the scenarios from one to twenty, say, we can write a scenario’s number on the architecture diagram wherever that scenario requires a modification. If you’re a bit more sophisticated, you could also factor in some kind of estimate of how difficult each modification would be, but let’s simplify. We’ll end up with an architecture description (or several) with lots of numbers on it. Can they tell us anything? Yes indeed! The "bad" thing we’re looking for is a phenomenon called "scenario interaction".

What does it mean for two scenarios (let’s just look at two) to interact? It means that the two scenarios (representing extensions to the architecture, remember?) require changes in the same component or connection. Graphically, it means that there’s at least one component or connection with two numbers on it. And why is that bad? For the same reason as tight coupling is bad: two functionalities are inherently connected; there’s no separation of concerns. In practice, it means that to accommodate both changes, two developers might have to work on the same component, on pieces of code that very much depend on each other.
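Finding interaction points mechanically is straightforward once the mapping exists. Here’s a minimal sketch; the scenario numbers and component names are invented for the example.

```python
# Each indirect scenario is mapped to the components it would change.
# Scenarios and component names below are made up for illustration.
changes_per_scenario = {
    1: {"auth", "ui"},        # e.g. "log on using Kerberos"
    2: {"ui", "settings"},    # e.g. "change the dialog font"
    3: {"storage"},
}

# Invert the mapping: component -> set of scenarios that touch it.
touched_by = {}
for scenario, components in changes_per_scenario.items():
    for comp in components:
        touched_by.setdefault(comp, set()).add(scenario)

# Interaction: a component carrying two or more scenario numbers.
interactions = {c: s for c, s in touched_by.items() if len(s) > 1}
print(interactions)  # {'ui': {1, 2}} -- scenarios 1 and 2 interact in "ui"
```

This is exactly the graphical "two numbers on one box" check, done as a dictionary inversion.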

It could also be that two scenarios interact a lot because they are very similar. That’s a very subjective criterion, of course, but we have to take that into account, too. For example the scenarios "change the password of the administrator" and "change the password of an ordinary user" are quite similar, and will probably have interaction in almost every component.

After doing that, we need to look at the amount of interaction we got, and make a (subjective!) judgment on which architecture is better. Or perhaps just on whether the single architecture that was studied is good enough. This step isn’t easy to perform, but I think it’s better to keep it subjective than to try to get some objective measure of the quality of the architecture. The method simply provides you with the relevant objective facts, which make it easier for you to decide which architecture is the better one. If you have a lot of courage, involve all the architecture’s stakeholders at this step, too.

So, finally, I hope I’ve managed to explain the SAAM and why it appears to be pretty intuitive and useful.

How to Find the “Right” Architecture, Part II

In this post, I’ll explain why coupling and cohesion aren’t good enough for judging the modifiability of an architecture, as well as explaining the first few steps in the Software Architecture Analysis Method (SAAM). It’s a sequel to a previous post, so please read that one first!

First of all, the concepts of coupling and cohesion are too abstract. You can’t stand in front of a customer arguing why your architecture is so very good because it exhibits such a nice degree of coupling and cohesion. Not even your fellow architects or developers will be convinced. Probably not even you would be convinced by such arguments!

If some concepts are too abstract, they’re probably wrong, or at least not useful. Read Joel Spolsky’s article "Don’t Let Architecture Astronauts Scare You" to understand why and how. For coupling and cohesion, there are a number of problems.

  • Coupling and cohesion take into account all connections and all functionality of the system. So, for example, if your system isn’t portable between different OSes and different hardware, a coupling measure will report too much coupling between components, a "bad thing". But perhaps you have no plan whatsoever to port your system!
  • If you want a complete picture of the coupling and cohesion of your system, you need to extract and describe each and every functionality and connection of your system, on every level of description and every level of detail. But you don’t know if all of those are relevant! Again, the picture will probably contain lots of information about things that won’t change. And the complete picture will be an incomprehensible mess of details.
  • There’s no temporal information included. For instance, one connection from a component to another could denote a call done at startup only, and another a call done every second. So, even if they are part of the same functionality, broadly speaking, they wouldn’t be part of the same use case. A change in a feature is mostly connected to a use case (or a scenario, as it is called in the SAAM’s terminology), so that’s the level you’d like to look at.

So, let’s see how the SAAM improves upon this. When you perform a SAAM evaluation, you start by developing a collection of relevant scenarios, and at the same time, you develop a description of the architecture on a level that’s appropriate for those scenarios. So, you won’t get an architecture that contains too much detail. And actually, for describing the architecture, boxes for the components, with arrows between them, for connections, will be sufficient! No UML or cryptic architectural languages are needed. The scenarios will represent actual uses (or changes) of the system, whether supported by the architecture or not. For example, a scenario can be "Log on to the system using Kerberos authentication", or "Change the font of the characters in the settings dialog box".

We’ll see soon what we do when we’ve developed our scenarios and our architecture description, and how it connects to coupling and cohesion. But first we note that the scenarios (use cases) we’ve developed will be the basis for making other people understand the modifiability of the architecture. In addition, the scenarios are good for showing how the functionality is implemented in the architecture, of course. For the "communication" part of the method, the SAAM actually prescribes that the mapping of scenarios to the architecture, which is the next step of the process, should be done in front of all stakeholders of the architecture (that is, those having interests in the architecture, such as developers, users, architects, system administrators, and sales people).

To summarize SAAM so far:

  1. Develop scenarios and describe architecture
  2. Map scenarios onto architecture description (not yet described)

When we look at how the scenarios are mapped onto the architecture description, we’ll see how it generalizes the concepts of coupling and cohesion. But that’ll be in the next part of the series!

How to Find the “Right” Architecture, Part I

How do you know that you’ve found the "right" architecture for your system? In this and the following few posts, I’ll try to shed some light on this topic, because it’s a question that seems to be pretty common among software architects. Then, when you believe that you’ve found the right architecture, how do you convince your colleagues of that? Since skilled software architects usually rely a lot on their well-developed intuition for what a "good" architecture looks like, it’s not easy to find objective arguments when someone’s got a different opinion.

Of course I’ve got a great answer to that question. Just use the Architecture Tradeoff Analysis Method (ATAM). It’s a process published by SEI, so it’s good, by definition. All SEI processes are good, right? CMM and all that, you know. So just stop reading now, you’ve got a good answer to the question. Thanks for listening. Bye.

Still reading? You’re right, the answer isn’t that simple, after all. Well, the ATAM in fact makes sense, even if it’s very heavy-weight. The underlying ideas are sound, so there are many pieces of wisdom to extract from it. But since you’re still reading (hopefully we’ve lost the "process guys" now!),  you would probably not introduce even small parts of a full-blown process like this without understanding exactly what you would gain from it in your current situation. OK, so what’s my point, then?

My point is that there’s a relatively simple, common-sense (well, for a software architect) part of the method that can be used to answer whether an architecture is modifiable or not, and which also is very useful for communicating the properties of the architecture to other people. You can finally prove that you’re right! 😉 Also, I’d say it’s easier to get to an objective analysis of properties like availability, performance and security (lots of papers have been written on those!) than of modifiability, so we’ll skip those for now.

So, I’d like to share my understanding of that important part of the ATAM, called the Software Architecture Analysis Method (SAAM); it is a simple method used for analyzing the modifiability of a software architecture. If a software architecture isn’t modifiable, it’s not much of an architecture, right? A system always changes after its architecture has been established, during development or later in its life cycle, so this is indeed a fundamental property to analyze. We’ll see that the SAAM indeed answers a large part of my initial question, at least with respect to the modifiability of an architecture.

So, enough talking. You want to know when your architecture is modifiable, and you want to be able to convince your colleagues, and perhaps even your customers, about that. How do you solve this problem?

First, I’d like to connect to two concepts about software design that most people who’ve been into software engineering know: coupling and cohesion. High coupling means that we’ve got lots of connections between our components, and high cohesion means that related functionality is kept together within the system. High coupling means that a change in one component is likely to require a change in another component, and low cohesion means that most "features" of the system will be spread out over several components, so that a change in a feature will affect several components. So, in general, low coupling and high cohesion are "good" for modifiability.
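As a toy illustration of these two measures: count the connections for coupling, and count how many components each feature is spread over for cohesion. The components, connections, and features below are all invented for the example; real measures are of course more refined.

```python
# Invented example system: three components and their connections.
connections = {("ui", "logic"), ("logic", "db"), ("ui", "db")}

# Coupling: the number of inter-component connections.
coupling = len(connections)

# Cohesion (per feature): how many components a feature is spread over.
# A feature confined to one component is cohesive; a spread-out one is not.
features = {"search": {"ui", "logic", "db"}, "login": {"logic"}}
spread = {f: len(comps) for f, comps in features.items()}

print(coupling)  # 3 connections in total
print(spread)    # {'search': 3, 'login': 1}: "search" has low cohesion
```

Changing "search" here would touch three components; changing "login" only one. That’s the intuition the SAAM builds on.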

In the next post, I’ll show why coupling and cohesion aren’t good enough for analyzing a software architecture, and how the SAAM is a generalization of those ideas, while being much simpler to understand. More abstract and more concrete at the same time. Can you believe that? 🙂