EAI is not just for technical people; it affects everyone.
This article is aimed at senior executives with enough interest in Enterprise Application Integration (EAI), and a sufficient grasp of the very basics, to gain a workable understanding of why things are the way they are and how they can be fixed.
It is not an article for IT pros, though occasionally a little extra is added to accommodate those who may be lurking; skipping the odd technical passage should not prevent you from gaining a useful insight. After all, if you decided to read this you are either in deeper trouble than you know, or you do have some understanding.
So what is wrong with how we do things now?
I will answer this at three levels, to help those who want clarity and to accommodate those in a hurry who know what they need.
At a strategic level
Information Technology has hardly been around long enough to be strategic. By its nature it attracts three types:
- Those who pretend to be followers, but really want someone else to handle it, much as we might farm out the legal work. These are the Hiders.
- Those who want to play with it and are not concerned with what it actually delivers in terms of benefit. These are the Players.
- A few souls who are determined to make it pay by focusing on benefits and understanding the potential. These are the Entrepreneurs.
Probably 80% of all the C-levels I have met fall into the category of Hiders. A nice silo in the next building with the very latest ticketing system, and that's that covered. Actually no: that is ignoring a core part of your business, and you can see why if you read on to the next section.
At least 90% of all the IT directors and CIOs I have ever met were Players. There's nothing wrong with enjoying your work, in fact it's important, but we go to work to deliver value as part of an overall strategy and plan, not to amuse ourselves, so the onus is on the CEO to drive this agenda and engage his or her key people in the strategy.
Business strategy as a driver for architecture strategy
Before engaging in enterprise architecture, it is critical to be in possession of the very latest business strategy. If none is available, or it is somewhat stale, then you must start out by engaging leaders in a process of thrashing out a high-level strategy sufficient to inform your architectural designs over the one-to-five and five-to-ten-year periods.
Remember, strategy is not a detailed plan; it is by its nature high level, and it is easier than you may think once you get people around a table.
Exercises like SWOT and PEST are useful tools for surfacing key inputs. Focus on constraint-based modelling: the space in which you are free to operate successfully usually dictates the strategy options you have.
At a process level
Software solutions are tools that make processes more efficient and more accountable. Processes are the main tool by which every business creates value, and when they are intelligently optimised and, as far as possible, automated, they create a lot of extra value.
At all other times software solutions are a barrier preventing change, soaking up money and effort and, worst of all, creating damaging illusions: people “working the system” instead of delivering, e.g. the person too busy fiddling with the CRM to speak to a customer.
If you are not interested in the processes your business adopts and follows, you are not in charge and you need to re-evaluate.
At a technology level
First we had a system (eureka!); then we had two systems, and they didn't talk to each other.
At first we thought we could develop our own systems around a single database, and all would be well since everything shared that database. A few SMEs managed this at huge expense before admitting defeat, mainly to the cost of bespoke development.
Big software vendors then saw an opportunity to tie us all in, and they went on a trail that has ended up in ERP (all things to every business). Because these suites are all of one piece, they do talk to each other internally, but woe betide he who wants their ERP to talk to something else. That was never the plan, was it? You can have a SAP business or an Oracle business just like your competitor, but you dare not try to differentiate.
Then of course modern businesses like to make acquisitions and disposals and this drives a need to be able to fairly quickly connect an external business or cut it loose again as market forces dictate.
Increasingly, the success of outsourcing relies heavily on integrating processes at just the right level, and doing it fairly quickly, securely and without prohibitive expense.
Technology silos create process and culture silos, and the one who usually feels this most sharply is the customer. Silos are never ideal, but they can be accommodated in a federated business architecture and integrated more closely over time; this latter point is very important to an acquisitive business.
Technology must be driven by principles that focus on creating value now and in the future: it must deal well with the now while keeping one eye firmly on the future. It should be driven by the whole management team as a key strategic asset, with a workable interface between technical specialists and business process specialists. Nothing less will deliver.
The problem with this is that it leaves the strategist nowhere to hide. If the strategy is to go east and the rocket is built then it can’t simply be turned around to face the other way. This is not a weakness in technology but a fact of life. Building a new office block or fitting out a manufacturing plant is no less prone to the impacts of change, but nobody expects to pick up their building and move it a few blocks while rotating it.
Businesses that do well with technology have very strong strategic direction and operate in mature markets where a large portion of their activity is stable and fairly slow to change, or their business is software. The best ones have the ability to shield critical line-of-business technology assets from a more innovative and risky portion that is contained and risk-managed. All businesses that win at technology have realistic expectations and an acceptance that technology is a huge core part of their business, leading to more fruitful relationships with technology experts and suppliers.
Apart from strategic planning and attention to relationships with IT, there is an awful lot that can be achieved by IT through investment in Enterprise architecture capability.
Engaging architects to own and drive the overall vision while maintaining the maximum flexibility to integrate new systems and replace underperforming systems is a key strategy in any IT estate.
What options do we have?
Strategically, we must not get side-tracked by people claiming that the world has gone agile and it is OK to do everything at the last minute. Strategic planning is just as important as it always was, and the less change we drive through about-turns and poor planning, the better our results will be.
As in all things, communication is the lubricant, the glue and sometimes even the fuel. Unless you develop the capability to analyse and define problems accurately and to evolve solutions innovatively, all the technology in the world will create nothing but cost.
Business architecture is the domain of enterprise architects and business analysts. The work they do makes sure that the processes and teams you put in place are a very good solution to the problem, and that when systems are released to automate and support those processes, you get maximum value from your investment. If you are in the world where vendors set your strategy and geeks select your systems, then you probably view systems delivery as purchasing systems and implementing them. This is the lowest level of maturity, and you need to seek help with moving your capability along as fast as you can manage.
Segregating our architecture according to its propensity for change gives us a chance to maintain a large portion of it in a very stable state with little change affecting it, maintain a layer of stable but more changeable architecture, and finally a small layer of highly agile architecture that is designed to be changed rapidly without negative impacts on the remainder. This latter is the key.
More than 60% of IT spend goes on dealing with the impacts of relatively small changes on a monolithic architecture.
Below is a representation of a well-evolved architecture principle that enjoys security and stability while allowing agility and innovation in a safe environment.
A typical industry architecture might include CRM and ERP systems that shoulder the brunt of the heavy lifting, while the organisational layers can include focused solutions for specific problems. Mashups developed from REST and SOAP APIs are a great example of organisational architecture that is fast and cheap to develop and creates added value without introducing unnecessary risk.
Only through maintaining an architecture capability can you hope to achieve this level of robustness and freedom.
Separate systems can potentially be integrated in three ways:
User Interface level
Portals and Mashups
Imagine the unfortunate person, and there are many of them, whose work forces them to log in and out of many systems continuously all day, pen and pad ready to grab IDs and reference numbers before searching the next system and gradually solving the puzzle. I have sat many times with people like this and I have the utmost sympathy. Sometimes they say, “just get me a single log-in point and you will change my life”.
Sometimes a simple portal, where links to all the systems sit on a single page that remembers credentials and logs the user in automatically, is a stellar starting point when there is neither the will nor the finance to do better. A clever web developer can sometimes achieve this for a modest investment, and it does deliver value.
One step further is to access APIs, or scrape web pages or even Windows screens in the background, to get at the data you need and present the user a single interface that quietly deals with all the others behind the scenes. This is more complex and more difficult to maintain, but sometimes it can fill a gap, and it can also be valuable in proving the value proposition before investing in something more stable.
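As a sketch of the idea, the background aggregation might look like this in Python. Everything here is invented for illustration: the markup, the reference format and the stubbed stand-in for a CRM API (in reality the HTML would come from an HTTP request to the legacy system).

```python
import re

# A fragment of a hypothetical legacy screen, as scraped from the system.
LEGACY_SCREEN_HTML = """
<html><body>
  <div class="order">Order ref: <b>ORD-1042</b></div>
</body></html>
"""

def scrape_order_ref(html: str) -> str:
    """Pull the order reference out of the legacy screen's markup."""
    match = re.search(r"Order ref: <b>([A-Z]+-\d+)</b>", html)
    if not match:
        raise ValueError("reference not found on screen")
    return match.group(1)

def crm_lookup(order_ref: str) -> dict:
    """Stand-in for a real CRM API call (a dict instead of a network hop)."""
    fake_crm = {"ORD-1042": {"customer": "Acme Ltd", "status": "shipped"}}
    return fake_crm[order_ref]

def unified_view(html: str) -> dict:
    """What the user sees: one record, quietly sourced from two systems."""
    ref = scrape_order_ref(html)
    return {"order_ref": ref, **crm_lookup(ref)}

print(unified_view(LEGACY_SCREEN_HTML))
# {'order_ref': 'ORD-1042', 'customer': 'Acme Ltd', 'status': 'shipped'}
```

The fragility is plain to see: the moment the legacy screen's layout changes, the regular expression stops matching, which is why this approach is best treated as a stop-gap or a proof of value.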
API level
Most major commercial systems built since the mid-90s have an API of some sort, using COM, CORBA, web services or some other form of communication to offer access to basic processes in a safe and reasonably painless way. Some more recent systems provide fairly complex REST/SOAP web service interfaces. Of course, there is still work to be done to get around firewalls, maintain security and, of course, write and maintain the access code; and as things change, previously implemented code must be moved and changed.
The advantage of the API route is that the business logic protecting a user interface will normally also be present behind the public interface, protecting the integrity of the system from broken updates and inserts. On the downside, these connections to APIs are by nature, though not by necessity, synchronous calls that gradually queue up a time bomb of delays that will eventually sink your architecture and bring it gradually to a standstill.
What do I mean by this?
Well, each time a system calls another, except in special circumstances, the caller must wait for the callee to respond before that thread can do any other work. Typically the operating system will have allocated memory and a process thread to the caller, and these resources are now out of commission while it waits for the callee to respond. What if the callee is likewise waiting for another system to respond? Not only is this a sure way to cripple your architecture, it is nigh on impossible to track down and fix once it has been allowed to get out of control.
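A minimal illustration of the effect, with three hypothetical services simulated by sleeps: each does 100 ms of its own work, but the caller's thread is held for the whole chain.

```python
import time

def billing():                 # leaf system: no further calls
    time.sleep(0.1)            # 100 ms of its own work
    return "invoice"

def orders():                  # blocked until billing responds
    time.sleep(0.1)
    return billing()

def storefront():              # blocked until orders (and billing) respond
    time.sleep(0.1)
    return orders()

start = time.monotonic()
result = storefront()
elapsed = time.monotonic() - start

# The calling thread was tied up for roughly 0.3 s, not 0.1 s: every hop in
# the chain adds its latency to the caller at the front.
print(result, round(elapsed, 1))
```

Add a fourth system to the chain and the front-end caller waits longer still; close the circle, so that the last system calls the first, and nothing ever returns.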
Database level
When all else fails, you have the option of integrating at database level. What this means is that:
- You must identify precisely the data you will need from a system, then work out exactly how to access it and write a query to get the data and present it to your target system.
- Transform some or all of the data so that it will fit in the target system
- Write a query that places the data into the Target system in such a way that it is usable and it does not in any way damage the integrity of the target system.
- Often you need to execute these two queries as a single transaction, with the ability to roll back if an error occurs
- Test all of that and assure everybody that there won't be extra zeros on any accounts, or thousands of lost customers, or any other horror stories.
The first step is often enough to break the strongest will. The Oracle database behind an ERP is a good example of one with no semantic naming conventions: you have to look up which numbered table, combined with which other numbered tables, holds the data you need.
The final step will be like the first in reverse, only this time you have to work out a bomb-proof test strategy so that you don't discover the error after you have leaked millions of pounds.
Modern operational databases are relational, meaning that data is normalised into very atomic classifications that drive the table schema. Adding a small piece of data often requires updating several tables in a particular order, so as not to create orphaned data and break the integrity of the target database.
The API method we discussed previously has already handled all this complexity for you so you can surely see the attraction. The problem is that APIs almost never provide sufficient access and often they simply don’t exist.
The nitty gritty of integrating systems safely
Now that I have got your feet wet, let’s have a look at the problem from a logical and conceptual viewpoint.
Let’s say we have 20 separate large systems (a smallish enterprise) that overlap considerably in their business capability and are used by different departments, or even different companies in the group. It is reasonable to assume that all are closed systems with proprietary databases, that about 50% of our integration needs can be served by existing APIs, and that the rest we will have to solve with various connectors and endpoints.
It is also reasonable to assume that, being closed systems, they will use different semantics and different data structures; depending on the database behind them and the development language, apparently similar data types can have different sizes and characteristics.
E.g. we have an SAP ERP system that stores FirstName, Initial and LastName, and a CRM that stores the name as a single string with a space in the middle.
We now have different names for things and different structures as well. This will be repeated many times over between the 20 systems.
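A sketch of the pair of translators such systems would need, with the field names assumed for illustration. Note that the CRM-to-ERP direction is lossy: a single string cannot reliably tell you where the initial ends and the surname begins.

```python
def erp_to_crm(record: dict) -> dict:
    """ERP-style {FirstName, Initial, LastName} -> CRM single 'name' string."""
    parts = [record["FirstName"], record.get("Initial", ""), record["LastName"]]
    return {"name": " ".join(p for p in parts if p)}   # skip an empty initial

def crm_to_erp(record: dict) -> dict:
    """CRM single 'name' string -> ERP split fields (initial assumed absent)."""
    first, last = record["name"].split(" ", 1)
    return {"FirstName": first, "Initial": "", "LastName": last}

print(erp_to_crm({"FirstName": "Ada", "Initial": "K", "LastName": "Jones"}))
# {'name': 'Ada K Jones'}
print(crm_to_erp({"name": "Ada Jones"}))
# {'FirstName': 'Ada', 'Initial': '', 'LastName': 'Jones'}
```

Two small functions for one field of one entity between two systems; multiply that by every field, every entity and every pair of systems and the scale of the translation problem becomes clear.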
Let’s suppose we need all 20 systems swapping customer and product information.
The number of point-to-point connections would be n × (n − 1), or 380 connections for 20 systems.
Each system would need the ability to translate between its own names and structures and 19 other vocabularies. That’s an awful lot of programming.
In the architecture world, this is what we call the hairball effect. Take those 380 connections and 19 vocabularies, apply Moore's law, and you are still nowhere near grasping the extent of the problem, but it should be sufficient to open your eyes.
To make this whole endeavour manageable, we must solve the biggest problems first and reuse these solutions.
Components of an integration project
- Translation between vocabularies can still cripple teams, let alone systems, so we must deal with this one first.
- Channels need to translate between transport protocols too
- Stacked-up synchronous calls between systems, each of which is waiting for the previous one, are crippling, and invariably the circle gets completed, resulting in meltdown. We need an elegant, reusable solution to this problem. Publish and subscribe is one example of an integration pattern that solves it while being easy on resources.
- With the core infrastructure in place to tackle integration, we can then seek to reuse connectors from the marketplace for the better-known systems, and thus reduce not only the effort and duration but also the levels of risk.
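As an illustration of the publish-and-subscribe pattern mentioned above, here is a minimal in-process broker sketch. A real ESB does this across the network with persistence and guaranteed delivery, but the shape is the same: each subscriber gets its own queue, publishers never wait on consumers, and one message fans out to every interested system.

```python
from collections import defaultdict
from queue import Queue

class Broker:
    """Toy publish/subscribe broker: topic name -> list of subscriber queues."""

    def __init__(self):
        self.topics = defaultdict(list)

    def subscribe(self, topic: str) -> Queue:
        q = Queue()                        # each subscriber reads at its own pace
        self.topics[topic].append(q)
        return q

    def publish(self, topic: str, message: dict) -> None:
        for q in self.topics[topic]:       # fan-out: a copy for every subscriber
            q.put(message)                  # returns immediately; no blocking chain

broker = Broker()
billing_inbox = broker.subscribe("new_order")
warehouse_inbox = broker.subscribe("new_order")

broker.publish("new_order", {"order_ref": "ORD-1042", "qty": 3})

print(billing_inbox.get())     # {'order_ref': 'ORD-1042', 'qty': 3}
print(warehouse_inbox.get())   # {'order_ref': 'ORD-1042', 'qty': 3}
```

The publisher neither knows nor cares how many systems are listening, so adding a twenty-first system means one new subscription, not nineteen new connections.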
Translation costs can be reduced dramatically by using a Canonical model as a midway step.
Instead of needing to understand 19 languages, each of our systems would only need to translate to and from the canonical language.
That reduces the previous estimate of 380 translations to just 40. Not only is this a huge saving, it avoids the folly of trying to coerce other systems and teams into a shared vocabulary.
What we need is a simple, protocol-friendly transport system that accepts any protocol in and delivers any protocol out, so that, when combined with translators, all systems in the architecture can converse freely.
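The arithmetic is easy to check: every system talking directly to every other needs n × (n − 1) connections, while the canonical model needs just one translator in and one out per system.

```python
n = 20                         # systems in the estate

point_to_point = n * (n - 1)   # every system talks directly to every other
canonical = 2 * n              # each system translates to and from the canonical model

print(point_to_point, canonical)   # 380 40
```

The gap widens quadratically: at 40 systems, the point-to-point figure is 1,560 against just 80 canonical translations.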
Getting messages to systems at or before the time they are needed is the key to good integration.
Think of an email server for a moment. I can send my messages at any time that is convenient, and my colleagues can collect and read those messages when they need to. When we relied on telephones, we had to be at our desk when the phone rang or we missed the boat. Email is an asynchronous method of delivering messages: it has queues where your messages can wait for collection or wait to be sent, and it can be configured to keep sending until the message is received and read, i.e. guaranteed delivery. (I have in fact used an email server to integrate ships at sea in just this way.)
To make a good integration system we need just a few more bells and whistles, to help poorly equipped systems access their messages or, in some cases, to push the message right into their stores. Sometimes all we need do is inform all and sundry that a new order has arrived, and any system that needs the data can come and get it. This is called multicasting, and it is very efficient.
Putting it all together
The Enterprise Service Bus is a design pattern that has been latched onto by vendors offering a custom collection of tools to solve known integration problems. A commercial ESB will normally contain all the components you need and many optional extras. It should also include or make available connectors for the major popular systems out there such as ERP and SRM systems and many others.
The latest breed of EAI solution is the cloud-based ESB. The benefit is that no maintenance people need be retained for this very specialised field, and if it is a true cloud solution, there should be unlimited and effortless scaling available. Although annual costs can appear high, most vendors will negotiate, and TCO calculations should bear in mind the cost of maintaining an on-premise solution.
It is probably worth checking whether you are being offered a true cloud solution or a virtualised on premise solution on the cloud.
Steps to designing the optimum integrated enterprise architecture
- Understanding the problem at a strategic enterprise level and defining the value proposition.
Similar to requirements gathering in a solutions scenario, we must collate the problems to be solved and define them with sufficient accuracy to feed into accurate solution architecture.
- Addressing the viewpoints of stakeholders, resolving them against the strategic concerns previously raised, and identifying the concerns to be addressed and their ideal priority.
The key difference from requirements engineering is that each concern or viewpoint must be handled individually, with a business case made at a high level, so as to avoid investing a lot of time in problems that are not worth solving or are simply too tricky. EAI stakeholders tend to be 90% strategic and 10% end user, a perfect inverse of solution requirements.
- Understanding and mapping the processes to be supported and improved.
Until you have the processes mapped, you simply can't decide what data will be needed at what point, and any integration you design without the guidance of process is likely to be hit and miss at best.
- Identifying the business events that kick off the processes you are supporting.
Business events are a term used a lot in business analysis too. They are, in effect, the things that happen in a business that drive the need for a particular process to begin. Understanding events and how they are linked together is critical to understanding the strains on a system and the order in which things occur. Without this appreciation your timing can be hit and miss.
- Identifying the data needed to support the processes.
Out of events and processes come the first steps in identifying the actual messages that will be needed, and the metadata those messages will need in order to be transported and processed. E.g. a new order in a store may be the event that drives a purchase process, which in turn creates a customer message. The customer message needs certain metadata in order to be allowed past security and to be linked to the account and product data in the target system. There are few shortcuts other than following the events and processes and analysing the messages required.
- Understanding the lifecycle of the data. Knowing where a data entity is created, and through what states it is passed and processed, helps one to understand the best source of data at any time and to put best-in-class governance in place. Defining a canonical meta-model to support translations between the many stores and systems is vital to contain the number of translators required.
- Identifying the performance required of the many integration flows and the volumes of messaging.
This requires a rough estimation of likely usage, sanity-checked against the available resources. For safety it should then be load tested to discover at what point performance drops to a floor level, and plans put in place for maintenance and review.
- Defining a conceptual model of the architecture to support selection, procurement and technical architecture of the solution.
Identifying the key EAI patterns needed to support the architecture helps the architect to select the right solutions and avoid expensive bloatware.
- Selecting the technology solutions and procuring them.
Preparing the procurement documentation, inviting vendors to participate, carefully interrogating them and spotting the weaknesses in advance is a key aspect of getting it right.
- Planning and risk managing the implementation.
Planning the implementation, whether carried out entirely by the vendor, by your own team or by a combination, is always tricky. Identifying with great accuracy who is accountable for what, and having clear acceptance criteria and reliable testing and QA procedures, is absolutely key to success.
- Producing a test strategy that is fit for purpose.
A detailed test strategy should cover all the likely test cases with special emphasis on the intended use and immediate usage cases and act as a certification that the architecture has met expectations both functional and non-functional.
- Defining the ongoing maintenance procedures, capabilities, OLAs, SLAs, roles and responsibilities for the infrastructure and data governance concerns.
With the results of testing at hand, a review and maintenance plan should be put in place to ensure that performance and security are maintained at a high level and contractual agreements are met.
- Creating a benefit realisation plan combining business change and benefit measurement strategies to capture the benefits and to measure and understand them.
Returning to the mini business cases for significant integrations, and to the case for the infrastructure, it is now imperative to establish KPIs that can reasonably be used to measure and report on how well the project has performed as an investment. Not every business case must be proven in financial terms, but even the most intangible of benefits can be measured where there is a will.