
The identity bridge – the extended value of single sign-on

05 Oct 2012

There is nothing new about single sign-on (SSO) systems; they have been on the market for many years as a way of providing a single point of authentication for users before granting them access to IT resources. What is new is the increasing capability of SSO systems to better manage the changing way applications are being deployed and accessed.
 
Here are some examples:
 
1. The rise and rise of software as a service (SaaS): the availability of on-demand applications is a boon to businesses, as it saves running infrastructure in-house, leaving that to external experts. There is a downside; having given an employee access to several online resources, when they leave you need to remember to de-provision them from each. However, if access is only via an SSO system, the user does not even need to know the access credentials for each system. Each new user, whether temporary or permanent, internal or external, can be quickly provisioned and de-provisioned according to profiles and rules understood by the SSO system. The traditional SSO vendors are changing their products to better support SaaS, for example CA SiteMinder. For specialist vendors such as Ping Identity, Okta and Symplified (the partner behind Symantec’s O3 initiative) this is a fundamental feature of their products.

2. The integration of external users and organisations: the degree to which external users are directly provided with access to a given business’s internal IT resources is increasing rapidly. Doing so enables more integrated and efficient business processes and supply chains. Examples include car dealerships linking into a manufacturer’s ordering systems and travel agents linking their customers to various travel resources such as airlines, hotels and car hire companies. Achieving this is eased if the SSO system can access and dynamically integrate a range of user directories, a capability that is integral to products such as Ping Federate.

3. The rise of bring-your-own-device (BYOD): even businesses that don’t really like the idea are accepting that the BYOD trend cannot be ignored and has to be managed somehow. One of the dangers with BYOD is that if employees access a range of different corporate resources, both internally provisioned and SaaS-based, all with different usernames and passwords, some of those credentials will end up being remembered and stored locally on the device. This is a danger should the device fall into the wrong hands or when the organisation’s relationship with the user ends. Limiting access from personal devices to a single SSO entry point minimises the problem; indeed, the device itself can form part of the strong authentication of the user to the SSO system. Policies built into the SSO system can also limit what a user has access to depending on the type of device and their physical location (see the sketch after this list).

4. The desire of employees to use consumer-based web resources at work: businesses have been putting controls around which web resources employees can access via corporate networks for many years. Increasingly, such rules and policies can be built into SSO systems, in effect merging in the web and URL filtering capabilities that have in the past been provided by specialist content filtering vendors. Some SSO vendors, such as the UK start-up SaaS-ID, have taken this to a new level by actually enabling their customers to change the appearance of third-party websites and limit the options that are made available.
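To make the device- and location-aware policies mentioned in point 3 a little more concrete, here is a minimal sketch of how such a rule set might be evaluated. The attribute names, applications and rules are entirely hypothetical and are not taken from any vendor’s actual product.

# Minimal sketch of an SSO access policy that considers device type and
# location as well as the user's roles. All names and rules are invented.
MANAGED_DEVICES = {"corporate_laptop", "corporate_phone"}
TRUSTED_COUNTRIES = {"GB", "US"}

def allowed_apps(user_roles, device_type, country):
    """Return the set of applications this session may reach."""
    apps = set()
    if "employee" in user_roles:
        apps |= {"email", "intranet"}
    if "finance" in user_roles:
        apps |= {"erp"}
    # Personal (BYOD) devices, or access from untrusted locations,
    # get a reduced application set.
    if device_type not in MANAGED_DEVICES or country not in TRUSTED_COUNTRIES:
        apps -= {"erp"}
    return apps

print(allowed_apps({"employee", "finance"}, "personal_tablet", "FR"))
# personal device abroad: only email and intranet remain; the ERP system is withheld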
 
It is clear that SSO systems have evolved way beyond the early use case of saving employees from remembering a range of passwords. One of the downsides pointed to by the detractors of SSO is that it provides a single set of keys to the castle. However, linked with strong authentication, this should not be an issue; it should instead increase security, especially with the rise of BYOD.
 
Another criticism has been the complexity of deployment, but this has decreased with the rise of standards such as LDAP (lightweight directory access protocol), SAML (security assertion mark-up language) and SCIM (originally simple cloud identity management), and with the increased sophistication and use of many current SSO systems.
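As an illustration of what the SCIM part of this looks like in practice, here is a hedged sketch of provisioning and de-provisioning a user over SCIM’s REST interface. The endpoint URL, token and user id are placeholders, and the schema URI shown is the SCIM 1.x core user schema; exact details depend on the SCIM version the SSO service supports.

# Sketch of provisioning and de-provisioning a user via a SCIM REST API.
# The base URL, token and user id are placeholders, not a real service.
import requests

BASE = "https://sso.example.com/scim/v1"
HEADERS = {"Authorization": "Bearer <token>",
           "Content-Type": "application/json"}

def provision(user_name, email):
    payload = {
        "schemas": ["urn:scim:schemas:core:1.0"],   # SCIM 1.x core user schema
        "userName": user_name,
        "emails": [{"value": email, "primary": True}],
        "active": True,
    }
    return requests.post(BASE + "/Users", json=payload, headers=HEADERS)

def deprovision(user_id):
    # Many services treat deactivation (setting active to False) as
    # de-provisioning; a hard DELETE is shown here for brevity.
    return requests.delete(BASE + "/Users/" + user_id, headers=HEADERS)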
 
A third criticism that could be levelled at all the above use cases is that an SSO system becomes a single point of failure. But this is true of any network device that is used to provide user access to applications, and resilience can be built into SSO just as with any other system. Furthermore, for ease of access and to open up SSO to smaller organisations, SSO itself is now available as a SaaS-based resource, for example Ping One and SaaS-ID.
 
For those organisations that have looked at SSO in the past and rejected it, perhaps now is the time to take another look. The sophistication of the new offerings that have come to market in the last few years helps address a broad range of problems and provides a secure, policy-based identity bridge between users and the resources they need access to.
 
Quocirca’s report “The identity perimeter” is freely available here.
 
Bob Tarzey, Analyst and Director, Quocirca

Can IBM find the right homes for its Watson?

17 Oct 2012

At a recent event, IBM’s Watson was an area for discussion. Watson sprang to fame in the US by beating two humans in a TV game show, Jeopardy! This led to a lot of coverage for IBM – but also to a lot of problems.
Firstly, those who know of the win see Watson as akin to when IBM’s Deep Blue computer first beat a human chess master – a great achievement, but it is only a game, after all. Bridging the gap between what Watson did on TV and what it could do for an organisation is proving a little difficult.

Secondly, outside of the US, Jeopardy! is not so well known, and IBM’s marketing around it has often led to a need to explain the programme before moving on to what Watson could possibly do in a real-world scenario. The feeling is that using Watson in, for example, Mastermind or University Challenge would run up against the same issue as seen in the US – a great games machine, but so what? On top of that, would IBM then have to enter similar competitive programmes in Germany, France and everywhere else? A knotty problem.

So, just what could Watson really be used for? It seems that many of the discussions IBM gets into with prospects boil down to a perception that it is a pumped-up search engine – which misses the point. Watson is the current exemplar of how to deal with real big data issues – it works against documents as well as formal data sets; it is excellent at picking out key aspects of the information thrown at it; and it is getting better at assigning probabilities to how accurate its results are, based on how the user feeds it the questions and supporting data it requires.

Watson really is a fast-working, highly knowledgeable assistant to humans who need to work against massive and complex data sets in order to get to the best possible response. Sounds a bit woolly? Unfortunately, it’s a bit difficult to tie things down much more than that.

However, Watson is moving out from the Jeopardy! cloud in some areas. For example, it is being used in healthcare – from helping to reach a more accurate diagnosis for a patient to helping to calculate the correct approach to continuing care for a health insurance company.  The idea is NOT to replace the health professional, but to provide a dispassionate assistant that can use real-time information analysis to drill down and provide a range of options back to the professional.

This requires a degree of focus from Watson, and it is showing its best capabilities where its assistance can be narrowed down considerably. At the moment, Watson is not there to aid a general practitioner: it is there to help a specialist – for example, an oncologist can use Watson not only to help decide exactly what cancer a person has, but also to advise on which treatment is most likely to have the best outcome. Even here, Watson works best if it can be further constrained – for example, to concentrate on leukaemia or lymph node cancers. The greater the focus, the more accurate the results.

Does this make Watson too specific for general use elsewhere? Certainly, as an on-premise solution, it is only for very large organisations with a highly specific need. Pharmaceutical companies with research needs in a specific area, where external information from other sources is rapidly changing, are one possible target. Organisations with large patent libraries, where “prior art” is important or where searching out possible beneficial overlaps matters, are another.

But how about WaaS (Watson as a service)? Is Watson’s specificity a major block to a capability here? Look at the legal sector – many organisations are small and could not warrant investing in an on-premise Watson implementation, yet in an area such as case law, an assistant that can sift through legal precedent and other information sources and advise on approach would provide immense help to paralegals, lawyers and barristers (and their equivalents worldwide).

There is a great deal of promise from Watson – and by its very nature, anything that is done in the real world at this stage is not going to be wasted. As Watson improves, it can take the silos of information from early-stage Watson implementations as feeds, using a federated approach to build up a networked “Super Watson”.

IBM may still be some way away from what the original researchers set as a vision – to replicate a computer system as seen in Star Trek. However, the steps along that journey are well under way, and the healthcare examples of real-world Watson usage are already showing the strength of the system. Is this Watson’s only spiritual home? It shouldn’t be – but IBM has to message more succinctly and effectively how Watson is different to search engines, business intelligence and business analytics in order to get its point across.

Clive Longbottom, Service Director, Business Process Analysis

Cleaning out the data pipes

22 Oct 2012

Parkinson’s Law states that work expands so as to fill the time available. Something similar could be said about network bandwidth; left unchecked, the volume of data will always increase to consume what is available. In other words, continually increasing network bandwidth should never be the only approach to network capacity provision; however much is available it still needs to be used intelligently.

There are three basic ways to address overall traffic volume:

- Cut out unwanted data
- Minimise volumes of the kind of data you do want
- Make use of bandwidth at all times (akin to peak and off-peak power supply)

There are two types of unwanted data. First, there are the legitimate users who are doing stuff they really should not be doing. From the network perspective, this really only becomes a problem when that stuff consumes large amounts of bandwidth, such as watching video or downloading games, films and music. A mix of policy and technology can be deployed to keep users focused on their day jobs and thus make productive use of bandwidth.

The technology available includes web content and URL filtering systems from vendors such as Blue Coat, Websense and Cisco, and the filtering or blocking of network application traffic with technology from certain firewall vendors, including Palo Alto Networks and Check Point. In both cases care must be taken to avoid false positives that end up blocking legitimate use.
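As a toy illustration of the false-positive point, the sketch below applies category-based blocking with an explicit allowlist so that a work-related site is not caught by a blanket rule. The categories, hostnames and rules are all invented for the example.

# Toy URL/category filtering with an allowlist to limit false positives.
# The category data, hostnames and rules are invented for illustration.
BLOCKED_CATEGORIES = {"streaming-video", "games"}
ALLOWLIST = {"training.example.com"}             # known-good business use

CATEGORY_DB = {                                  # stand-in for a vendor category feed
    "videos.example.net": "streaming-video",
    "training.example.com": "streaming-video",   # video, but work-related
    "news.example.org": "news",
}

def is_blocked(host):
    if host in ALLOWLIST:
        return False                             # avoid blocking legitimate use
    return CATEGORY_DB.get(host) in BLOCKED_CATEGORIES

for host in CATEGORY_DB:
    print(host, "blocked" if is_blocked(host) else "allowed")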

The second source of unwanted data is external and insidious: cybercrime and hacktivism. At one level this means pre-filtering of network traffic to keep spam email and the like at bay, especially as spammers have started exploiting increased bandwidth to send rich media messages. Most organisations now have such filtering in place using services such as Symantec’s MessageLabs or Mimecast’s email security.

Perhaps more serious is the need to avoid becoming the target of a denial of service (DoS) attack. Generally speaking, these are aimed at taking servers out, but one type, the distributed DoS (DDoS) attack, does so by flooding servers with network requests, so it also has the effect of slowing or blocking the network. Technology is available to identify and block such attacks from vendors such as Arbor, Corero and Prolexic.

So now (hopefully) only the wanted traffic is left, but this will still expand to fill the pipe if left unchecked. One way to keep it under control is to keep as much “heavy lifting” as possible in the data centre. This means deploying applications that minimise the chat between the server and end-user access devices. To achieve this, data processing should happen at the application server, with just the results being sent to users.
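A minimal sketch of that principle, using an in-memory SQLite database to stand in for the remote data tier (the table and figures are invented): the chatty approach drags every row back to the client, while the server-side approach returns only the single result.

# Contrast a "chatty" client that pulls raw rows with one that lets the
# data tier do the heavy lifting. SQLite stands in for the remote server.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("EMEA", 120.0), ("EMEA", 80.0), ("APAC", 200.0)])

# Chatty: every matching row crosses the network, then the client sums them.
rows = conn.execute("SELECT amount FROM orders WHERE region = 'EMEA'").fetchall()
client_total = sum(amount for (amount,) in rows)

# Server-side: only the single aggregated result is sent back.
(server_total,) = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE region = 'EMEA'").fetchone()

assert client_total == server_total == 200.0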

For the data that does have to be sent, techniques such as compression, de-duplication and caching can minimise the volume further. Two types of vendor step up to the plate here: those that optimise WAN traffic, for example Silver Peak, Riverbed and Blue Coat, whose products also help with the local caching of regularly used content; and service providers that specialise in caching and content delivery, notably Akamai.
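To illustrate two of these techniques in the simplest possible terms, the sketch below de-duplicates repeated blocks by hashing fixed-size chunks and then compresses what is left. The chunk size and data are arbitrary, and real WAN optimisation products are far more sophisticated than this.

# Toy block-level de-duplication (hash each chunk, keep only unseen ones)
# followed by compression. Data and chunk size are arbitrary examples.
import hashlib, zlib

block = b"0123456789abcdef" * 4            # one 64-byte block
data = block * 50 + b"a short unique tail"
CHUNK = 64

seen, unique_chunks = set(), []
for i in range(0, len(data), CHUNK):
    chunk = data[i:i + CHUNK]
    digest = hashlib.sha256(chunk).hexdigest()
    if digest not in seen:                 # only previously unseen chunks are "sent"
        seen.add(digest)
        unique_chunks.append(chunk)

deduped = b"".join(unique_chunks)
compressed = zlib.compress(deduped)
print(len(data), "bytes raw ->", len(deduped), "after dedup ->",
      len(compressed), "after compression")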

All of the above will free up bandwidth for applications that must have the capacity they need at the time the user wants it: telephony, web and video conferencing, and so on. Other applications, such as data backup or uploading data to warehouses for number crunching, must be given the bandwidth they need, but this can be restricted to times when other applications are not in use, which in most cases will be overnight.

Of course, for global companies there is no single night time; the same is true in certain industries, such as healthcare, which may have urgent network needs at all times of day. When this is the case, both urgent and non-urgent network requirements must run side by side, and this requires certain network traffic to be prioritised to ensure quality of service (QoS), an issue that it only makes sense to address once the data flowing through the pipe is clean and wanted.
 
Bob Tarzey, Analyst and Director, Quocirca

Baby, you can drive my CAR (Continuity, availability, recovery)

31 Oct 2012

Organisations are now almost entirely dependent on their IT platforms – a mix of applications and data that need to be highly available to allow the organisation to carry on its day-to-day activities.  Historically, the focus has been on disaster recovery as a safety net for any failure in the IT platform, using backup and restore as the primary means of recovering data.  However, this used to take days, and many vendors in the space built their markets on compressing this time down to less than one working day, and then down to just a few hours. 
It has to be remembered that, particularly in areas such as e-commerce, financial trading or supply chain management, the organisation could be losing significant amounts of money for every minute the systems are unavailable. This has forced a move to investigating business continuity – but this has its own set of problems.

The idea with business continuity is that, as far as possible, when a component of the IT platform fails, the business retains some level of capability to continue operating. This involves ensuring that all applications and their data, along with the required connectivity, are available. For high levels of business continuity, not only does everything have to be architected such that every component is backed up by at least one similar component, but this redundancy also has to be managed across multiple geographic zones, so that a major issue in one geography (e.g. floods or an earthquake) can be managed by failing over to a different geography.

Full business continuity, where the organisation owns all its own infrastructure, is therefore out of reach of the majority due to its high expense.

An organisation should therefore look to create a strategy that provides an optimal approach to systems and data availability, balancing the constraints of budget and corporate risk. Through a strategy involving a mix of continuity, availability and restore approaches, along with the use of cloud-based services, a cost-effective business solution can be put in place.

Firstly, an organisation needs to decide where business continuity is mandatory. This becomes apparent by defining which business processes are so important that they cannot be allowed to fail. The applications and data that facilitate these processes will need to be fully available, and a highly virtualised internal or external cloud environment with shared resources may give much of what an organisation is looking for in this space, particularly if the business decides that it can carry the risk of an outside chance of a catastrophic problem, such as an earthquake or flood.

The next aspect to look at is the availability of the underlying data. In many cases, securing the availability of the application will be easier than doing the same for the underlying data – which may, for example, have been accidentally deleted by the user. In such a case, granular data availability – where single or multiple files can be pulled back from a secondary store by administrators or accessed directly by end users – enables rapid recovery from the accidental deletion.

It may be that the problem is caused by data corruption, leaving the application with no valid data to work against. In this case, being able to fail over elegantly from one data set to an alternative “live” data set provides a form of business continuity, with downtime measurable in seconds.

However, this may not always be possible, and the use of a full data backup and recovery strategy will be required. Data sets that have been backed up to a reasonably current time can be recovered rapidly, minimising downtime and enabling the organisation to be back up and running quickly to a known point, known as the recovery point objective (RPO). Even where the application and the data fail together, fully functional images can be recovered that include the application as well as the data.

Using backup and restore is not the same as business continuity, as some downtime will be required to regain full functionality. However, through the use of tools such as data de-duplication to minimise data volumes and the regular snapshotting of changes to data stores to minimise incremental backup sizes, combined with wide area network (WAN) acceleration, full restores can now be completed in short timescales of minutes or hours, which can be agreed with the business as the recovery time objective (RTO).
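As a back-of-the-envelope sketch of how these two objectives relate to backup frequency, data reduction and WAN bandwidth, the calculation below uses purely illustrative assumptions; none of the figures are benchmarks or vendor claims.

# Back-of-the-envelope sketch relating backup frequency and restore speed
# to RPO and RTO. All figures are illustrative assumptions, not benchmarks.
backup_interval_hours = 4      # snapshots taken every 4 hours
dataset_gb = 500               # size of the full data set
daily_change_rate = 0.05       # 5% of the data changes per day
dedup_ratio = 0.4              # reduced data is 40% of its raw size
wan_mbps = 200                 # effective throughput after WAN acceleration

# Worst-case RPO: data written just after the last snapshot is lost.
rpo_hours = backup_interval_hours

# Size of a typical incremental transfer between snapshots.
incremental_gb = dataset_gb * daily_change_rate * (backup_interval_hours / 24) * dedup_ratio

# Time to pull the latest full image plus one increment back over the WAN,
# assuming the full image benefits from the same data reduction.
restore_gb = dataset_gb * dedup_ratio + incremental_gb
rto_hours = (restore_gb * 8 * 1024) / (wan_mbps * 3600)

print(f"worst-case RPO ~ {rpo_hours} h; incremental ~ {incremental_gb:.1f} GB; "
      f"estimated RTO ~ {rto_hours:.1f} h")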

The last aspect is the need for long-term storage of information assets, whether this is for business or governance reasons. This will need an archive strategy to back up the continuity, availability and recovery plans, and could involve vaulting assets to an external provider for long-term management, or an on-going plan for rolling data from one storage medium to another in-house. Beware the belief that long-term storage of data can be done on one type of storage medium – if the information is required to be stored over just a 10-year period, think back to 2002 and try to remember what was being used at that point: it is essentially out of date already.

Like a great many other decisions for the CIO, this one is always going to be a trade-off. Different processes and activities vary in terms of operational and financial significance – so the optimal solution in terms of protection versus expenditure requires careful analysis – and the CIO will need a clear mandate from the board, because not all processes are equal.

A corporate information availability strategy needs a blended approach. The organisation will need some aspects of business continuity, but unless it has exceedingly deep pockets, it will also need to address how information availability and recovery can provide a low-risk, cost-effective strategy and an optimal platform for the business.

Quocirca’s free report on the subject can be downloaded here.

Clive Longbottom, Service Director, Business Process Analysis, Quocirca