There are 100 days until the General Election in the UK…

As the political parties start their campaigns in earnest, what is probably the most unpredictable election campaign in the history of the UK is just 100 days away. At the last election, five years ago, social media had only just taken off and the first iPhones and iPads were only just coming into use.

So how will technology be used in this election? Without doubt it will be a mainstay of the political parties' attempts to bombard each and every one of us with “facts” designed to steer us one way or another. Yet it seems to me that in this era of apps we are missing one that lets us filter the huge amount of data out there. There are plenty of organisations offering ways to compare providers of insurance, energy and so on, so why can't we have an independent “comparethepoliticalparty.com” or “politicians4u”?

All the data exists – such as the number of times they have attended parliament, how often they have voted, what they have voted for, what expenses they have claimed, what real-life experience other than politics they have. For parties we could have economic performance, tax rates, education performance, health service stats, crime stats and so on.

And with biometrics in passports and driving licences, smartphones with contactless payment, and debit cards, why do we still vote with bits of paper?

It seems rather strange that this area still works this way, but perhaps that is a reflection of the politicians themselves, still living in a time long past and within an environment that is rapidly being bypassed by social media. Politicians are losing their power to mass independent campaigns initiated simply and easily by like-minded individuals.

So perhaps technology is actually bringing power back to the people, and traditional politicians are under threat like never before. Let's hope that we can filter the facts, and hence the truth, from the mass of “data”.

We shall see in 100 days' time.

 

 

Getting intelligent with your business data

Gartner predicts that by 2012 more than 35 per cent of the top 5,000 global companies will fail to make insightful decisions about significant changes in their business and markets, because they lack the right information, processes and tools. In today’s competitive environment, this can mean the difference between company success and failure. 

The challenge, both for companies looking to implement business intelligence (BI) with standalone systems and for those adding it onto an existing ERP system, is how the data that is gathered can be used effectively.

There are several mistakes that businesses make when it comes to implementing a BI project. 

  • Most organisations fall into the trap of having a loose BI reporting strategy, but no idea of what processes to use and why. This can leave the BI project floundering because there are no guidelines on what data to gather and how it will be used.  
  • Organisations underestimate the time and effort required to implement BI properly, as well as the impact it will have on existing processes. Underestimating the business resources needed, how long it will take and failure to plan how the organisation can continue “business as usual” operations during the project can cause serious productivity strains.  
  • Lack of end-user training. End users directly experience the impact on existing processes and will need training on what the new processes are. Without training to demonstrate why new processes are necessary and how they work, there will be reluctance to use them, resulting in lower productivity, confusion and errors.
  • The majority of organisations fail to undertake post-project analysis to see what value they are getting from the implementation. Often after a project is completed, the organisation is gathering data but doesn’t know if it is using it correctly. As a result, there is no quantitative or qualitative proof to justify the expense of the project.  

Adopting best practice

While mistakes are easy to make, adopting a few best practices can turn BI implementations into successful, value-creating projects. One of the most important is not to treat BI as a point solution for just gathering data. Instead, it should be integrated into business processes, ideally throughout the entire organisation. In order to do this, however, support from the business is essential. Many organisations mistakenly leave the championing of BI to the IT department, yet IT should only be in charge of supporting the processes. Board sponsorship is crucial, as is having as many senior stakeholders involved as possible: the chief financial officer and chief executive officer, for example, as well as stakeholders in each business unit such as HR, procurement and IT.

Ensuring an intelligent return

After spending significant funds on a business intelligence solution, business units and IT need to work together to determine exactly what information is required and when it is needed, in order to avoid the common mistakes made with BI implementations. Organisations should plan their strategy carefully to ensure they get maximum value from BI; otherwise they will have nothing to show for the investment and no way to demonstrate its value. Done correctly, however, BI can arm organisations with the critical information they need to become more agile and flexible, allowing them to respond faster to changing markets and giving them an edge over the competition.

The User Group is currently involved in the BusinessObjects charter, aimed at increasing engagement and collaboration between SUGEN and SAP to better meet the needs of SAP BusinessObjects customers.  This stands to benefit the entire SAP user group community, as new and existing users look towards business intelligence deployments in the future.  In the coming months we hope to update members regarding how the charter is progressing, so watch this space.

Official….the biggest yet

This year's conference, which takes place on Monday and Tuesday, is the biggest yet – more delegates, more speakers and more exhibitors. You can still book online for the last few seats – but you need to be quick.

Great keynotes from Chakib Boudhary, Ray Wang, Kriss Akabusi, Tim Noble and Richard Newman, plus over 90 other speakers. A great venue, a great networking dinner – and a great opportunity to learn.

…………….and great value!

Security and Offshoring……

Following on from the Satyam affair and the risks that exist for work being done in that part of the world comes another story highlighting this risk. This time the risk is to individuals rather than to corporations. The story, on the BBC, highlights the selling of customer credit card details from Indian call centres; in this case the details related to customers buying software from Symantec.

The article includes video of the data being passed, along with comment from an Indian lawyer that until India has robust data protection legislation this is likely to continue.

The bottom line is that when outsourcing, make sure you take a holistic view of the challenges of the process you are moving, including national legislation.

The Cost of Data

About a month ago I posted some thoughts on the cost of data, mainly from an ecological view, but I made the point that going green is not just good for corporate and social responsibility – it is also plain common sense as a way to reduce the cost of storage.

Today an article talks about the rising cost of data storage in terms of rack prices. It shows the UK and Austria as having the highest prices and reports large price increases in other locations. An immediate thought would be: how can this be, given the economic climate and falling energy prices? What lies behind it is the increase in the amount of data being generated and the associated energy required to store it.

Today it is estimated that the world is generating 40 exabytes of information a year – a huge figure, and more in one year than in the previous 5,000 years. Google alone handles 2.9 billion searches a month across that data, which does beg the question – who was asked those questions before Google existed?

So the cost of data centre storage is going up because it is in short supply, and the price you pay also covers the vendors building more facilities. I commented on the shortage earlier, and the key is data management. In an SAP sense you need to think about your archiving. I know some will say it's cheaper just to buy more disk, but ultimately that leads to performance problems. The SAP archiving process doesn't just parcel up data and remove it from the “live” database; it also deletes data that is no longer valid, such as IDocs that have completed their tasks. The other issue is the cost of keeping that data live – energy, space and performance trade-offs. Some data must be kept to meet legislative requirements, and you need to consider how you will access it in 10 years or so, which means the storage media and methods need very careful consideration.
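As a rough illustration of the kind of retention logic involved, here is a minimal Python sketch – not SAP's archiving engine, and using entirely hypothetical record types and retention periods – that decides whether a record should stay live, move to cheaper archive storage, or be deleted outright:

```python
from datetime import date, timedelta
from typing import Optional

# Illustrative retention rules (hypothetical values, not SAP defaults):
# how long each record type stays in the live database and whether it
# must then be archived for a legally mandated period.
RETENTION_RULES = {
    "completed_idoc": {"live_days": 90,  "archive_years": 0},   # delete once processed and aged out
    "billing_doc":    {"live_days": 365, "archive_years": 10},  # assumed legislative retention
    "change_log":     {"live_days": 180, "archive_years": 1},
}

def disposition(record_type: str, created: date, today: Optional[date] = None) -> str:
    """Return 'keep_live', 'archive' or 'delete' for a single record."""
    today = today or date.today()
    rule = RETENTION_RULES[record_type]
    age = today - created

    if age <= timedelta(days=rule["live_days"]):
        return "keep_live"   # still needed for day-to-day work in the live database
    if age <= timedelta(days=rule["live_days"] + rule["archive_years"] * 365):
        return "archive"     # move to cheaper storage, keep accessible for its retention period
    return "delete"          # past any retention requirement: "dead" data

if __name__ == "__main__":
    print(disposition("completed_idoc", date(2008, 1, 15), today=date(2009, 6, 1)))  # -> delete
    print(disposition("billing_doc",    date(2008, 1, 15), today=date(2009, 6, 1)))  # -> archive
```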

But the fact remains that “dead” data should either be deleted or stored at the lowest possible cost – don't ignore it.

SAP v Oracle – Utilities

PC World today carries a story about another combat zone for the big two. My friend Ray Wang from Forrester comments in the article that, in his view, you will shortly be able to view your minute-by-minute utility consumption online. I am sure Ray is correct in that prediction. The size of the task shouldn't be underestimated, however, and there is a reality here of cost versus benefit. In the UK utility meters are still read manually, and estimates are sometimes used for individual bills. The reality is that, depending on the type and age of the property, the challenges differ: getting any access at all to the meters, communications issues, and the physical age of the meters themselves.

The other issue is simply the amount of data. In my experience, even in the much simpler world of linking manufacturing equipment to SAP there are challenges – and in the main they have nothing to do with SAP. The first thing to say is that this is not mainstream SAP ERP activity: there is simply too much data, and it is very granular and very “industrial”. SAP have, of course, developed an add-on solution to cope with this, but in this area the key may lie in your hardware strategy – and I'm not talking about IT hardware directly, but about your physical metering and networking strategy.

For example, within the manufacturing world the key to success would be a sourcing strategy for your PLCs, and the initial collation of the data may be better handled by their manufacturer. At this level the data is incredibly detailed and very often more important locally than centrally. The trick then becomes serving both parts of the enterprise: the local engineers who want detailed data, and the central controllers who want the bigger picture. So my advice would be to consolidate locally and select very carefully the data that needs to move away from the local source. In my experience the manufacturers of the actual data recorders very often know the pitfalls, which is why I would always encourage close involvement with them.
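To make the local-consolidation point concrete, here is a minimal Python sketch – the meter names, readings and summary fields are all hypothetical – showing detailed local readings being rolled up so that only hourly summaries leave the site for the central system:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical readings from local PLCs / meters: (meter_id, hour_of_day, value)
# tuples standing in for a much denser per-minute feed held in a local historian.
readings = [
    ("meter-01", 9, 4.2), ("meter-01", 9, 4.5), ("meter-01", 10, 5.1),
    ("meter-02", 9, 1.8), ("meter-02", 10, 2.0), ("meter-02", 10, 2.2),
]

def consolidate(raw):
    """Keep the full detail locally; return only hourly summaries for the central system."""
    buckets = defaultdict(list)
    for meter_id, hour, value in raw:
        buckets[(meter_id, hour)].append(value)
    # Only min / max / mean per meter per hour leaves the site.
    return [
        {"meter": m, "hour": h, "min": min(v), "max": max(v), "mean": round(mean(v), 2)}
        for (m, h), v in sorted(buckets.items())
    ]

if __name__ == "__main__":
    for summary in consolidate(readings):
        print(summary)  # the reduced payload that would move away from the local source
```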

So it's no surprise that SAP have done just that themselves!

We’re running out of data centres – apparently.

Despite everything happening around us, the news is that we're now at 82% of data centre capacity across Europe. That capacity is a combination of energy, cooling and space. From personal experience I know it is tight, but the fact that it is tight made me go back to basics – and actually the problem is one of process.

The amount of kit in a data centre, at its simplest, relates to processing and storage. The one thing most businesses don't do effectively is manage their data from a business perspective – here IT has to take the lead and educate the business functions about the cost of data. I once worked out the cost of data as closely as possible. It's an inexact science, but taking the cost of hardware, data centre space, administration, security and so on, I came up with around 50p per megabyte per year. That shocked people, and it was meant to!
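For illustration only, here is the shape of that back-of-the-envelope calculation in Python – the component costs and data volume below are invented assumptions, not the figures behind the original 50p estimate:

```python
# Rough, back-of-the-envelope cost model. Every figure below is an invented
# assumption for illustration, not the data behind the original 50p estimate.
annual_costs_gbp = {
    "hardware_and_replacement":    120_000,
    "data_centre_space_and_power": 200_000,
    "administration_and_backup":   150_000,
    "security_and_compliance":      30_000,
}

stored_megabytes = 1_000_000  # roughly 1 TB of managed data, also an assumption

cost_per_mb = sum(annual_costs_gbp.values()) / stored_megabytes
print(f"Approximate cost: {cost_per_mb * 100:.0f}p per MB per year")  # ~50p with these inputs
```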

People like data. They like the “power” it brings, and that often produces very personal behaviour: the data is kept personally, and the problem is that it then spreads like a virus across the enterprise. Take, for example, a marketing department that prepares a new corporate presentation, for all the right reasons. It has wonderful colours and great photographs promoting the business, and because it's slick it's a fair-sized file. The marketing department wants to share it, so it emails the file to the top 500 people in the organisation – a 500-fold increase in the data held. Now, I know there are technical things that can be done to improve that, but how many organisations are structuring their data to minimise the space taken?
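Putting a rough number on that example (the 500 recipients and the 50p-per-megabyte figure come from the text above; the 10 MB file size is an assumption for illustration):

```python
# The 500 recipients and the 50p/MB/year figure come from the text above;
# the 10 MB presentation size is an assumption for illustration.
presentation_mb = 10
recipients = 500
cost_per_mb_per_year = 0.50   # £0.50, i.e. 50p

stored_mb = presentation_mb * recipients        # 5,000 MB of near-identical copies
annual_cost = stored_mb * cost_per_mb_per_year  # roughly £2,500 a year just to keep them
print(f"{stored_mb} MB of copies, costing about £{annual_cost:,.0f} per year to store")
```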

The other problem is “just in case”. This is usually a sign that a business doesn’t have a structured data retention policy that defines what is kept and why.

Also important is how that data is retained. Should data be in “near” storage or “far” storage? Near storage usually means online spinning disks; far storage usually means offline media that may occasionally be needed and can probably be stored away from the powered data centre.

So those are some thoughts on data, but processing power is another big issue. A question I raise is: how many of you have actually looked at the percentage of time your servers' CPUs are running at full throttle? I'm guessing few, and I'm also guessing the answer is mostly a low figure. The challenge is to look at how the processing is done and reduce the number of servers in use, because I guarantee you can run with less.
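If you have never measured it, here is one simple way to get a feel for the number on a single server – a minimal Python sketch using the third-party psutil package (the sample count is arbitrary):

```python
# Requires the third-party psutil package (pip install psutil).
import psutil

SAMPLES = 10  # number of one-second samples; a longer run gives a fairer picture

readings = [psutil.cpu_percent(interval=1) for _ in range(SAMPLES)]
average = sum(readings) / len(readings)

print(f"Per-sample CPU utilisation (%): {readings}")
print(f"Average over {SAMPLES} seconds: {average:.1f}%")
```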

So there you have it – examine the data you think you should be storing, and examine how to get the processing done with less kit, and I think you'll be surprised by how much you can reduce not just the space but also the cost.

Addendum:

Check out this videocast on data duplication and how to remove it.
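As a flavour of what removing duplication involves, here is a minimal Python sketch that finds files with identical contents by hashing them – the directory path is purely illustrative:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root: str):
    """Map content hash -> files under root whose contents are byte-for-byte identical."""
    groups = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            groups[digest].append(path)
    return {h: paths for h, paths in groups.items() if len(paths) > 1}

if __name__ == "__main__":
    # "/shared/marketing" is an illustrative path, not a real location.
    for digest, paths in find_duplicates("/shared/marketing").items():
        print(f"{len(paths)} identical copies: {[str(p) for p in paths]}")
```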