Continuous Improvement: making sure requests don’t enter the void

For a long time, SAP users have complained about the lack of visibility and influence they have in the product development process.

SAP’s Customer Connection programme for Continuous Improvement, of which the UK & Ireland User Group is a part, is a big step towards dealing with this issue. The thinking is simple: each quarter, SAP and its user group community will identify four focus topics, from vertical markets to specific functions, under which improvement requests will be collected. This will give SAP more informed insight, whilst also preventing user development requests from disappearing down a black hole. A further benefit will be a reduction in the time it takes for product modifications to come to fruition: the aim is for the process to take only nine months from request collection to execution, a considerable improvement on previous timescales.

So what does this mean for User Group members? We’ll be asking you to identify potential improvement requests and nominate “subscribing customers” through our SIGs. There will be a central Idea Place portal, where all participating user groups can post requests. So please let your SIG chairs know what issues you’d like to see addressed, even if they don’t necessarily fit that quarter’s focus topics. The SIG chairs will be in touch with members directly to pool requests, so make sure to get involved.

If you need any further information on the Customer Connection programme, do get in touch with the User Group team.

Can in-memory computing answer the big questions about Big Data?

Like it or not, the amount of information an organisation will deal with only ever increases, writes Alan Bowling, chairman of the UK & Ireland SAP User Group. This has led to the concept of “Big Data”, whereby organisations will increasingly rely on large amounts of information from a variety of sources to analyse, improve and execute their operations.

There are several reasons given for this. First, simple availability: the use of more technology in business is resulting in more and more data. Second, regulation: organisations must retain more and more information to prove compliance. Finally, there is an increasing recognition that organisations must use every single resource at their disposal. As a result, data that once might have seemed irrelevant is now pored over for any perceived value.

The big question for Big Data is what to do with it. Most organisations will naturally want to carry out in-depth analysis of the data within their ERP systems, digging deep to analyse and predict the most effective way to do business and determine future strategies and tactics. However, as data volumes increase, so organisations hit a stumbling block: how do they process such a huge amount of information in a timely manner?

One option is to study only a proportion of the whole mass, yet this can easily produce inaccurate results, as organisations end up basing their decisions on an incomplete view. With enough computing power, this obstacle is removed: organisations then have the performance for high-speed analysis of entire masses of data at once. In-memory computing tools, such as SAP’s HANA in-memory appliance, are designed to provide this power so that organisations can analyse vast quantities of business data, from a variety of sources, as and when it is received and needed.
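To make that sampling risk concrete, here is a toy Python sketch using entirely invented numbers, not data from any real system: when a handful of records carry most of the value, an estimate scaled up from a small sample can land a long way from the true figure.

```python
# Toy illustration of why analysing only a sample can mislead.
# The figures are invented: most revenue sits with a few large
# accounts that a small random sample can easily miss.

import random

random.seed(1)

# 10,000 small accounts worth 100 each, plus 10 accounts worth 1,000,000.
revenues = [100] * 10_000 + [1_000_000] * 10
true_total = sum(revenues)

# Estimate the total from a 1% sample, scaled back up to the full population.
sample = random.sample(revenues, k=len(revenues) // 100)
estimated_total = sum(sample) * (len(revenues) / len(sample))

# Depending on whether any large accounts happen to be drawn, the estimate
# can be wildly above or below the true figure.
print(f"true total:      {true_total:,}")
print(f"sample estimate: {estimated_total:,.0f}")
```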

In-memory: not like a sieve

The concept behind in-memory computing is relatively simple. Traditionally, data is placed in storage and then, when needed, is accessed and acted upon in the computer’s memory. This creates a natural bottleneck that reduces speed: even with the fastest solid-state drives, there will still be a gap while data is accessed, transferred to memory, and then returned so the next batch of data can be used. As volumes of data increase, so the time needed simply for access, let alone actual analysis, increases too.

In-memory computing takes advantage of a better understanding of how data is shaped and stored, the constantly falling price of memory, and the greater affordability of fast solid-state memory to do away with the traditional concept of storage. Instead, data is held directly in the computer’s memory. As a result, when it needs to be analysed it is already available and can be accessed near-instantaneously.
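As a rough illustration of the difference, the Python sketch below, a toy comparison rather than HANA code, contrasts a query that re-reads its records from a file on every run with the same query running against data already held in memory. The absolute timings are meaningless, but the gap between the two paths is precisely the bottleneck that in-memory processing removes.

```python
# Toy contrast between a disk-backed query and an in-memory query.
# Not SAP HANA code; it only illustrates where the access gap comes from.

import csv
import os
import tempfile
import time

# A small synthetic dataset of (customer_id, revenue) rows.
rows = [(i, (i * 37) % 1000) for i in range(500_000)]

# Disk-based path: every query re-reads the file before it can aggregate.
path = os.path.join(tempfile.gettempdir(), "revenue_demo.csv")
with open(path, "w", newline="") as f:
    csv.writer(f).writerows(rows)

start = time.perf_counter()
with open(path, newline="") as f:
    disk_total = sum(int(rev) for _, rev in csv.reader(f))
disk_time = time.perf_counter() - start

# In-memory path: the data is already resident, so the query runs
# against structures in RAM with no I/O on the query path.
in_memory = [rev for _, rev in rows]
start = time.perf_counter()
mem_total = sum(in_memory)
mem_time = time.perf_counter() - start

print(f"disk scan:      {disk_time:.4f}s (total={disk_total})")
print(f"in-memory scan: {mem_time:.4f}s (total={mem_total})")

os.remove(path)  # tidy up the temporary file
```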

The most evident benefit of in-memory processing is its speed. Without the bottleneck of having to access data in storage, organisations can swiftly analyse information and use it to create the best possible strategies.

This speed is vital; rather than analysing information that is days or weeks out of date, organisations can perform complex queries in minutes, meaning their business operations can be investigated and improved based on the situation as it is rather than the situation as it was last week. At the same time, in-memory computing’s power means that organisations can investigate entire sets of data rather than representative samples, so they can be sure they are acting on all of the facts.

This power and speed provides other benefits. Rather than trying to streamline analysis speeds by presenting data in a rigid format that only responds to certain pre-ordained queries, organisations can instead save data in a more unstructured format. By relying on the power of in-memory computing to compensate for this lack of structure, organisations also have far more flexibility in how they access the information.

For example, if an organisation using in-memory tools suddenly decides to study its HR processes based on new customer feedback data, it does not need to restructure the data on file to accommodate a planned selection of new queries. It simply asks the questions as and when they appear. These benefits are why in-memory computing is already used by many organisations, for purposes ranging from maximising sales to analysing gene sequences.
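As a loose sketch of that flexibility, the Python fragment below invents a handful of records and field names purely for illustration; the point is that a previously unplanned question can be answered by filtering and aggregating data that is already in memory, rather than by first redesigning a schema around the new query.

```python
# Minimal sketch of ad-hoc querying over data held in memory.
# The record fields (team, feedback_score, escalated_to_hr) are
# hypothetical and exist only to illustrate the idea.

from collections import defaultdict

records = [
    {"team": "support", "feedback_score": 4, "escalated_to_hr": False},
    {"team": "support", "feedback_score": 2, "escalated_to_hr": True},
    {"team": "sales",   "feedback_score": 5, "escalated_to_hr": False},
    {"team": "sales",   "feedback_score": 3, "escalated_to_hr": True},
]

# A question nobody planned for: average feedback score per team for
# interactions escalated to HR. Because the full records are already in
# memory, we simply filter and aggregate on the fly.
scores_by_team = defaultdict(list)
for record in records:
    if record["escalated_to_hr"]:
        scores_by_team[record["team"]].append(record["feedback_score"])

for team, scores in scores_by_team.items():
    print(team, sum(scores) / len(scores))
```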

Taking the plunge

To an extent, the decision to adopt in-memory computing is less one of “whether” and more one of “when”. If an organisation is large enough and collects enough information, the inevitability of Big Data means that it will have to adopt in-memory computing at some point so it can continue to function.

For certain sectors where huge amounts of data are practically a requirement, such as utilities or finance, in-memory computing is already a hugely disruptive technology. Organisations in these sectors would do well to make the move to in-memory computing early, rather than being left trying to catch up with the competition. For others, the choice is less clear-cut. An organisation with relatively little data may feel the costs of an in-memory implementation far outweigh the benefits.

What is clear is that the move to in-memory computing, while it might be inevitable, will not necessarily be straightforward. Organisations should take advantage of all sources of information at their disposal, from suppliers to user groups, to help them make their decision. Whether the best decision is to implement in-memory now, in the future, in-house, via the cloud, or simply not at all, organisations need to be sure they have made a well-informed choice. This choice also needs to cover the most important factor of all – as powerful as in-memory computing is, like all technology it is worse than useless if it is not used to the correct end.

This article first appeared in Computer Weekly on 24th May: