subject: Grid computing: a real-world solution?
posted: Sat, 14 May 2005 15:37:49 +0100


[online version contains links to more info - additional article on
"service oriented architectures" at bottom - Stu]

http://www.theregister.co.uk/2005/05/13/grid_computing/

Grid computing: a real-world solution?
By Quocirca
Published Friday 13th May 2005 11:28 GMT

Analysis The problem with grid computing has traditionally been tying
it down to a real-world context. The theory is great – getting lots
of individual technical components working together as if they were
one big resource – but it’s the wackier, conversation-stimulating
applications that have received all of the attention.

Everyone donating a little bit of their PC’s power when they are not
using it to help in the search for extraterrestrials is the kind of
thing you can talk about in the pub, wine bar or over dinner. Same
with the notion that the day will come when we no longer rely on
computers all over the place in our homes and businesses but simply
consume processing power through a socket in the wall as we do with
electricity.

All interesting stuff, so it’s no wonder that people latch onto the
scientific and utility computing aspects of grid.

Start telling your non-IT friends about average utilisation rates in
your data centre or computer room, though, or the hassle you have to
go through when the sales department wants to beef up the call centre
system because it’s running like a dog, and they soon start yawning.

And this is one of the biggest challenges with grid - the perception
of what it’s for. An emphasis on the interesting and unusual creates
the impression of it being very niche or futuristic. But in reality,
grid’s greatest potential impact in the immediate term is undoubtedly
in addressing the really boring and tedious operational problems that
IT departments struggle with on a daily basis – systems maintenance,
coping with growth and shrinkage, squeezing more out of hardware so
available budget can be spent on other things, trying to keep users
happy, whatever they try to run and when, etc.

When we consider grid computing in this context, the term often used
is “Enterprise Grid”, as opposed to “Scientific Grid”, “Utility
Computing” or that very optimistic term “The Global Grid”, which is
based on the notion that one day, all computers will be joined up and
will work in harmony to solve the entire world’s computing problems
for free.

Back in the here and now, Enterprise Grid is really just the next
step in the evolution of computing architectures. If we take concepts
such as server virtualisation and clustering, and add a degree of
automation to them so physical servers can be allocated to and de-
allocated from different workloads with no or minimal manual
intervention, then we get the processing power dimension of grid. If
we add similar automation to storage and databases, we have the data
dimension. With automatic or semi-automatic provisioning and de-
provisioning of hardware and software assets based on changing
demand, we can reduce the need for the PFY (Pimply Faced Youth) to
run around rebuilding servers manually with all of the risks and
delays that go with that.
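
To make that provisioning step concrete, here is a minimal Python
sketch of the idea. Everything in it – the WorkloadPool class, the
thresholds, the notion of demand measured in "server equivalents" –
is a hypothetical illustration, not any vendor's actual product or
API:

class WorkloadPool:
    """A named workload with the servers currently allocated to it."""
    def __init__(self, name, servers):
        self.name = name
        self.servers = list(servers)

    def utilisation(self, demand):
        # Demand is expressed in "server equivalents" of work.
        return demand / max(len(self.servers), 1)

def rebalance(pools, demand, high=0.8, low=0.3):
    """Move spare servers from under-used pools to overloaded ones -
    the automated stand-in for the PFY rebuilding boxes by hand."""
    donors = [p for p in pools
              if p.utilisation(demand[p.name]) < low and len(p.servers) > 1]
    needy = [p for p in pools if p.utilisation(demand[p.name]) > high]
    for pool in needy:
        while donors and pool.utilisation(demand[pool.name]) > high:
            donor = donors[0]
            pool.servers.append(donor.servers.pop())  # re-allocate one box
            if len(donor.servers) <= 1:
                donors.pop(0)  # donor has given all it safely can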

Such capability can enable a more responsive service to users as well
as take pain, overhead and cost out of IT operations. If you can
move lots of small servers around quickly to where they are needed,
there is also less need to oversize individual servers to cope with
the peaks and troughs of normal use. Taking a really simple example,
if one application peaks whilst another troughs and vice versa, a
grid computing environment will simply switch resources between the
two as appropriate.
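
Continuing the same hypothetical sketch, the peak-and-trough case
might look like this (workload names and demand figures are invented
for illustration):

# Daytime: the call-centre system peaks while reporting sits idle.
call_centre = WorkloadPool("call_centre", ["srv1", "srv2"])
reporting = WorkloadPool("reporting", ["srv3", "srv4", "srv5"])

rebalance([call_centre, reporting],
          {"call_centre": 3.5, "reporting": 0.5})
print(len(call_centre.servers), len(reporting.servers))  # -> 4 1
# Overnight the demand reverses, and the same call simply moves
# the servers back again.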

This is the theory, but can people out there in the real world of IT
relate to it?

Well, increasingly, the answer to this is “Yes”. As part of a recent
Quocirca study commissioned by Oracle to generate statistics for its
recently publicised Grid Index, we measured an average level of
knowledge in Europe (self-declared) of 4.7 on a scale of 0 to 10,
where 0 = “Completely unfamiliar” with grid and 10 = “Deep
understanding”. In itself, this might not seem very impressive, but
when you consider that 9 months earlier, the average level of
familiarity we measured was just 2.2, it is clear that the number of
IT professionals taking notice and getting educated on grid is
growing rapidly. Furthermore, average knowledge levels for the
virtualisation technologies that underpin any grid architecture were
between 6 and 7, with similar growth.

Without getting too bogged down by statistics, though, one of the
most valuable aspects of this kind of research is the way we can pull
out interesting correlations. For example, appreciation of the
operational and service level benefits was directly proportional to
familiarity, suggesting that the relevance of grid becomes clear as
people begin to understand it – i.e. it is not all vendor hype.
Another revealing observation was that server utilisation was
significantly higher amongst early adopters of grid and
virtualisation technologies, confirming that the theoretical
efficiency gains in this area are real.

At a more strategic level, we discovered that commitment to grid
computing and service oriented architectures (SOA) go hand in hand.
This should not be a surprise, as the component-based software model
that’s usually associated with SOA leads to more flexibility and
finer control for those investing in grid. Conversely, a grid
environment helps get the best out of component-based software
architectures for those investing in SOA. Wherever you start, one
will naturally lead to the other.

But there were also a number of challenges highlighted in the
research ranging from concerns over solution maturity, through skills
availability, to the fluidity of standards, all of which represent
calls to action for IT vendors to continue investing in R&D,
education and collaboration, both amongst themselves and with
standards bodies and special interest groups.

Nevertheless, bearing in mind that grid computing is enabled by an
evolutionary set of technologies, it is not necessary for everything
to be in place at once for IT departments to start moving in that
direction. Whether you buy from Oracle, IBM, HP, Dell or any other
major infrastructure solutions vendor, it is probably time to start
asking them about virtualisation and grid options when you are next
in the market for server hardware and/or software. They all have
offerings of one kind or another in this space and looking at these
technologies in the context of a real world project or bid is
probably not a bad thing to do. You can always defer adoption if you
don’t like what they come up with, but at least you will have received
some free education on a rapidly developing part of the market.

In the meantime, more details of the research referred to in this
article, which was based on interviews with 1,350 IT professionals
worldwide, are available in a free Quocirca report entitled Grid
Computing Update. An Oracle document discussing geographic variation
in grid-related activity based on the same study is available here
(PDF)

---

http://www.theregister.co.uk/2005/05/12/_steve_mills_soa/

IBM has moment of SOA clarity

by Gavin Clarke in San Francisco
Published Thursday 12th May 2005 07:40 GMT

IBM's head of software hit the SOA trail Wednesday, bringing IBM's
version of "clarity" to the debate on SOAs and encouraging ISVs to
turn their applications into modular suites.

Steve Mills spoke to journalists ahead of a customer event in San
Francisco, California, on Wednesday, to press the line that IBM has a
lead in Service Oriented Architectures (SOAs).

Helping drive the message was news of a $94m, 10-year consulting and
technology SOA deal with Fireman's Fund Insurance, to consolidate up
to 70 per cent of the US west-coast-based company's applications and
put more business online. IBM also announced ERP specialist Lawson
Software will next year start to integrate its back office suite with
IBM's WebSphere middleware, DB2, Rational and Tivoli software.

Mills said he was using his San Francisco trip to explore SOAs,
lay out a roadmap for IBM's technology, take feedback and help some of
IBM's largest and most sophisticated customers understand what IBM is
delivering. "We often get some deviated definitional ideas floating
around," he said of SOA, before giving IBM's definition.

A SOA is, according to Mills, an application infrastructure that
integrates business processes, re-uses existing assets and allows
applications to act as a service, serving the needs of the business.
To reach the SOA nirvana, customers must first understand their
processes, then model those processes into code, and then use
orchestration, workflows and choreography for data and transactions
to flow.
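
As a rough illustration of that sequence - a toy sketch only, with
every function and field invented rather than taken from anything IBM
ships - a business process "modelled into code" and run through a
simple orchestration layer might look like this in Python:

# Three self-contained services operating on a shared claim record.
def check_policy(claim):
    claim["policy_valid"] = claim.get("policy_id") is not None
    return claim

def assess_damage(claim):
    claim["approved"] = claim["policy_valid"] and claim["amount"] < 10_000
    return claim

def issue_payment(claim):
    claim["status"] = "paid" if claim["approved"] else "referred"
    return claim

# The orchestration layer: the process is just data, so services can
# be combined and recombined without rewriting any of them.
PROCESS = [check_policy, assess_damage, issue_payment]

def run_process(claim, steps=PROCESS):
    for step in steps:
        claim = step(claim)
    return claim

print(run_process({"policy_id": "FF-1234", "amount": 2_500})["status"])
# -> paid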

SOAs are not about one single piece of technology. "In an industry
hunting for the next widget this evolution is about the application
of technology to the business problem in the broadest context, it's
no longer about the individual gadget," Mills said.

As such, Mills said ISVs should stop turning out blocks of product
and make applications modular, able to "combine and recombine... in
new composite ways that might not be possible following monolithic
design structures."

It's important for the application vendors to make the right boundary
choices; the customers don't want an "endless bucket of bolts," Mills
added.

Surprisingly, for the owner of a large services operation, people do
not figure in IBM's SOA nirvana. When asked about revenue from its
SOA business, Mills said IBM is using WebSphere, Tivoli and Rational
to tackle the $600bn labor cost overhead associated with in-house
software customization. "Customers are looking for technologies that
automate and displace that labor," Mills said.

An uncomfortable looking Mills did indicate, though, IBM expects to
benefit in its services and products businesses as customers move to
SOA. "Proportionately, services is a larger part of the industry and
IBM than is software... most customers' money goes to purchase labor
today. That's the reality and a statement of the challenge," Mills
said.

Mills' appearance in San Francisco comes amid increased concern that
vendors like IBM are hindering the cause of SOAs by using different -
even conflicting - definitions.

Industry standards group the Organization for the Advancement of
Structured Information Standards (OASIS) last week announced a
committee to develop a reference model for providing a basic
definition of SOA.

IBM was missing from the committee's original line-up - like the
majority of enterprise software companies who claim they provide SOA
products. IBM has, though, applied for membership since then.

---
* Origin: [adminz] tech, security, support (192:168/0.2)
