School of Computer Science THE UNIVERSITY OF BIRMINGHAM CoSy project CogX project

Scaling Up vs Scaling Out in the Design of Intelligent Systems
(DRAFT: Liable to change)

Aaron Sloman
School of Computer Science, University of Birmingham.
(Philosopher in a Computer Science department)

Installed: 19 Aug 2012
Last updated: 19 Aug 2012
This paper is
A PDF version may be added later.

As explained below, this is part of the Meta-Morphogenesis project/conjecture:

A partial index of discussion notes is in


Introduction: A potential confusion

I have just discovered that there is a very different distinction between scaling-up and scaling-out used in connection with infrastructure options for computing services. A few randomly selected example web sites explaining and discussing the distinction and how to choose between options are:
"To scale horizontally (or scale out) means to add more nodes to a system, such as adding a new computer to a distributed software application. An example might be scaling out from one Web server system to three."

"To scale vertically (or scale up) means to add resources to a single node in a system, typically involving the addition of CPUs or memory to a single computer."
Feb 23 2011: Scale-out vs. scale-up: the basics
Posted by: Randy Kerns
"Scale-up, as the following simple diagram shows, is taking an existing storage system and adding capacity to meet increased capacity demands."
"Scale-out storage usually requires additional storage (called nodes) to add capacity and performance. Or in the case of monolithic storage systems, it scales by adding more functional elements (usually controller cards). One difference between scaling out and just putting more storage systems on the floor is that scale-out storage continues to be represented as a single system."
Scale Up/Out and impact of vRAM?!? (part 2)
21 July, 2011 by Duncan Epping - with 86 Comments

The distinction I am concerned with is totally different: it refers to different kinds of functionality, not to two ways of providing the same functionality.

Relevance to meta-morphogenesis

The meta-morphogenesis project is an attempt to survey changes in information processing in evolution, in development, in learning, in social systems and cultures, including changes that speed up or extend the mechanisms for producing future changes in information processing mechanisms - as explained in:

It seems likely that many of the examples of transitions producing meta-morphogenesis involve evolution, development or learning producing a new form of interaction between previously evolved, developed or learnt mechanisms.

Possible forms such transitions can take include the following (a tiny subset of the space of possibilities waiting to be investigated):

The above transitions can occur in individual learning, in genetically and environmentally facilitated developmental processes, in modifications to the genome, or in some cases in social collaboration and interaction, so that tasks originally performed by individuals can be performed better by pairs or groups.
It is sometimes suggested that that was what led to the development of human language, but an alternative conjecture, that language initially evolved to support internal processes and only later came to be used for communication, is offered in:

Scaling up

For many years researchers in AI have emphasised the need for system designs to "scale up", i.e. they should not only perform well on relatively simple problems but should also continue to perform well as problems become more complex.

This can be interpreted in various ways, but it often refers to the need to avoid designs with exponential complexity, where increasing the size of the problem by N (e.g. 20) multiplies the time required, or the storage space required, or both, by a factor of 2**N (e.g. 2**20, which is 1,048,576).
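The arithmetic can be made concrete with a minimal sketch (not from the original; the function name and the base of 2 are illustrative assumptions) showing how the cost of an O(2**n) algorithm responds to modest increases in problem size:

```python
def work_required(n, base=2):
    """Units of time or space consumed by an O(base**n) algorithm
    on a problem of size n (illustrative only)."""
    return base ** n

# Each step of 10 in problem size multiplies the cost by 2**10 = 1024.
for n in (10, 20, 30):
    print(f"n = {n:2d}: {work_required(n):>12,} units")

# Increasing n by 20 multiplies the cost by 2**20 = 1,048,576,
# regardless of the starting size.
ratio = work_required(30) // work_required(10)
print(f"cost(30) / cost(10) = {ratio:,}")
```

The point of the sketch is that for exponential designs the *ratio* of costs depends only on the *increase* in problem size, which is why even generous hardware improvements cannot keep up.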

The size measure may be the number of data items on which a system needs to be trained, the size of an image to be processed, the size of a sentence to be parsed, the size of a plan to be constructed, the size of a "genome" to be evolved, and many more.

Much research in AI has been concerned with attempting to defeat the "combinatorial explosions" that usually arise from exponential relations between problem size and time or space requirements. There have been huge improvements based on many different techniques, including the use of powerful heuristics (e.g. detecting and using symmetry), structure sharing between partial solutions, and the use of statistical/stochastic methods that sample solution spaces instead of ensuring exhaustive coverage. Some of these methods require the goal of optimality to be abandoned, but they often find very good, though non-optimal, solutions.
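The trade-off between exhaustive coverage and stochastic sampling can be illustrated with a small sketch (a hypothetical toy problem, not from the original): choosing a subset of weights whose sum is as close as possible to a target. Exhaustive search examines all 2**n subsets and is guaranteed optimal; random sampling works within a fixed budget and gives up that guarantee in exchange for cost that does not grow with 2**n.

```python
import itertools
import random

# Toy problem (illustrative values): find a subset of `weights`
# whose sum is as close as possible to `target`.
weights = [3, 7, 12, 19, 21, 34, 40, 55, 58, 62]
target = 100

def error(subset):
    """Distance of a candidate subset's sum from the target."""
    return abs(sum(subset) - target)

# Exhaustive search: examines all 2**len(weights) subsets,
# so its cost doubles with every extra weight.
all_subsets = (
    subset
    for r in range(len(weights) + 1)
    for subset in itertools.combinations(weights, r)
)
best_exhaustive = min(all_subsets, key=error)

# Stochastic sampling: a fixed budget of random subsets,
# independent of 2**n, but with no optimality guarantee.
random.seed(0)  # fixed seed so the sketch is repeatable
def random_subset():
    return [w for w in weights if random.random() < 0.5]

best_sampled = min((random_subset() for _ in range(200)), key=error)

print("exhaustive error:", error(best_exhaustive))
print("sampled error:   ", error(best_sampled))
```

The sampled answer can never beat the exhaustive one, but on larger instances the exhaustive generator becomes infeasible while the sampling budget stays constant, which is the bargain the text describes.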

Scaling out

In parallel with all this, it has for many years also been known that solutions that work well for a particular type of task may be hard to integrate with mechanisms that perform well on other tasks, in systems that need to be able to combine competences. I have referred to this as the need for solutions to "scale out", in contrast with the need to scale up.

Possible examples of scaling out include:

Previous discussions and papers referring to scaling-up vs scaling-out
(In the sense considered here.)

(This is a first draft web page and may be modified and extended later, especially if I get comments, criticisms or suggestions for improvement.)

Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham