Recursive Machines And Computing Technology
V. M. Glushkov, M. B. Ignatyev, V. A. Myasnikov, V. A. Torgashev.
The paper analyzes disadvantages of traditional computers. It shows that partial revision of von Neumann principles fails to provide a leap in the development of computing technology. New principles of program and structural organization of digital computers are offered. Their name, "recursive computers" (RC), stems from the recursive method of defining their internal language and structure. The main features of their internal language and their tentative architecture are considered, and their capabilities are evaluated.
1. Introduction
More than a quarter of a century has passed since digital computers first emerged. During this period of time, three generations of digital computers have evolved, and their speed, reliability and memory size have increased, while their cost, dimensions and power consumption have decreased.
Yet the basic principles of digital computer structure and program organization, proposed by von Neumann as far back as 1946, have remained unchanged. These principles include:
- low-level machine language (commands are elemental operations performed on elemental operands);
- sequential centralized control of the computational process (control is exercised by a single sequence of commands, each providing execution of one operation and passing control to the next command);
- linear organization of storage, consisting of identical, serially numbered cells.
These principles provide simplicity of digital computer structure and require a minimum number of logic (active) elements. Such computers (Princeton-type, or von Neumann, computers) are known as classical or traditional. As long as logic elements were much more expensive and less reliable than memory elements, problems solved by computer remained simple enough, and external devices were few, the above principles were certainly progressive. But now they hamper the development of computing technology.
2. Disadvantages of traditional digital computers
The simplicity of traditional digital computer structure results in unjustified complication of software systems, which now seem to be reaching the upper limit of manageable complexity and still do not satisfy users in many respects. Consider how each of the above principles contributes substantially to the complexity of software.
The great gap between the machine language and the external one, for example, is responsible for the emergence of elaborate translators constituting a major part of any software system. Difficulties related to the development of translators, and the large memory sizes required to contain them, make software designers prefer less effective languages (such as Fortran and Algol-60) because of their greater simplicity from the viewpoint of translator construction. As for users of medium and, particularly, smaller computers, they have to manage with curtailed forms of external languages. Especially great difficulties arise in constructing translators for deeply recursive languages, which are the most efficient for programming complex tasks.
It is obvious that linear organization of storage is not commensurate with the structure of data and programs for the majority of real tasks. As a result, one has to introduce into the software rather elaborate systems of dynamic allocation of memory which, as a rule, require either sophisticated processor hardware and considerable internal memory sizes or great consumption of machine time.
The heart of any software system is the supervisor, which integrates all tasks being solved into a single program and controls the computational process. The great complexity of the supervisor is attributed primarily to the principle of centralized, sequential control employed in traditional computers, since this principle forces the integration of all programs inside the computer into a unified sequence of commands. A supervisor designer must provide for all situations which may arise in the course of computing.
Another substantial disadvantage of traditional digital computers is that their throughput depends directly on the speed of their components. When developing super-high-speed components for supercomputers capable of performing tens of millions of operations per second, we approach a fundamental limitation, namely the finite velocity of electromagnetic wave propagation. It appears, therefore, that computers of the Princeton type have practically exhausted their potential for raising throughput. As to increasing throughput by parallelizing computational processes, it is almost impossible to do so on a large scale (except for some particular tasks) as long as the first two von Neumann principles remain in force. This is because the complexity and time consumption of the supervisor and the system of dynamic memory allocation grow in proportion to the number of computational processes performed at a time.
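To make the physical limit concrete, here is a rough back-of-the-envelope figure (ours, not from the paper), in Python:

```python
# How far an electromagnetic signal can travel during one operation at the
# speeds discussed above (illustrative numbers, not the authors').
C = 3.0e8  # speed of light in vacuum, m/s

for ops_per_second in (1e7, 1e8):  # "tens of millions" of operations per second
    reach_m = C / ops_per_second   # distance covered in one operation time
    print(f"{ops_per_second:.0e} ops/s -> at most {reach_m:.0f} m per operation")
# At 1e8 ops/s a signal covers only ~3 m per operation, so the physical size of
# a strictly sequential machine itself begins to bound its speed.
```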
The present rapid advance of microelectronics and the emergence of large scale integrated circuits result in the cost and reliability of logic elements being near those of memory elements; hence, there are grounds to revise the von Neumann principles, as the minimum number of logic elements no longer leads to the minimum cost of the computer (this is especially true if the software cost is taken into account).
3. Revision of von Neumann principles
Most papers on the organization of non-traditional computers revise only one of von Neumann's principles (namely, the first one). For instance, [1] gives theoretical reasons for the necessity of the transition to a high-level internal language and to advanced systems of interpretation. The efficiency of such a transition is proved by the experience gained in designing both small machines (МИР [2]) and large machines (SYMBOL [3]). It should be noted, however, that raising the internal language level more often than not amounts to just implementing in hardware some of the modern medium-level algorithmic languages (Algol-60, Fortran). As for smaller machines, one has to develop still simpler languages, usually oriented to calculation tasks. The purely sequential nature of the above languages (and the same is true, by the way, of the overwhelming majority of the other modern algorithmic languages) limits the possibilities of accelerating the interpretation process by parallelizing it on a wide scale. Therefore, for the sake of preserving computer throughput, one has to limit the depth of interpretation, which, in turn, does not allow one to substantially simplify the internal software system.
The principle of sequential control of the computational process has remained inviolable over the entire period of computing technology development. Even when developing highly parallel computer systems, designers parallelize computation only through local changes of the internal language level. For instance, the ILLIAC-IV computer employs numerical matrices as operands. In the STARAN-IV computer, for any two-place operation, one operand is an elemental number (or word), while the other corresponds to a multitude of numbers (words) possessing some common tag. In the STAR-100 computer, sequences of elemental commands are integrated into operators applied to one-dimensional numeric or symbolic arrays, with the commands carried out in series-connected processors. In all these cases, a partial raising of the machine language level does not facilitate, but even complicates, the problem of software system organization.
Efforts taken over the last decade to develop computers with decentralized parallel control have not resulted in real computers, primarily because only the computer structure was subject to change, while principles of program organization remained traditional. As a result, programming became sharply more complicated for simple tasks, not to speak of powerful software systems.
So, von Neumann machines apparently fail to meet the demands of modern computing technology. However, no convincing alternative to these machines has been proposed as yet. The long life of Princeton type computers is attributed to a profound matching of their structural and program organization and to the mutual harmony of the von Neumann principles. Any partial revision of these principles inevitably involves contradictions between machine structure and computational process organization. Therefore, this way would hardly lead to creation of a viable computer model capable of coping with all tasks modern computing technology may face. Only complete replacement of all the von Neumann principles by a new system of principles, affording as full matching of computer structural and program organization as in traditional machines, can bring success.
Following is a description of these new principles and the appropriate type of computers. Their name, recursive computers (RC), reflects both the structure of their internal relations and the structure of the machine language.
4. Principles of recursive computer organization
The first principle consists in an unlimited number of machine language levels; that is, the language structure places no fundamental limit on the number of language levels or on the complexity of operators (commands) and operands. For instance, operands may be represented not only by numbers or sets of characters, as in traditional machine languages, but also by vectors, numerical or symbolic matrices, multidimensional arrays, etc. The potential infinity of internal language levels defines its recursive nature unambiguously. Such a language allows one to define all the elements of the lowest level (their number is always finite) and to set recursive rules of transfer from elements of any level (except the lowest) to elements of the preceding level. A specific form of the RC internal language will be discussed later.
The recursive nature of the internal language provides compact representation of both application and service programs, essentially simplifies the structure of translators from any external languages, affords effective programming directly in a machine language, and brings about conditions for programming in languages close to natural ones.
Such a language, however, is totally incompatible with sequential control of the computational process. In fact, the higher the level of some program element, the more time is spent on its interpretation, since computation is carried out at the lowest level. So, as the number of internal language levels grows, the fraction of total machine operation consumed by control and interpretation processes increases, and effective throughput drops accordingly. For faster execution of the whole program, it is necessary to parallelize control processes extensively, making transfers to succeeding program elements (commands) without waiting for the end of the previous ones. However, if these elements are complex enough, it is next to impossible to define their sequence of execution in advance. Therefore, the principle of centralized sequential control should be replaced by the principle of decentralized recursive-parallel control of the computational process.
The second principle of RC organization is that all program elements whose operands are available are to be executed. Readiness of some operator (program element) for execution is checked only if all higher-level program elements comprising the given operator are in the state of execution (interpretation). Such a method of control provides efficient dynamic parallelization of computational processes and permits the removal of interrupt-processing programs from the software system, since interruptions make no sense in this case; in general, many supervisor functions become unnecessary. Besides, the very process of programming in a language (either internal or external) in which operand availability defines the order of program execution becomes substantially simpler and less sensitive to programmer error.
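To illustrate this firing rule, here is a minimal sketch (ours; all names are hypothetical) of data-driven execution: operators fire as soon as all their operands are ready, with no central sequence of commands.

```python
# Minimal sketch of recursive-parallel (data-driven) control: an operator
# fires as soon as all of its operands are ready; there is no central
# sequence of commands. All names here are illustrative.
from collections import defaultdict

class Operator:
    def __init__(self, name, inputs, output, fn):
        self.name, self.inputs, self.output, self.fn = name, inputs, output, fn

def run(operators, initial_values):
    values = dict(initial_values)        # operand name -> computed value
    waiting = defaultdict(list)          # missing operand -> operators blocked on it
    ready = []
    for op in operators:
        missing = {i for i in op.inputs if i not in values}
        if missing:
            for i in missing:
                waiting[i].append(op)
        else:
            ready.append(op)
    while ready:                         # the execution order is not fixed:
        op = ready.pop()                 # any ready operator may fire
        values[op.output] = op.fn(*(values[i] for i in op.inputs))
        for blocked in waiting.pop(op.output, []):
            if all(i in values for i in blocked.inputs):
                ready.append(blocked)
    return values

# (a + b) * (c + d): the two additions are independent and may fire in any order.
ops = [
    Operator("add1", ["a", "b"], "s1", lambda x, y: x + y),
    Operator("add2", ["c", "d"], "s2", lambda x, y: x + y),
    Operator("mul",  ["s1", "s2"], "p", lambda x, y: x * y),
]
print(run(ops, {"a": 1, "b": 2, "c": 3, "d": 4})["p"])  # 21
```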
The principle of recursive-parallel control of the computational process is incompatible with the linear organization of storage, primarily because of the need for simultaneous and independent access to different parts of the storage. Furthermore, the problem of dynamic storage allocation becomes essentially unsolvable in this case. In order to fit the RC memory structure to the above principle of control, it is necessary to provide the possibility of integrating, under program control, the memory elements into tree-like or net-like multilevel configurations corresponding to the structure of data and programs represented in the RC internal language. Moreover, all RC internal memory can be divided by program into independent parts of arbitrary size, each corresponding to a certain program element. The number of these parts is dictated, on the one hand, by the number of processors within a RC and, on the other hand, by the number of program elements ready for execution.
The third principle of RC organization is that the memory structure should be reprogrammable so as to conform to the structure of data and programs represented in the internal language. Such a storage organization removes most problems of dynamic storage allocation.
Computers operating on the above principles could have far more varied forms of structural organization than traditional computers, which usually differ from each other only slightly. Nevertheless, all the machines in question have a common feature: the opportunity for unlimited parallelization of computational processes (including control processes). Therefore, these machines could comprise any number, however large, of processors and storage units. As to infinite, non-uniform structures, they are well described by recursive definitions. The initial elements of the structure, namely zero-rank elements, are described in the usual way. These elements are used to construct first-rank machines, which serve as elements for constructing second-rank machines, which in their turn are elements of third-rank machines, and so on. If the process of transfer from machines of rank n to machines of rank n+1 is defined unambiguously for any n ≥ 0, then the structure of such machines is recursive.
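As a small illustration (ours) of such a recursive structure definition, the following sketch builds a machine of rank n from machines of rank n-1 by one fixed rule:

```python
# Sketch of a recursive structure definition: rank-0 elements are ordinary
# modules; a rank-(n) machine is built from rank-(n-1) machines by one fixed
# rule. The branching factor is an arbitrary illustrative choice.
def build(rank, elements_per_machine=4):
    if rank == 0:
        return "element"                  # processor, memory or switch module
    return [build(rank - 1, elements_per_machine)
            for _ in range(elements_per_machine)]

def count_elements(machine):
    if machine == "element":
        return 1
    return sum(count_elements(m) for m in machine)

print(count_elements(build(3)))  # 4**3 = 64 zero-rank elements in a rank-3 machine
```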
The fourth principle of RC organization is that, basically, there are no limits to the number of machine elements. This offers the opportunity to design a large RC family, ranging from small machines to supersystems and networks of computers, with common construction, technology and programs. What is more, the great number of functionally equivalent elements available in a machine allows one to solve the availability problem radically, since failures of individual RC parts would merely degrade its parameters without disabling the machine.
A recursive structure rigidly defines the connections of elements in a machine. At the same time, fixed links do not allow one, in the general case, to fit the structure of programs and data to the machine structure. This inevitably results in more complicated software. To obtain the needed match, one has to incorporate switching elements into the RC that provide flexible, reprogrammable connections between the various RC elements, much as is done in modern automatic telephone exchanges. Thanks to this, the RC structure can be adapted to the structure of the tasks the machine solves.
The fifth principle of RC organization consists of a flexible, reprogrammable structure. Any group of RC elements, possibly remote from each other, with at least one processor among them, can form a relatively independent computer while executing a program.
Spatial sharing of various programs in a RC removes unexpected interactions between programs. Replacement of time-sharing by space-sharing would make the user the full master of the RC part allotted to him. Not only could he use any RC commands, but he could also introduce commands of his own, even when these coincide in identification and code with RC routines but have different meanings. This feature would improve RC versatility (compared to traditional computers), and serviceability would increase accordingly.
One can naturally judge the effectiveness of the above principles only after they are implemented in actual computers. To give an idea of recursive machine capabilities, let us consider one possible RC version. The scope of this short paper makes it impossible not only to describe a RC in detail, but even to present all its essential features. Therefore, the remainder of the paper is, unfortunately, of a rather fragmentary nature.
5. Basic features of language of recursive machines (LAREC)
As mentioned before, the RC machine language is multilevel. There are five types of LAREC elements:
- variables,
- lists,
- microprograms,
- operators,
- arrays (LAREC elements of higher levels).
From these, one can establish files, catalogs and blocks.
Any LAREC element consists of a local number, a description and a body. The local number of an element is, to a certain degree, equivalent to its address within the higher-level element which directly contains it. The multilevel language structure makes it possible to restrict local numbers to one byte (a half-byte for operators and some forms of variables). This means that any LAREC element can directly comprise up to 256 elements of lower levels.
The description of any LAREC element defines its type and form and contains, depending on the description's form, a number of parameters defining the internal structure of the element body.
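The shape shared by all LAREC elements can be pictured with a small model (ours; the class and field names are hypothetical):

```python
# Illustrative model of the common shape of a LAREC element: a one-byte local
# number, a description (type, form, parameters) and a body.
from dataclasses import dataclass, field

@dataclass
class Description:
    kind: str                 # "variable", "list", "microprogram", "operator", "array"
    form: int                 # variant of the kind, e.g. a number format
    params: dict = field(default_factory=dict)  # structure of the element body

@dataclass
class Element:
    local_number: int         # 0..255: address within the directly containing element
    description: Description
    body: list                # data, microcommands, operand numbers, or sub-elements

    def __post_init__(self):
        assert 0 <= self.local_number <= 255, "local numbers fit in one byte"
```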
A variable is a LAREC element whose body consists of an ordered collection of data which can be treated in the program as an entity (an operand for some operator). A RC uses three main varieties of variables: characters, words and lines. Besides, any user of the RC can define up to 256 additional varieties of variables (vectors, matrices, polynomials, etc.). The form of a variable also characterizes the elements of its body: numbers may be complex or real, decimal or binary, integers or fractions, fixed or floating point, with fixed or variable format. Apart from numbers, elements of a variable's body may be represented by words consisting of bit or byte characters, and by names. One byte is allotted to define any of the main parameters of a variable (formats and word orders, the number of words within a line). However, the parameter which characterizes the number of words in a line can be extended to two bytes; thus, a line may comprise up to 65 536 words. Formats and orders (for floating-point numbers) are measured in half-bytes; thus, the maximum size of a word within a variable is 128 bytes.
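A quick worked check of the size limits just quoted (assuming 8-bit bytes; the arithmetic is ours):

```python
# One-byte parameters give 256 values; the two-byte extension gives 65 536.
print(2 ** 8)        # 256   -> words per line with a one-byte count
print(2 ** 16)       # 65536 -> words per line with the two-byte extension
# Word sizes are measured in half-bytes, with a one-byte size parameter:
print(2 ** 8 // 2)   # 128   -> maximum word size in bytes (256 half-bytes)
```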
A list is a LAREC element whose body, depending on the list's form, consists of a set of pairs: NAME-NUMBER, NAME-NAME, NUMBER-NUMBER or NUMBER-NAME. Lists allow the numbers of LAREC elements to be put in correspondence with their names (external or internal identifiers). Besides, names or numbers of LAREC elements may be changed by means of lists. For instance, when a variable is transferred from one LAREC element to another, the variable's local number changes. Instead of local numbers of elements, lists use global numbers, each representing the sequence of local numbers of those LAREC elements which comprise the element sought for. Note that the first number in this sequence is the local number of the LAREC element directly included in the very element which comprises the list itself, while the last number is the local number of the element sought for. Formats of names and numbers may be defined both for the entire list and individually for each list element. One byte is allotted for the definition of formats or of the number of list elements, with the possibility of expanding to two bytes.
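A global number is thus simply a path of local numbers. A minimal sketch (ours; the nested-dict representation is hypothetical) of resolving one:

```python
# Resolving a "global number": a sequence of local numbers leading from the
# element containing the list down to the element sought for.
def resolve(element, global_number):
    for local in global_number:          # each step picks one directly
        element = element["sub"][local]  # contained element by local number
    return element

# A variable living two levels down: local number 3 inside element 5.
root = {"sub": {5: {"sub": {3: {"value": 42}}}}}
print(resolve(root, [5, 3]))  # {'value': 42}
```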
A microprogram is a LAREC element whose body is an ordered sequence of standard-format microcommands. The description of a microprogram includes only the number of microcommands constituting it. The purpose of microprograms will be discussed after the other machine language elements are defined.

An operator is a LAREC element whose body contains one or two (depending on the operator's form) local numbers of those LAREC elements which lie directly in the same element as the operator. Operators are the simplest elements of a program represented in the RC internal language. They correspond, to a certain degree, to traditional computer commands, since the local numbers included in the operator's body are addresses of operands, while the operator's form defines the type of operation to be performed on the operands. However, in traditional machines each command is associated with only one microprogram, while in a RC each operator is fitted with a variety of microprograms. A concrete microprogram is selected from this variety in accordance with the operand forms. For example, one and the same multiplication operator may be associated with multiplication operations such as number by number (binary or decimal, integers or fractions, floating or fixed point), number by line, line by line (multiplication of the respective elements of the lines, the numbers of elements in the operands being equal) and file by file of different dimensions (resulting in a new array whose elements are products of corresponding elements of the initial files). The same operator is used both to perform the "AND" operation on binary digits and to merge two symbolic words into one word with a multiplication sign placed between them. If one or both operands contain names (not included in lists), then these names must first be used to find the corresponding variables (their numbers and values, not their names); only then can the concrete operation be selected from the operator's variety of microprograms.
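The selection of a concrete microprogram by operand forms can be sketched as follows (ours; a real RC would select microcommand sequences, not Python lambdas):

```python
# One operator backed by many microprograms: the concrete routine is chosen
# from the operands' forms, as with the multiplication operator above.
MICROPROGRAMS = {
    ("number", "number"): lambda a, b: a * b,
    ("number", "line"):   lambda a, b: [a * x for x in b],
    ("line",   "line"):   lambda a, b: [x * y for x, y in zip(a, b, strict=True)],
}

def form(operand):
    return "line" if isinstance(operand, list) else "number"

def multiply(a, b):
    return MICROPROGRAMS[(form(a), form(b))](a, b)

print(multiply(3, 4))            # 12
print(multiply(2, [1, 2, 3]))    # [2, 4, 6]
print(multiply([1, 2], [3, 4]))  # [3, 8] (element-wise, equal lengths required)
```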
An array is any LAREC element whose body consists of LAREC elements of lower levels. An array is called one-dimensional if there are no arrays among its elements. Otherwise, it is called multidimensional.
A file is an array whose elements are only variables and/or files.
A catalog is an array whose elements are only lists and/or catalogs.
A block is an array whose elements are blocks and/or operators, or whose body is replaced by a block name. Each block (like each operator) has a 4-bit number. However, different blocks and operators may be included in a block array under the same number only if their simultaneous operation is impossible owing to the structure of the block's internal links. Thus, the number of simultaneously operating elements of a block array cannot exceed 16. The block body normally includes: a list of input addresses, a list of output addresses (both of the form NUMBER-NUMBER), a name list or catalog (of the form NUMBER-NAME), as well as a file of initial conditions and constants, a file of internal variables and an array of microprograms. The last three arrays, together with the block array, can be replaced by a block name. Such a block corresponds to a procedure, and in order to execute it, one has to find the respective arrays by the name.
Any block has input and output ports (up to 16 of each). The list of input addresses correlates each input port of an internal block with an output port of an adjacent block. The list of output addresses correlates output ports of internal blocks with the output ports of the block itself. The address lists mainly define the program structure.
The name list puts names in correspondence with some output ports of internal blocks (including the block with number zero). Name lists provide agreement between the external names of program variables and their machine addresses. Besides, the programmer (or translator) can gain access, by means of the name list, to those program elements whose addresses are unknown.
The internal variable array contains the output and intermediate results of the block's work. Before the block starts operating, this array is empty.
The microprogram array contains the microprograms realizing the internal blocks and operators included in the given block. Before the start of block operation this array is empty; it is gradually filled in the course of the block's work. However, if the programmer wants to employ his own method of realization of standard operators (blocks), or to give a different meaning to these operators (blocks), he can introduce his own microprograms into his program, and the operators (blocks) employed by him will be interpreted in accordance with them.
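Gathering the parts just listed, the body of a block might be modeled like this (our sketch; field names are hypothetical):

```python
# Illustrative shape of a block body: address lists wiring ports to internal
# blocks, plus the three arrays that a procedure-like block replaces by a name.
from dataclasses import dataclass, field

@dataclass
class Block:
    number: int                                            # 4-bit number, 0..15
    input_addresses: list = field(default_factory=list)    # (inner input port, adjacent output)
    output_addresses: list = field(default_factory=list)   # (inner output, own output port)
    name_list: dict = field(default_factory=dict)          # external name -> output port
    constants: list = field(default_factory=list)          # initial conditions and constants
    variables: list = field(default_factory=list)          # filled while the block runs
    microprograms: list = field(default_factory=list)      # filled while the block runs
    elements: list = field(default_factory=list)           # inner blocks and operators
    name: str | None = None    # set instead of the arrays for a procedure-like block
```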
There are several forms of blocks, differing in the way of execution: free block, block-function, block-compiler, sequential block.
A free block permits an arbitrary order of execution of constituent blocks and operators, depending on the readiness of operands, availability of ready microprograms for operators and availability of free processors and memory modules.
A block-function begins execution only when microprograms are found for the operators included either directly in the block or in internal blocks. Each operator is correlated with a particular processor, and all these processors are interconnected according to the block structure. This block implements parallel computation the way it is done in analog computers.
A block-compiler also starts computation only when microprograms have been found for all its constituent operators. These microprograms, however, are compiled into a unified microprogram, and only one executive processor is allotted for implementation of the block, in which all the operators are executed sequentially. A block-compiler corresponds to the traditional method of executing programs in computers.
In a sequential block, only one element (block or operator) of the internal block array can be executed at any given moment, after which the next element starts execution. Operators equivalent to transfer commands, included among the block's elements, permit a change in the natural order of execution of the block's elements.
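As a rough sketch (ours) of the difference between the last two forms: a sequential block is interpreted element by element, while a block-compiler stitches the found microprograms into one routine once and then reuses it:

```python
# Sequential block: elements are interpreted one at a time, in natural order.
def run_sequential(elements, state):
    for element in elements:
        element(state)
    return state

# Block-compiler: the elements are compiled once into a single routine that a
# single executive processor can then run repeatedly without reinterpretation.
def compile_block(elements):
    def compiled(state):
        for element in elements:
            element(state)
        return state
    return compiled

inc = lambda s: s.update(x=s["x"] + 1)
dbl = lambda s: s.update(x=s["x"] * 2)
print(run_sequential([inc, dbl], {"x": 3}))   # {'x': 8}
routine = compile_block([inc, dbl])           # compiled once...
print(routine({"x": 3}), routine({"x": 10}))  # ...run many times: {'x': 8} {'x': 22}
```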
Among RC operators are those that replace any block element (except internal block arrays) with their operands. Thus, there is an opportunity to change the external relations of a block, variable names, initial conditions and the running values of variables during the execution of the program.
Among microprograms one can distinguish executive, manager (administrator), supervisory, input-output and check microprograms. Executive microprograms correspond to concrete implementations of operators. Administrators provide block operation, each form of block being associated with a corresponding administrator. Supervisors allocate RC resources, i.e., memory modules, processors, linking elements and external devices, among program blocks. Input-output microprograms control the selector and multiplexor channels. Finally, check microprograms monitor correct execution of the program, localize and disengage faulty RC elements, and recover distorted parts of the program.
6. Architecture of recursive machines
Recursive machines are, by their essence, multiprocessors. However, if we take modern computer processors and memory units (with capacities of several K bytes) as the basic elements for constructing a RC, we find that a RC even of the second rank, if not the first, reaches the cost level of supersystems while still lagging far behind conventional computers in throughput. We are interested in using the recursive philosophy to construct not only supersystems but also, and to a greater degree, medium- and small-size computers, which form the major part of the computer stock. Therefore, one should choose the simplest and cheapest devices as the basic elements of a RC. It is reasonable also to use a sequential method of data processing in the processors, especially since in this case data formats can be varied over a wide range. Besides, the fifth principle of RC organization unambiguously dictates sequential transmission of data between RC elements; otherwise the element connection system becomes practically infeasible. It should be noted that the very structure of the RC internal language, though it looks complicated, allows the RC processors to be simplified.
Analysis shows that a processor providing execution of all RC machine language elements can be implemented in the form of LSI circuits of MOS transistors with an integration level of a few thousand components. This processor (microprocessor, to be more exact) is about as simple as a primitive calculator. At present, LSI circuits for calculators cost less than $10, and their cost decreases with every passing year.
Unlike the microprocessors, which are much simpler than their traditional prototypes, RC internal storage units become substantially more complicated from the functional viewpoint. In particular, it is quite necessary to use an associative mode of data retrieval, without which it is practically impossible to implement the recursive-parallel method of control. Besides, it is desirable to be able to organize buffers and stacks in storage without direct involvement of the microprocessors. The second cause of RC storage complication lies in the fact that very small memory units (modules) should be used as basic elements in order to be in line with the third principle of RC organization; but as this takes place, the equipment needed to control storage naturally increases. On the other hand, the absence of sophisticated address decoders offsets, to some extent, the increase of equipment in RC storage. Furthermore, the demands on storage element speed become significantly lower, owing to the sequential input and output of data and the absence of buses which formerly passed through a great number of storage elements. Moreover, the functional complexity of RC storage makes it natural to charge it with quite a number of functions which are performed by processors in traditional computers.
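The associative retrieval mode mentioned above can be pictured as every memory module matching a tag in parallel, with no linear addresses involved (our sketch):

```python
# Each memory module holds tagged data and matches the search tag
# independently; in hardware all modules would respond in parallel.
modules = [
    {"tag": "price", "value": 10},
    {"tag": "qty",   "value": 3},
    {"tag": "price", "value": 12},
]

def associative_search(modules, tag):
    return [m["value"] for m in modules if m["tag"] == tag]

print(associative_search(modules, "price"))  # [10, 12]
```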
The hardware expense of RC memory built from 32-byte modules is about 1.5 to 1.7 times the per-bit cost of traditional memory implemented in the form of LSI circuits. However, if one takes into account the more effective use of RC memory as compared with traditional machines, owing to the fitting of data structure to storage structure, the compact writing of application and service programs and the wide use of variable formats, it turns out that the hardware expense for the same tasks, per bit of stored data, is lower in a RC than in traditional machines.
Unlike microprocessors and memory modules, RC connecting elements have no counterparts in traditional machines. These elements (modules), linked into a switching field, provide automatic path-finding and connection between different RC parts. Besides direct connections, the switching field makes it possible to form branches from main paths, thus constructing tree-like and net-like structures from microprocessors and storage modules. The microprocessor at the base of the main trunk may control any tree. This main microprocessor may have access either to all the tree elements simultaneously (for example, during associative retrieval) or to a single branch. Each branch has a number equal to the local number of the respective LAREC element. By using a sequence of local numbers, the microprocessor can gain access to any specific tree element. Further, the main microprocessor may allow another microprocessor to be switched to any tree branch. In this case, the allotted branch is at the full disposal of the other microprocessor until it is needed by the main one again. While the second microprocessor is busy with the branch, it can, in its turn, allow still another microprocessor to be switched to a part of the branch. Thus, a great number of microprocessors can be switched to one tree at a time. While a program is being executed, the structure of the RC internal relations (its internal architecture) continuously changes. One branch begins to grow and ramify by adding the nearest free elements. Other branches, on the contrary, die off, since released elements (memory modules or microprocessors) are automatically excluded from the tree. Still other branches pass from one tree to another. As a whole, the internal architecture reflects the structure of the tasks being solved, taking into account the recursive structure of the internal language.
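Branch lending between microprocessors, as described above, might look like this in miniature (our sketch; class and attribute names are hypothetical):

```python
# Sketch of branch lending: the main microprocessor reaches elements by a
# sequence of local numbers and may hand a whole branch to another
# microprocessor until it needs the branch again.
class Branch:
    def __init__(self, local_number, children=()):
        self.local_number = local_number
        self.children = {c.local_number: c for c in children}
        self.owner = None                      # microprocessor holding this branch

    def find(self, path):
        node = self                            # access by a sequence of local numbers
        for local in path:
            node = node.children[local]
        return node

tree = Branch(0, [Branch(1, [Branch(4)]), Branch(2)])
tree.owner = "main-mp"
lent = tree.find([1])
lent.owner = "helper-mp"    # the branch is now at the helper's full disposal;
                            # the helper may in turn lend [1, 4] to a third one
lent.owner = "main-mp"      # the main microprocessor reclaims the branch
```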
RC external architecture is fixed during manufacture of the machine and remains invariable except for the "growth" of the RC due to adding new elements and external devices.
Assume that switching modules (SMs) have six groups of external terminals by means of which these modules are interconnected. Let us construct from the switching modules a switching field in the form of a rectangular parallelepiped with dimensions l × m × n.
Each of the internal SMs is connected with six other SMs, so that all its terminals are engaged. As to the peripheral SMs, they have free terminal groups to which the other RC elements can be switched. If we leave one of the sides measuring l × m free and connect microprocessor, memory and input-output modules to the others, we obtain a RC of the first rank. The quantitative ratios between the different types of elements are chosen depending on the class of tasks the RC is intended to solve, while the arrangement of the elements in the switching field can be chosen arbitrarily. After constructing another switching field in the form of a parallelepiped measuring l × m₁ × n and connecting the RC of the first rank to it, we obtain a machine of the second rank. This process may continue indefinitely. In the general case, the architecture of such a RC can be defined recursively.
Any RC consists of a switching field constructed in the form of a rectangular parallelepiped. One side of this field carries the external RC terminals, while the other sides are engaged with other RCs (via their external terminals) or with RC initial elements (microprocessors, memory and input-output modules).
A minimal RC, in terms of number of elements, contains two microprocessors, one input-output module and several memory modules, some of which are used for storing microroutines. The internal language of such a RC uses only blocks of sequential form; i.e., the machine operates in the traditional mode. The first microprocessor operates in the administrative mode, the second in the executive mode. Switching modules are unnecessary in this case, for the external architecture agrees with the internal one. Such a RC is equivalent to a powerful calculator. With the storage capacity increased to 25 or 30 modules and the set of operators and standard blocks extended, such a machine is capable of solving fairly complex, though not demanding, tasks with high throughput. Further growth of the RC, however, requires the introduction of the switching field into the machine and an increase in the number of microprocessors.
Recursive machines offer a very effective solution to the reliability problem. By using combined methods of hardware and algorithmic checking, one can dependably detect any errors occurring in the course of computation. The regular structure of the RC makes it fairly simple to localize a faulty element. If a SM fails, the modules adjacent to it set all links going to the faulty SM to the "engaged" position, and from then on any routes are laid so as to bypass this module. Other types of failed elements are switched off similarly (the SM adjacent to the failed element considers this element engaged and no longer deals with it). Switching off failed elements involves no changes in programs and does not affect RC operation. True, the limiting characteristics of the RC worsen slightly as elements are switched off, but this worsening becomes noticeable only after at least 5% to 10% of all the elements have failed. Therefore, large RCs (exceeding 1 000 elements) need practically no repair over their lifetimes.
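Bypassing a failed switching module amounts to routing around a node marked "engaged". A minimal sketch (ours) on a two-dimensional module grid:

```python
# Laying a route through the switching field while treating failed switching
# modules as permanently "engaged": a breadth-first search on the module grid
# that simply never enters a faulty module.
from collections import deque

def route(grid_size, failed, start, goal):
    cols, rows = grid_size
    seen, queue = {start}, deque([[start]])
    while queue:
        path = queue.popleft()
        x, y = path[-1]
        if (x, y) == goal:
            return path
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < cols and 0 <= ny < rows \
                    and (nx, ny) not in failed and (nx, ny) not in seen:
                seen.add((nx, ny))
                queue.append(path + [(nx, ny)])
    return None  # no route: too many neighbouring modules engaged

print(route((4, 4), failed={(1, 1), (2, 1)}, start=(0, 1), goal=(3, 1)))
# [(0, 1), (0, 2), (1, 2), (2, 2), (3, 2), (3, 1)] (a detour around the faults)
```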
7. Conclusion
The five principles of structural and program organization of computers set forth in this paper express one common idea: RECURSIVENESS. In fact, these principles, in more concise form, read as follows: recursive internal machine language, recursive-parallel method of control, recursive memory organization, recursive external machine architecture, recursive internal architecture. So it is natural to apply this term to machines in which these principles are implemented and to call them recursive.
In traditional computers, all tasks are tailored to a rigid machine structure, in truth a Procrustean bed. The "feet" of tasks are cut off, and so are the "heads", owing to the introduction of numerous limitations that sometimes emasculate the very essence of the tasks. The extreme complexity of modern system software is attributable to the same causes.
The recursive philosophy of computer organization allows flexible adaptation of the machine to the structure of the tasks. Recursive machines offer simpler software in many respects, a smaller working internal memory and lower requirements on the external memory.
Recursive machines consist of a great number of simple elements of a small nomenclature, interconnected in a regular way. The same elements are used to construct both simple calculators and supersystems. All this favors mass production of RCs with maximum use of LSI and provides for their economic efficiency.
RCs provide an effective solution to the reliability problem, removing the need to repair the machine over its lifetime (until it becomes obsolete).
The authors hope that discussion of the recursive machine concept will help formulate the main problems in the development of computing technology.
Published in: "Proceedings of IFIP Congress", 1974, p. 66. Reprinted with authors' permission.
V. M. Glushkov, Institute of Cybernetics, Ukrainian Academy of Sciences, Kiev, USSR;
M. B. Ignatyev, Leningrad Institute of Aviation Instrument Making, Leningrad, USSR;
V. A. Myasnikov, Main Department of Computing Engineering and Control Systems State Committee of the USSR Council of Ministers for Science and Technology, Moscow, USSR;
V. A. Torgashev, Leningrad Institute of Aviation Instrument Making, Leningrad, USSR.