Virtual Experimentation
[the place where Art and Science meet]
CMAP (Centre de Mathématiques APpliquées) UMR CNRS 7641, École polytechnique, Institut Polytechnique de Paris, CNRS, France
France Telecom R&D, France
Abstract: Mathematics plays a very particular role in the quest for Knowledge.
Whether mathematicians are involved in invention or discovery, the tools that they develop
have constituted the very basis of Science for more than 2000 years.
Mathematics, which has been considered for too long as a mere language in which to formulate
the laws of nature, is now recognised as a creative thought process
that can be used to discover new entities and phenomena...
Keywords:
Anaglyphs,
Art and Science,
Artistic Creation,
Autostereograms,
Celestial Mechanics,
Computer Graphics,
Deterministic Chaos,
Fractal Geometry,
Intertwinings,
Mathematics,
Natural Phenomenon Synthesis,
Numerical Simulation,
Physics,
Quantum Mechanics,
Rounding-off Errors,
Scientific Visualization,
Sensitivity to Rounding-Off Errors,
Software Engineering,
Stereograms,
Texture Synthesis,
Virtual Experimentation,
Virtual Space-Time Travel.
FOREWORD:
Science is defined
as being a coherent set of knowledge
pertaining to certain categories of objects or
phenomena and, since the dawn of
civilisation, its limits have been the bounds of
what we know as the "Universe". In order to
apprehend it, we have two means at our
disposal: observation and experiment. The
observation of a fact consists of close
scrutiny through our senses, in particular
through visual comprehension.
Unfortunately, as Science progresses, so
direct access to "objects" on the outermost
limits of knowledge becomes the exception
rather than the rule. No physicist, for
example, can boast of having seen an
"elementary" particle with his own eyes. As
for experiment, which is itself defined as a
test carried out to study a phenomenon, it
contains the notion of contact or manipulation
and, for many years, was considered as "the
only process available to us when we wish to
learn about the nature of things..." (Claude
Bernard). Yet here again, the situation has
changed. Although, in centuries past,
certain manipulations were already beyond
our reach, today, at a time when our gaze and
intellect carry us further and further forward,
the situation could scarcely be otherwise.
None of today's cosmologists, for example,
is capable of creating a new Universe...
In this insatiable quest for Knowledge,
mathematics plays a very particular role.
Whether mathematicians are involved in
invention or discovery, the tools that they
develop have, for more than two thousand
years, constituted the very basis of Science.
Mathematics, which has been considered for
too long as a "mere" language in which to
formulate the laws of nature, is now
becoming increasingly recognised as a
creative thought process that, by means of
formal manipulations and strict, logical
reasoning, can be used to anticipate the
existence of entities whose "reality" cannot
then be contested, since it has been
observed... The remarkable success of
General Relativity or of the Standard Model
of elementary particles and their interactions
(without which nothing could exist) confirms
this evolution in thinking.
Yet Scientific Knowledge is
undoubtedly not the only way of
comprehending the infinite wealth of
phenomena in our Universe. Art, the quest
for Beauty and the Indefinable, is another
way forward, a means of progress that is
parallel to the means provided by Science,
and we know that still more possibilities
exist, probably more than we could ever
imagine. Yet it is an undeniable fact that, with
a few exceptions (Leonardo da Vinci being
the best-known), these two paths seldom
cross. Setting aside the frequent lack of
pluridisciplinary knowledge in creators
working within specific domains, there are
very few major works influenced by the
Science of their day and, inversely, very few
scientific theories that make use of the
Harmony provided by the senses (we
exclude, of course, the notion of aesthetics as
expressed in a scientific theory).
Computer science, which has enabled
researchers to make progress to an extent that
was inconceivable just a few years ago, will
soon be giving artists the means of achieving
heights and territories that are as yet
unexplored. Moreover, it will reconcile them
and place them fairly and squarely on the road
to the invention (or discovery?) of new
realities that are at present slumbering in the
memories of our machines...
1-THE NOTION OF VIRTUAL EXPERIMENTATION:
In addition to the experimentation that we shall qualify as
laboratory, natural, or simply real, which is
performed either a priori (as in
the case of natural phenomena) or a posteriori
(to verify the predictive power of
mathematical deduction), there exists virtual
experimentation (so-called in preference to
"numerical experimentation" as it highlights
the complementarity of the two approaches).
This second type of experimentation has become
a feasible proposition thanks to the enormous
progress made in the field of computer
science at both software and hardware levels,
progress which John von Neumann had
anticipated and used at the end of the 1940s.
In order to comprehend the real meaning
of the expression "virtual experimentation",
we have to define what we mean by a model.
The state of the system studied by a physicist,
for example, is represented by a set of values
(spatio-temporal coordinates, temperature,
pressure, etc...), while its evolution is
described in terms of a set of equations that
combine the various characteristic values.
Inasmuch as a scientific theory has no
meaning unless it is refutable and no interest
unless it is predictive, the equations in the
model must be manipulated and solved in
order to use their creative power to the full.
Two complementary, and non-exclusive,
approaches are conceivable in this situation.
The formal approach provides the "exact
analytical solution" which is generally
impractical or even impossible to implement,
even in the simplest cases (take as an example
the so-called N-body problem in which N is
strictly greater than 2 (see figures "N-body problem integration (N=2) displaying a perfect Keplerian orbit"
and "N-body problem integration (N=4) with one yellow star and two planets (the red one being very heavy and the blue one and its white satellite being light)")
; a recent "revival" of traditional physics has
shown that even the simplest systems
governed by deterministic equations could
behave in an unpredictable, even chaotic,
manner - see figures "Bidimensional visualization of the Verhulst dynamics -(grey,orange,red) display negative Lyapunov exponents, (yellow,green,blue) display positive Lyapunov exponents-"
and "N-body problem integration (N=4: one star, one heavy planet and one light planet with a satellite) computed on three different computers (the Red one, the Green one and the Blue one: sensitivity to rounding-off errors)"). The second
method, the numerical approach, proceeds by
means of approximations (of the system
being studied and of equations) and provides
results in the form of digits. In both cases, a
computer is now a vital piece of equipment,
both for the formal manipulation of
expressions and equations and for the
purposes of numerical calculation, given the
complexity, one might even say the
"monstrous nature" of the operations
required.
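To make this last point concrete, here is a minimal sketch (in Python, purely illustrative and not part of the original article) of the sensitivity to rounding-off errors evoked above: the Verhulst (logistic) iteration shown in one of the figures is deterministic, yet two computations of the same trajectory at different floating-point precisions diverge completely after a few dozen steps, much as the same N-body integration differs from one computer to another.

```python
# A minimal sketch (illustrative only; the parameters are arbitrary) of the
# sensitivity to rounding-off errors: the Verhulst (logistic) iteration is a
# deterministic equation, yet trajectories computed at two floating-point
# precisions diverge completely after a few dozen steps.
import numpy as np

def verhulst(x0, r, n, dtype):
    """Iterate x <- r*x*(1-x) n times at the requested floating-point precision."""
    x, r, one = dtype(x0), dtype(r), dtype(1.0)
    for _ in range(n):
        x = r * x * (one - x)
    return float(x)

if __name__ == "__main__":
    for n in (10, 25, 50, 100):
        x32 = verhulst(0.5, 3.99, n, np.float32)   # single precision
        x64 = verhulst(0.5, 3.99, n, np.float64)   # double precision
        print(f"n={n:4d}  float32={x32:.6f}  float64={x64:.6f}  |diff|={abs(x32 - x64):.6f}")
```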
Virtual experimentation, then, produces
results in the form of numbers. In general,
the quantity of numbers is such that their
analysis in alpha-numerical form is
unthinkable and even absurd (in 1987, a
report by the National Science Foundation
indicated that more than 90% of the results
produced by super-computers were
irremediably lost because there were no
suitable means of exploiting them). Take the
example of a 2D turbulence simulation code
entered in a computer. In a square field
sampled by one million points, the number of
floating values obtained is of the order of a
billion. In hard copy, this would represent the
equivalent of one thousand telephone
directories! There is, then, only one viable
solution in this situation viz: the production
of synthetic pictures. And in these
circumstances, we can define Virtual
Experimentation pragmatically as the
production of measurements on a
mathematical model contained in the memory
of a computer system and the analysis of
these values by the production of animated,
color synthetic pictures. This approach takes
the researcher into a feedback loop, in which
vision plays a vital role, arguably the role that
formed the basis of scientific curiosity at the
outset.
2-PICTURE SYNTHESIS, A SCIENTIFIC TOOL:
in Man, vision
is the most highly-developed of all the
resources at his disposal for apprehending his
environment, whether in close proximity or at
an almost infinite distance. The eye and the
visual cortex have a very wide passband
which provides "global" perception of
colored shapes spontaneously "leaping out"
of a changing, moving, surprising and
noise-filled background. The idea of using
the eye as the main tool in the analysis of
numerical results is therefore quite natural.
The picture, the synthesis of
experimental results (whether from virtual or
more "traditional" forms of experimentation),
will provide a global representation by
means of colored and moving shapes, rather
than a representation produced by following
the linearity of a text or a series of digits. This
means that elements which are spatially
distant from each other are "juxtaposed" and
"connected" by a string consisting of other
elements presented in a similar fashion (for
example, in a given virtual experiment, all the
particles with a velocity of between V and
V+e will be presented in the form of points of
the same color). In this case, the visual
system will be able to perform its function to
the full. Complex forms will become distinct
and the scientist will be able to ascertain a
hidden order in the equations.
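As a purely illustrative sketch (the data and the color ramp are invented for the example), the velocity-binning idea mentioned above takes only a few lines: every particle whose speed falls in the same interval [V, V+e] is given the same color, so that coherent structures can "leap out" of the picture.

```python
# Hypothetical example: color particles by speed bin so that structures with
# similar velocities stand out as zones of identical color.
import numpy as np

def speed_to_color(speeds, n_bins=8):
    """Map each speed to one of n_bins colors (RGB triplets in [0,1])."""
    v_min, v_max = speeds.min(), speeds.max()
    bins = ((speeds - v_min) / (v_max - v_min + 1e-12) * n_bins).astype(int)
    bins = np.minimum(bins, n_bins - 1)
    t = bins / (n_bins - 1)
    # A simple cold-to-warm ramp: slow particles blue, fast particles red.
    colors = np.stack([t, np.zeros_like(t), 1.0 - t], axis=1)
    return bins, colors

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    velocities = rng.normal(size=(1000, 2))        # 1000 particles, 2D velocities
    speeds = np.linalg.norm(velocities, axis=1)
    bins, colors = speed_to_color(speeds)
    print("particles per speed bin:", np.bincount(bins, minlength=8))
```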
Thus pictures that started as nothing
more than an intermediary between scientist
and model, and as a communications and
teaching tool, will be a vector of discovery
used to see and observe something that no
other "instrument" had been able to show us
before - for example, the dynamics of
elementary particles (see figure "Quark and gluon structure of a nucleon") or the
Universe as a whole (see figure "Artistic view of the Big Bang")-. Used
jointly, the computer, numerical calculation
and picture synthesis enable us to see the
invisible and access the inaccessible.
2.1-A MEANS OF SYNTHESIS:
This is the fundamental application as regards
the problem with which we are faced. It
provides a means of synthesis (in the
etymological meaning of the term) and gives a
more global approach than could be obtained
from the observation of unprocessed results
in numerical form (see figures "Fractal synthesis of mountains with vegetation and stormy clouds",
"Mountains and light cloud dynamics -this sequence being periodical-", "Black and white display of a tridimensional function" and
"Tridimensional display of the dynamics of a linear superposition of 6 eigenstates of the Hydrogen atom (bidimensional computation)").
2.2-A MEANS OF VALIDATION:
The results of numerical simulations and
laboratory tests become easier to compare,
when the representative codes used are the
same (see figure "Bidimensional visualization of a bidimensional turbulent flow -sensitivity to initial conditions-").
2.3-A MEANS OF DEBUGGING:
generally speaking, a model is a complex
"object" both as regards its mathematical
formulation and in respect of its expression in
the form of data. Numerous conceptual and
technical errors can therefore arise at various
stages of development. Often, such errors
become apparent in the form of spatial or
temporal discontinuities which the eye will
pick up immediately. And sensitivity to initial
values or the precision of calculations can be
more easily studied (see figure "N-body problem integration (N=4: one star, one heavy planet and one light planet with a satellite) computed on three different computers (the Red one, the Green one and the Blue one: sensitivity to rounding-off errors)").
2.4-A MEANS OF COMPREHENDING ABSTRACT CONCEPTS:
picture synthesis can provide
a representation (in most cases, an arbitrary
representation) of abstract concepts. This
facility gives mathematics, and in particular
pure mathematics, the status of an
experimental science that it enjoyed in the
beginning (see figures "Tridimensional zoom in on the Mandelbrot set",
"A shell (Jeener surface 1) in motion", "The normal field of a shell (Jeener surface 1)" and
"2.pi rotation about Y and Z axes of a quaternionic Julia set -tridimensional cross-sections-")...
2.5-A MEANS OF MANIPULATING INACCESSIBLE OR INVISIBLE OBJECTS:
picture synthesis provides a means of seeing
phenomena (when described by a valid
model) that no other instrument would show
either because they are too short-lived
-temporal dimension- or because they are too
small -spatial dimension- (see figures "Quark and gluon structure of a nucleon" and
"Tridimensional display of a linear superposition of 6 eigenstates of the Hydrogen atom (tridimensional computation)"). However, it also provides a means of
manipulating phenomena that are inaccessible
in a laboratory, which means that, after
having developed the corresponding model, a
scientist can study, for example, the
dynamics of the Solar System (see figure
"From Pluto to the Sun (non linear scales)").
2.6-A MEANS OF COMMUNICATION AND TEACHING:
all discoveries and
theories must be publicised at several levels
i.e. professional (publication and
communication), educational and, finally, to
the public at large (simplification and
popularization). In all three cases, pictures are
a vital means of communication and a
quasi-universal support medium of an
importance that is now fully accepted. Thanks
to the dynamics that the computer can
generate, it can be used to pinpoint one detail
(see figure "Isotropic random walk of 64 particles on a bidimensional square lattice"), illustrate an abstract concept
(see figure "Tridimensional zoom in on the Mandelbrot set") or assist in memorization
(see figure "Tridimensional display of 36 eigenstates of the Hydrogen atom (bidimensional computation)").
Pictures do not replace written text; they are
an essential complement to it as can be seen in
everyday life.
2.7-A MEANS OF DISCOVERY:
One hundred years ago, Heinrich Hertz stated
that, "There is no escaping the feeling that
these mathematical formulations (the model)
have a life of their own, that they are more
knowledgeable than those who discovered
them and that we can extract more science
from them than they contained originally".
The visual synthesis of results can engender
and highlight forms whose structure and
regularity may suggest an essential pointer to
a scientist working on a subjacent theory (see
figure "N-body problem integration (N=4: one star, one heavy planet and one light planet with a satellite) computed on three different computers (the Red one, the Green one and the Blue one: sensitivity to rounding-off errors)"). Thus, even in the most abstract
fields (in particular the field of so-called
"pure" mathematics), it will be possible, by
associating pictures and computers, to carry
out experiments and make new discoveries
(see figure "Tridimensional zoom in on the Mandelbrot set").
Just as the microscope showed us the
"infinitely small" and the telescope the
"infinitely large", so the computer, a
revolutionary instrument (or, to be more
precise, a meta-instrument dubbed here the
Virtual Space-Time Travel Machine) will
enable us to regard our world in a new, richer
way. It may well be that a new Copernican
revolution is on its way, thanks to the
computer...
2.8-A MEANS OF CREATIVITY:
last but not least, picture synthesis is a
means of conciliating (or reconciling) Art and
Science (see figure "The Lorenz attractor"). The techniques
implemented, in particular in the production
of realistic documents, i.e. documents that
mislead man's senses (see figures "Fractal synthesis of mountains with vegetation and clouds" and
"Mountains at sunrise"), now make extensive use of scientific
knowledge (mathematics, physics,
hierarchical systems, etc...). In addition,
scientific images can only be optimal (in the
communication sense of the term) if they
comply with a certain number of criteria
(mainly as regards chromatic harmonies,
proportions and relative relationships) which
have been known to traditional artists for
many years. These points will be discussed at
length in Chapter 4.
3-PICTURE SYNTHESIS, AN ARTISTIC TOOL:
for the first
time since the dawn of mankind, computer
science makes it possible to draw up an
objective description before and during the
creative process in the field of plastic arts
(and this is only one example among many).
The description is given in a language that,
theoretically at least, is devoid of any
syntactical and semantic ambiguity. Except in
the case of "simplistic" applications (and the
term is by no means pejorative for the
perspectives are vast), the creation of a work
of art is no longer gestual but conceptual: the
work must be described using a
tool-program. Paraphrasing McLuhan, we
can perhaps say that "the work of art is the
tool". Indeed, the tool-program describes the
work of art objectively and its execution
within a computer brings it to "life". In
addition, in the same way as the compass
contains an infinite number of circles, so a
program can contain an infinite number of
"adjacent" works, or works of the same type
which can be regrouped under the generic
name of potential works (see figures "Fractal synthesis of mountains with vegetation and stormy clouds",
"Mountains at sunrise"
and "Cauliflowers, seaweeds, shells,... with fog"). Finally, a work of art created in
this way, can, like a plant, animal or
community, live and develop more or less
independently once its behaviour pattern has
been thought through and described by the
artist. This gives us the notion of a dynamic
work of art. It is, of course, possible to
object that this restricts artistic spontaneity but
the answer to this criticism is, firstly, that this
is only a new possibility and that it in no way
excludes more traditional methods of creation
and, secondly, that this type of formalisation
was already put forward many years ago in
respect of music for which a language was
developed (that is remarkably simple
compared to the subtleties of some
programming languages, although this
comment should not be taken as qualitative
with regard to the musical works written in
this way). Thanks to this language, great
composers from right across the musical
spectrum have created some of the most
wonderful elements in our universal heritage.
This being so, the experimental method,
which is an essential basis of scientific
research, can become one of the bases of
artistic creation. Testing, comparing,
retrieving, transforming, combining,
memorizing, modifying and deleting are just
some of the possibilities offered by
computers. This means that colors and forms
become "pliable" and "plastic". The artist can
alter them, change his mind, return to an
earlier precise point in his creative process or
delete everything... Better still, the "itinerary"
followed by the work of art can be
memorized and rescreened at will. Surely
everybody would enjoy reliving the various
stages in the creation of some of the great
works of the old masters? In our example,
thanks to the infallible memory of the
computer, this can become a dream come
true.
It has to be said, however, that the
techniques used create a certain number of
problems. In particular, they destroy the
uniqueness that is characteristic of traditional
works of art. Any copy (in the computing
sense of the term) is absolutely
indistinguishable from the original. But what
is an original and where is the original? Is it
the program (see "the work of art is the tool")
or the set of digits that code the pixels in these
new mosaics, or is it the picture that is
displayed on the monitor? Is the monitor seen
as a support medium with the same degree of
inherent majesty as Carrara marble? The
answer is definitely negative at the present
time... Finally, generally speaking, these new
works are, and will be, the result of
collaboration (between artists and computer
scientists for example). Who, then, is the
author? Who should receive the royalties?
Should all those involved more or less
explicitly be named?
The last problem discussed at this point
is without doubt the most difficult to resolve
since the response cannot be backed up by
good logical argument inasmuch as it is
metaphysical in nature. But having put
forward the usual "proviso", let us ask
whether a computer (or one of its successors)
will ever be capable of creating a work of art
such as man creates today? Without wishing
to be arrogant and presuming to answer this
question, it nevertheless begs a number of
comments. The first is that, when we say
"this is an artistic creation", we may be
expressing no more than our surprise at the
result of a complex combination of elements.
The second comment is that, if we look at the
problem from the teleonomic point of view
(i.e. everything has a reason to exist and
tends towards the same objective, viz.
comprehension of the Universe), there is no
valid reason to believe that intelligence is of
necessity restricted to the single human form
we know at the present time. In particular,
"mechanical" intelligence may appear
(implying that this is the result of a creative
process achieved by another form of
intelligence). In the same way that man
succeeded in flying after just a few hundred
years, and in almost reaching the stars, while
"natural" evolution took billions of years to
accomplish the same objective (in a different
manner and without presuming to achieve the
same degree of performance), it is quite
possible to envisage the development of
"artificial" intelligences (without any
necessary link with the techniques of the
same name) which might achieve the result
(i.e. Knowledge) using other channels than
our own (and of which we know practically
nothing, in particular as regards a true
definition of thought or intelligence?). If this
were to be the case, it is more than likely that
communication will be difficult, or even
impossible, with these new entities and we
may then come face to face with something
that is to be our final invention... These two
comments, without wishing to take sides in
this great debate nor lower man to the status
of a mere machine, invite us to retain our
modesty and yet, in spite of everything, to
explore these fascinating paths wending their
way between Art and Science, the Rational
and the Sensitive...
4-PICTURE SYNTHESIS, A NON-NEUTRAL TOOL:
The construction and synthesis of a picture on the
basis of "abstract" numerical results is, in
itself, a (post-)processing operation, and it is
evident that such an operation cannot be
totally neutral. It has its own limits. One of
the most important comes from the fact that
the display medium is bidimensional (sheet of
paper or television screen for example).
Whatever the "objects" being represented,
projection will be necessary if their dimension
is greater than 2. This implies the use of
various techniques in order to reconstitute the
missing dimensions (perspective, elimination
of hidden surfaces and edges, shading,
anaglyph, stereoscopy, autostereograms -see
figure F11-, etc...). Motion must also be
introduced into the picture because, firstly,
most models are dynamic and, secondly,
motion is an essential element in our
comprehension of complex objects
(tridimensional perception of our environment
uses stereoscopic vision, head movements,
and the acquired knowledge that is vital in
such a situation but which, in this particular
context, we often lack). It is obvious that the
transformations to which the results are
subjected are often irreversible and are the
source of artefacts (e.g. aliasing, optical
illusions, etc...) which have to be mastered.
The picture obtained in this way, like any
other experimental result, must therefore be
regarded with a degree of circumspection and
any "surprise" must be analysed completely
and satisfactorily before one can hope for a
nomination for a Nobel Prize or other such
honors...
Cartography has already highlighted the
problem of modes of representation. Not all
the required data can be included in a single
map and, moreover, it is difficult to know
how to represent selected data in a pertinent
manner. In the field of numerical
experimentation, semiology is in the very
early stages of development. It will have to
take account of our cultural codes (in
particular as regards color and perspective)
and underline the fact that aesthetics and the
informative are not synonymous!
4.1-DEFINING THE SYNTHESIS OF NUMERICAL RESULTS:
4.1.1-PICTURE SYNTHESIS:
In its generally-accepted form, synthesis is
defined as being an assembly process in
which elements pertaining to a given "object"
are gathered into a coherent whole, thereby
providing a general overview. In our context,
the object is the system being measured or
simulated and the elements are the results
obtained (by measurement or simulation).
Synthesis will therefore be an application that
can be used to pass from the space of results
to the space of display.
4.1.2-THE TEMPORAL FACTOR:
the dimensions of the spaces of results and
display include the time factor (except in
certain specific cases such as static problems
or the problem of representation of
"mathematical" objects, for example). We
shall start from a hypothesis which, although
it is not systematically realised at present, can
be approximated or simulated i.e. that the
picture display system is capable of
producing dynamic displays. Working from
this basis, it becomes clear that "three
different time factors" will be useful:
- Tp: the inherent model time, i.e. the
variable t which is included in equations,
- Tm: the time corresponding to an
observer movement around the "objects"
represented (as we shall see in
paragraph 4.2.1,
this is enormously helpful during the
observation of complex systems with a spatial
dimension greater than 2), and
- Td: the time that could be used to
represent a dimension which cannot
otherwise be displayed. For example, and for
the purposes of simplification, the graph of a
function y=f(x) can be represented either in
the traditional manner within a cartesian
coordinate system in which a curve shows the
value of the ordinate y for every value of the
abscissa x, or in a less conventional manner
by a mobile point on an axis for which the
instantaneous abscissa has the value f(t).
It quickly becomes obvious that these
three different times can be used
simultaneously and it is clear, as we shall see
later, that a number of precautions must be
taken inasmuch as all three time factors refer
to the same physical time (see figure "Tridimensional display of the dynamics of a linear superposition of 6 eigenstates of the Hydrogen atom (bidimensional computation)").
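As a tiny, purely illustrative sketch (the function f and the sampling are arbitrary choices), the third time factor Td can be read as follows: the graph of y=f(x) is either drawn all at once as a curve, or unfolded in display time as a single moving point whose instantaneous abscissa is f(t).

```python
# Illustrative example of display time Td replacing one spatial dimension.
import math

def f(t):
    return math.sin(t)                             # any function of interest

# Conventional, static representation: the whole curve drawn at once.
curve = [(t / 10.0, f(t / 10.0)) for t in range(63)]

# "Td" representation: one position per animation frame; a real display
# system would move a point to abscissa f(t) at every frame.
frames = [f(t / 10.0) for t in range(63)]

print(len(curve), "points drawn at once versus", len(frames), "frames shown in time")
```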
4.1.3-SPACES OF RESULTS:
spaces in simulated systems, whether
"physical" or "abstract", are generally subsets
of R^n as regards the mathematical formulation
and subsets E of Q^n as regards computer
programming (real numbers being
approximated by "almost" rational numbers,
using floating point representation, whose use
raises a number of well-known problems).
The results themselves will
generally be considered as functions defined
in each point of E. Depending on the
situation, they may be:
- scalar, i.e. having a rational value at
each point (e.g. a pressure field -see figure
V14-),
- "multi-scalar", i.e. having several
rational values of different types at each point
(e.g. a pressure field and a temperature field),
- vectorial, i.e. having a vector at each
point (e.g. a motion field -see figure F3-).
Finally, it is possible to imagine more
"exotic" functions (e.g. by "overlaying" a
curl field with a velocity field).
In fact, it is clear that the above list is far
from complete and is given solely in order to
show the complexity of the problem. It is
easy to convince oneself, as we shall illustrate
below, that scalar functions in one- and
bidimensional spaces are the easiest to
represent, and are highly instructive, leading
the way to a wide range of possibilities.
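The following sketch (a hypothetical data layout, chosen only to illustrate the list above) shows how scalar, "multi-scalar" and vectorial results can all be viewed as functions defined at every point of the same sampled grid E:

```python
# Hypothetical layout of results defined on a sampled subset E of Q^2.
import numpy as np

nx, ny = 64, 64                                    # the sampled grid E

pressure    = np.zeros((nx, ny))                   # scalar: one value per point
temperature = np.zeros((nx, ny))                   # a second scalar field
multi_scalar = {"pressure": pressure,              # "multi-scalar": several scalar
                "temperature": temperature}        # values of different types
velocity    = np.zeros((nx, ny, 2))                # vectorial: a 2D vector per point

# A more "exotic" combination, e.g. overlaying the curl of the velocity field:
curl = (np.gradient(velocity[..., 1], axis=0)
        - np.gradient(velocity[..., 0], axis=1))   # scalar curl of a 2D field

print(pressure.shape, velocity.shape, curl.shape)
```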
4.1.4-SPACE OF DISPLAY:
in this case, the situation is simpler. In a
realistic, pragmatic context, far removed from
all hypothetical resources (real-time
numerical holograms, for example...), the
only way of displaying information is by
means of a "video" system. In simple terms,
it consists of four basic components:
- an "interface" linking the simulation
resources (in this context, the word
"interface" has the widest possible meaning);
- a picture memory containing at least as
many words (of one or more bytes) as there
are points to be represented;
- a video generator (see
"computing illusions" in paragraph 4.3.3.4)
which provides the correspondence between
numerical values and colors and which
produces television signals;
- and, finally, a monitor which will
display the pictures that stimulate our visual
senses (reproduction resources such as a
camera, VCR, videodisk, etc... can be added).
Thus a display system brings us back to
a bidimensional scalar field which is
memorized in the picture memory in the form
of a matrix, each of its elements appearing in
the form of a colored pixel. It is then easy to
understand how the visual synthesis of
results usually reduces the quantity (and
quality) of the data contained in the results,
and to what extent visual synthesis (except in
a few cases such as 2D scalar fields, although
this statement must be tempered for reasons
relating to the choice of representation modes
as we shall see in
paragraph 4.2.5)
can present only one point of view at a given
moment in time (this remark does not ignore
the notion of "windowing" but underlines the
bidimensional aspect of the display medium).
4.2-OPERATIONS REQUIRED FOR THE SYNTHESIS OF NUMERICAL RESULTS:
let us now look at the most common operations used to
pass from the space of results to the space of
display.
4.2.1-PROJECTION:
this operation
is used to pass from the n dimensions of the
space of results to the two dimensions of a
plane (e.g. the space of display).
Unfortunately, it does not provide an
overview of the results and it is there that the
inherent movement of the observer according
to time Tm can be very useful, on condition
that times Tp and Td are frozen. The
development of techniques of the flight
simulator type, although they are constricting
(inasmuch as they demand the computation of
pictures in real time, i.e. in a fraction of a
second), is to be highly recommended and
will be looked at in detail below. In particular
when applied to objects that are, at present,
unknown to us, for example, tridimensional
fractal aggregates (see figure "Black and white display of a tridimensional function") or certain
works of art, they would render apprehension
possible or easier.
Projection is mandatory when the
dimension n is greater than 2 but it has to be
said that several such operations might be
used in succession. For example, a
tridimensional surface could be projected onto
the three planes of a cartesian coordinate
system, which would itself have to be
projected onto the display space plane (or
monitor).
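As an illustration (a standard pinhole perspective projection, assumed here rather than taken from the text), the projection operation reduces to a few lines; moving the observer (time Tm) simply amounts to re-running it with other parameters:

```python
# Illustrative perspective projection from 3 dimensions to the display plane.
import numpy as np

def project(points_3d, focal=1.0, observer_z=5.0):
    """Project Nx3 points onto a plane, for an observer on the z axis."""
    p = points_3d.astype(float).copy()
    depth = np.maximum(observer_z - p[:, 2], 1e-9)  # distance from the observer
    x = focal * p[:, 0] / depth
    y = focal * p[:, 1] / depth
    return np.stack([x, y], axis=1)

if __name__ == "__main__":
    # Project the 8 corners of a unit cube; changing 'observer_z' or 'focal'
    # (or rotating the points beforehand) corresponds to the observer moving.
    cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)])
    print(project(cube))
```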
4.2.2-TRANSFORMATIONS:
transformations concern the manipulation of
results without changing the dimension of the
spaces. They could be used, for example, to
reduce the dynamics of a field or, on the
contrary, to partially enlarge the dynamics in
certain important or interesting areas.
It will be possible to differentiate
between the operations that transform the
space of results (e.g. translations, rotations,
similarities, etc...) and those which transform
the results themselves (e.g. the changes in
"dynamics" mentioned above).
In all cases, and this is important despite
its general character, the picture produced in
this way must contain data recalling the type
of non trivial operations performed on the
results!
4.2.3-REDUCTIONS:
The above
operations reduce the results mandatorily and
implicitly; there are others that act explicitly,
for example averaging, thresholding and
filtering. The latter is particularly useful in
extracting the main structures from a field
(see figure "The simple case of a bidimensional scalar (fractal) field presented in four widely differing color palettes", top right).
Finally, cutting is
used to extract a sub-space of results, e.g. a
plane, and will be useful in the "internal"
comprehension of spaces with a dimension
greater than 2.
4.2.4-THE CHAIN OF PRIMARY OPERATIONS:
this provides the changeover from the space of
"unprocessed" or "primary" results (i.e.
results provided directly by simulation) to the
space of bidimensional "secondary" results.
4.2.5-THE CHAIN OF SECONDARY OPERATIONS:
this final chain is used to produce the picture, and
consists of the selection of representation
modes, in particular the color palette. This
final operation, which is often considered as
no more than a puerile form of coloring, is
frequently neglected and carried out in an
anarchic fashion, yet it is a vital part of the
process as a whole and it requires the
definition of, and compliance with, a certain
methodology.
4.2.5.1-The problem of color selection:
without contesting the notion of
color and RGB additive synthesis as used at
the present time in television (although it must
be remembered that it is based on the
reproduction of all "subjective" colors, or
colored visual sensation, with the use of three
basic colors i.e. Red, Green, and Blue), it
should be observed that, generally speaking,
there are no specific colors associated
intrinsically with a given space of results.
With the exception of so-called "realistic"
picture synthesis, coloring will be an arbitrary
process (for example: "what color is a
pressure field or a quark?"). In spite of this,
two general remarks can be made on the basis
of correlations that the human mind
comprehends more or less unconsciously and
that are based as much on innate knowledge
as on acquired information:
- R1: we associate "cold" colors (green,
blue, etc...) with low, negative values and
"warm" colors (red, yellow, etc...)
with strong,
positive, or even dangerous values.
- R2: we see a correlation between scales
of increasing luminance (levels of gray
between black and white) and sets of
numerical values that progress in the same
way.
These two observations may not provide
an automatic definition of the colors required,
but they provide a guideline for the selection
process and avoid a number of frequent
errors. It is, for example, fairly common to
see pictures published in scientific journals in
which an incoherent code confusing the
"aesthetic" with the "informative" is used for
arbitrary associations such as:
- values 0 - 9 yellow (warm, very light color)
- values 10 - 19 blue (cold, very dark color)
- values 20 - 29 red (warm, dark color)
- values 30 - 39 cyan (cold, light color)
- etc...
A picture using this type of incoherent
order cannot be "read" without a color
look-up table. Moreover, even with the table,
the picture is difficult to read because the code
contradicts the two "cultural" observations
indicated above (R1 and R2). The mind
switches between two possible interpretations
(e.g. values 0 - 9 are low, yet they are coded
using yellow which is naturally associated
with high values).
This example also reminds us that there
is no one natural order between colors but
rather a number of orders, in particular the
order of colors in the rainbow and in
television (viz. black, blue, red, magenta,
green, cyan, yellow, white, in which what
we wrongly term "colors", such as white, are
classified in increasing order of luminance).
This being so, we shall distinguish
between:
- monochromatic palettes (cf. R2) which
are useful for "single-tone" scalar fields,
- and polychromatic palettes (cf. R1)
which are required when distinguishing
between various zones, for example in scalar
fields (the zones represent positive and
negative values, values between two given
thresholds, etc...). Within a given color,
luminance can vary monotonically,
periodically, or in peaks (a small sketch follows below).
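As a concrete, purely illustrative sketch (the palettes below are assumptions, not those used for the figures), both families can be built as 256-entry tables that respect observations R1 and R2:

```python
# Illustrative construction of monochromatic and polychromatic palettes.
import numpy as np

def monochromatic_palette(n=256):
    """Gray ramp: luminance grows monotonically with the coded value (R2)."""
    t = np.linspace(0.0, 1.0, n)
    return np.stack([t, t, t], axis=1)             # (R, G, B) in [0, 1]

def polychromatic_palette(n=256):
    """Cold (blue) for low values, warm (red) for high values (R1),
    with luminance still increasing overall (R2)."""
    t = np.linspace(0.0, 1.0, n)
    r = t                                          # warm component grows
    g = 0.2 + 0.6 * t                              # keeps luminance increasing
    b = 1.0 - t                                    # cold component decreases
    return np.stack([r, g, b], axis=1)

if __name__ == "__main__":
    mono, poly = monochromatic_palette(), polychromatic_palette()
    print(mono[[0, 128, 255]])                     # dark gray -> light gray
    print(poly[[0, 128, 255]])                     # blue -> gray -> red
```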
Let us give a simple example of this, a
2D fractal field produced by research into
natural phenomenon synthesis. In this case,
the space of results bears a strong similarity
to the space of display and, therefore, to the
picture. Each pixel in the picture has a
corresponding point in the field (defined on a
780x575 grid) and the range of possible field
values is mapped onto the [0,255] range of
utilizable color indices. Beneath each of the pictures
in figure "The simple case of a bidimensional scalar (fractal) field presented in four widely differing color palettes", which
all illustrate the same
field, there is a color scale indicating the color
associated with [0,255], if read from left to
right. This type of indication must be
systematically included with every picture in
order to define or recall the code used.
On figure "The simple case of a bidimensional scalar (fractal) field presented in four widely differing color palettes" (top left), a
monochromatic palette is used: the high
negative values are represented in dark
luminance, while the high positive values are
represented in light luminance. On the same figure
(top right), an "all or nothing" two-colored
palette is used (note that the warm color,
yellow, has been linked to negative values
while the cold color, blue, is linked to
positive values, in order to create the
"uncomfortable sensation" described above,
see observation R1). Although both pictures
concern the same field, the visual impression
is very different and only the main spatial
structures stand out. In this case,
low-frequency filtering is in operation. On
figure "The simple case of a bidimensional scalar (fractal) field presented in four widely differing color palettes" (bottom left),
the previous palette
is periodical, and the high spatial frequencies
become obvious. Finally, in the same figure
(bottom right), a polychromatic palette is
used, to comply with the two basic
observations R1 and R2:
- increasing luminance (varying in three
peaks) is used to show the progression of the
numerical values (R2),
- green (a cold color) is linked to negative
values (R1)
- red (a warm color) is linked to positive
values (R1)
- gray (a neutral "color") restricts the
visual conflicts between red and green (as
defined below in
paragraph 4.3.3.3),
- finally, a yellow line marks out the zero
value level.
As predicted above, shapes stand out
from the pictures spontaneously, although
algorithmically their automatic extraction from
the field would have to be anticipated a priori
and this would certainly be a fastidious (or, in
most cases, impossible) operation, difficult to
communicate and, most importantly, far from
complete, while the mind of the observer,
which is always in a state of expectation, will
react to the unexpected and any surprise.
4.2.5.2-Various representation modes:
Coloring, though, is only the
simplest of the representation modes,
associating colors with numerical values. As
shown in figure "Four different tridimensional visualizations of the same bidimensional scalar (fractal) field",
one or more spatial
dimensions can code one or more numerical
values. On the four corresponding pictures,
by playing on lighting and the angle of
observation, the values of the field presented
in figure "The simple case of a bidimensional scalar (fractal) field presented in four widely differing color palettes" is made to correspond with the
"orthogonal height" of the field, which then
defines a 3D surface. The highest points have
the highest positive value, while the lowest
points have the lowest negative value.
This mode may appear superfluous, but
while it is difficult to compare the numerical
values of the field on two distant pixels using
only color and luminance (there is no absolute
reference, as shown in
paragraph 4.3.3.2),
it is easy to make the comparison by means of
altitude. Moreover, the surface defined in this
way could be colored using a second field
and the conventional "mapping" technique
(see figure "Wavelet transform of a bidimensional fractal field"), which makes it easier to
obtain correspondences and determine
correlations. Finally, "spatial" defects that are
imperceptible in the original field could then
be amplified.
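A brief sketch of this representation mode (using matplotlib as one possible display system; both fields are invented for the example): the first scalar field drives the "orthogonal height" of a surface, while a second field is "mapped" onto it as color.

```python
# Illustrative height-field display: one field as altitude, another as color.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

nx, ny = 128, 128
x, y = np.meshgrid(np.linspace(-3, 3, nx), np.linspace(-3, 3, ny))
height_field = np.sin(x) * np.cos(y)               # the field being studied
color_field = np.exp(-(x**2 + y**2) / 4.0)         # a second field, "mapped" on top

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(x, y, height_field,
                facecolors=cm.viridis(color_field), # color codes the second field
                rstride=2, cstride=2, linewidth=0)
plt.show()
```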
This simple 2D case, then, shows that
on the one hand there is no single "natural"
solution to the problem of display and that,
on the other, precautions are required in order
to avoid errors and incoherences (see
observations R1 and R2). When traveling in
higher dimensions, it is obvious the problem
is worse (see figure "Different modes of representation of a tridimensional cross-section of a Quaternionic Julia set").
4.2.5.3-Dynamic display:
finally,
the dynamics of a model have to be followed
through, for two reasons:
- Tp is an intrinsic part of the model,
- certain (temporal and, in some cases,
spatial) correlations and, for example,
"spatial" pairings between neighbouring
structures (for example, positive and negative
value subsets) only appear during motion.
4.3-THE DANGERS:
if, as we
have seen, display is the only reasonable
method of quickly and pertinently analysing
the results of numerical simulations and
measurements, it carries with it certain
dangers of which any potential user must be
aware:
4.3.1-A PICTURE IS ONLY A SINGLE VIEWPOINT:
yet in general several points of view are required
(see figures "The simple case of a bidimensional scalar (fractal) field presented in four widely differing color palettes" and
"Four different tridimensional visualizations of the same bidimensional scalar (fractal) field"), along with
various representation modes and,
consequently, various color palettes, if one is
to apprehend the complementary aspects of
the system being modelled (this has overtones
of the wave/particle duality found in quantum
mechanics). Unfortunately, the useful modes
are not usually known before experimentation
begins and, because of this, it is necessary to
collect results from a large number of
experiments (on the picture itself this time).
In this respect, the contribution of a specialist
expert system will be appreciated.
4.3.2-TRANSFORMATIONS:
whether they concern primary or secondary
results, these transformations are not neutral.
In particular, they may weaken essential
elements and, inversely, strengthen
insignificant details (see figure "Detail filtering and enhancing with colors"). Yet
again, we are reminded that a critical eye and
a degree of circumspection are absolutely vital
for users. And in this case, as in others, the
simultaneous use of several complementary
modes of representation will often resolve
doubts or, on the other hand, call apparent
certainties into question.
4.3.3-ARTEFACTS:
they are introduced by, and particularly linked to:
- the digital character (mosaic) of the
synthetic picture (common problem of
aliasing or a spatial and/or temporal sampling
defect caused by the presence of excessively
high frequencies),
- the absence of absolute references for
colors and luminance,
- optical illusions (Mach bands,
simultaneous contrast, etc...),
- and, last but not least, the capacities of
the machine itself.
In this respect, the work carried out by
artists (especially that of Itten) should be
consulted without any false sense of shame
for it often contains useful advice with regard
to chromatic balance and proportions (for
example, the use of the golden rectangle for
the format of "windows"). As we shall see, a
picture is not a neutral tool. It is evident that
there is no such thing as a neutral tool, yet
unfortunately, in this context, picture
synthesis is considered to be just that, and the
use to which it is put often confuses the
aesthetic and the informative. Let us
remember that the picture produced in this
way is, in this context at least, usually
arbitrary, and that in fact, using the same set
of data, a large number of very different
representations could be imagined and built
up. Moreover, the eye (or to be more precise
"the entire visual system") is subject to optical
illusions that are well-known but rather too
often forgotten. There are four categories of
illusion, each of them responsible for a failure
to appreciate correctly what is being seen:
4.3.3.1-Geometric illusions:
They cause geometric deformations in the
perceived shapes. For example, the well-known
Zöllner illusion introduces waves on
lines that are objectively straight and parallel.
4.3.3.2-Luminance illusions:
this type of illusion shows quite "clearly" that the
eye has no absolute reference for luminance
(luminance represents the intensity of gray at
a given point, its minimum representing the
intensity of black and the maximum the
intensity of white). This being so, two zones
with identical luminance separated by several
other zones with a very different luminance
will appear to be dissimilar (see figure "2 identical grey squares moving over a grey scale").
It is, therefore, a total "illusion" to attempt to
interpret numerical results quantitatively and
generally if the results are represented by sets
of luminances. Major relative errors might
well be committed but, more seriously,
inversions in the orders of magnitude might be
introduced. Moreover, when the luminance
gradient changes sign, "over-luminous"
bands (the so-called Mach bands) appear.
Lack of knowledge of this phenomenon
might result in erroneous conclusions about a
numerical field.
4.3.3.3-Chrominance illusions:
luminance illusions have already shown that
the visual system functions globally rather
than locally. This being so, the
neighbourhood of a given pixel (or even the
entire picture) affects the perception of the
pixel itself. The same is true of chrominance:
the "subjective" color of a zone is strongly
influenced by the color of adjacent zones.
As to the black-and-white broadcasting
of a color picture, it can occasionally give rise
to rather unpleasant surprises. For example, a
TV color scale seems to have
non-monotonic luminance whereas its black
and white version shows that this is not the
case.
4.3.3.4-"Computing" illusions:
the previous three categories of illusion were
known to physiologists and those interested
in plastic arts long before the arrival of
computers. The fourth and final category,
however, is much more recent and arises
from the notion of false color. Most display
systems in use at the present time provide a
means of coloring and re-coloring a picture.
A digital picture in false colors can be defined
as a matrix of pixels with integer coordinates
X and Y. Each of these pixels contains a
single memorized numerical value L, which
often takes the form of a byte. When video
signals are generated, the matrix is reread.
The L bytes are retrieved in succession and
"colored" before being displayed. The
operation involves the use of a Look Up
Table, which has as many entries as there are
possible values L in each pixel (2^8 = 256 for
pixels coded on one byte). Each of its entries
is programmed and contains a triplet (R,G,B)
that gives the intensities of Red, Green, and
Blue respectively to be applied to every pixel
with the same value L. This means that an
identical picture can be screened differently
depending on the definitions of color loaded
in the "LUT". This is one of the major
dangers, but it is also the least well-known
and the most underestimated. To take a stark
example, which some might find arguable or
even naive but which has the advantage of
being simple and devoid of any physical
consideration (there is, in fact, nothing to see
in this field), we use four different color
palettes, each of which can be used to draw a
number of conclusions (all of them
unfortunately incompatible) in respect of the
subjacent structure. Figure "The same bidimensional scalar field displayed with 4 different color palettes" -bottom left-
apparently shows two characteristics of the
field. Horizontally, it is uniform; vertically, it
increases and decreases. Note in passing that
no quantitative data can be deduced from this
observation. In particular, it is impossible to
define whether the increase (resp. decrease) is
linear, sinusoidal or gaussian. As for figure
"The same bidimensional scalar field displayed with 4 different color palettes" -bottom right-, it
also indicates horizontal
uniformity but this time there are vertical
undulations. Finally, on figure
"The same bidimensional scalar field displayed with 4 different color palettes" -top left
and top right-, there is a notable variation in
the horizontal plane. In these conditions,
which would be the only possible display for
this arbitrary field? The response is obvious:
a single display does not exist, and none of
the displays described above is complete.
Again, it will be possible to criticize this
example, perhaps for its simplistic aspect or
its lack of physical significance. Yet let us
remember that, for a given field, there is not
usually any a priori display mode and,
moreover, everyday experience shows that
this type of usage is made of color (examples
drawn from well-known reputable scientific
publications are available...).
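A minimal sketch of the false-color mechanism described above (the picture content and the two tables are invented for the example): the picture memory stores one value L per pixel, and a programmable Look-Up Table of 256 (R,G,B) triplets decides, only at display time, which color each value receives, so reloading the table recolors the very same picture.

```python
# Illustrative Look-Up Table (LUT) recoloring of a stored picture.
import numpy as np

def apply_lut(picture_memory, lut):
    """picture_memory: 2D array of bytes; lut: (256, 3) array of RGB triplets."""
    return lut[picture_memory]                      # one (R, G, B) per pixel

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    picture = rng.integers(0, 256, size=(575, 780), dtype=np.uint8)

    gray_lut = np.stack([np.arange(256)] * 3, axis=1).astype(np.uint8)
    inverted_lut = gray_lut[::-1]                   # same values read "backwards"

    a = apply_lut(picture, gray_lut)
    b = apply_lut(picture, inverted_lut)
    # The stored bytes never change, yet the two displays suggest opposite
    # structures: the "computing illusion" discussed in the text.
    print(a.shape, bool((a != b).any()))
```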
From the above considerations, it is
possible to draw a number of conclusions.
First and foremost, there is no single a priori
mode of representation; in fact, several
modes, all of them complementary, will be
required in order to avoid interpretation errors
or to apprehend various aspects of the
"objects" represented. The second lesson is
that, contrary to a widely-held belief, display,
in particular, the display of results from
numerical simulations, is an "art form".
Coloring is not a mere child's game; it is a
very difficult operation demanding a "certain
instinctive grasp of plastic arts" and
considerable knowledge of the subjacent
phenomena. As we have seen, when carried
out unthinkingly, it can lead to conclusions
that are completely false.
It is important to remember that an
attractive picture is seldom a good picture (as
far as its informative aspect is concerned).
4.3.4-UNKNOWN OBJECTS:
this problem has already been mentioned (see
figure "Black and white display of a tridimensional function") and is becoming increasingly
common, especially during the study of
fractal objects and chaotic systems. One
possible solution would be a high level of
interactivity at simulation and display level. A
structure could then be explored in real time,
whatever its complexity, just as a pilot
training on a flight simulator can explore an
unknown landscape; clues must be added to
help the understanding of the third dimension
(see figures "Reconstruction of a 3D structure -a cubic lattice-",
"Cauliflowers, seaweeds, shells,... with fog" and
"Autostereogram of a quaternionic Julia set -tridimensional cross-section-"). Moreover, a
large number of tools must be accessible to
the user instantly so that he can follow a new
path without delay. Finally, these same tools
must be suitable for sequencing and
combining as required. This point will be
discussed at length in Chapter 5.
4.3.5-THE QUALITY OF THE DISPLAY SYSTEM:
all these considerations show that a display system
adapted to the analysis and comprehension of
the results of numerical simulations must
have a certain number of qualities, viz:
- a capacity to manipulate n-dimensional
data easily (in general, n<=4 -which includes
the time-);
- good spatial definition: the 512x512
format is a minimum and the problem of
compatibility with standard "video"
equipment as regards recording should also
be looked at. In this context, the use of
High-Definition TeleVision will probably be
very useful;
- good chromatic definition: 256
redefinable colors (in a much larger palette)
are, again, a minimum for use in "false
colors", and operation in "true colors" is vital
in many cases (e.g. transparency management
and "mapping" function). The
palette-definition tools available must be
simple, powerful and interactive;
- high level of interactivity: this is the
only factor that enables a scientist to test,
explore, and experiment without letting his
mind wander while he waits for the computer
to respond (beyond a one-second delay, a
user, whether artist,
engineer, or scientist,
runs the risk of "switching off"
intellectually). This is vital in both simulation
and display systems so that the feedback loop
is not "broken";
- high communication potential: this
provides for effective dialogue between
simulation and display, as well as assisting
user/system exchanges.
This results in the architecture of a
specialist system for virtual experimentation
which will be described in detail in Chapter 5.
4.4-THE NEED FOR GRAPHIC SEMIOLOGY ADAPTED TO VIRTUAL EXPERIMENTATION:
the techniques of picture synthesis are now
sufficiently well-developed to be used
productively on a large scale in scientific
research. However, the bases of graphic
semiology must be laid down so that a
scientist is not blinded or dazzled by an
attractive, but non-informative, aestheticism,
and common codes must be defined as they
were in years gone by for mathematical
notations or cartographic features. Picture
synthesis is one of the links in the chain that
makes up Scientific Research. The chain itself
must not be subjected to stress as a result of
the non-use or ill-use of this new tool.
It has been shown, moreover, that
scientific display was a difficult, even
perilous exercise, which could only be
performed correctly in close cooperation with
the world of Art. The means of selecting
optimal proportions and using the correct
colors are questions which artists have
already addressed. Finally, we said that
Computer-Assisted Artistic Creation could
not operate without scientific tools:
perspective, textures, and modelling of
natural phenomena and living creatures are all
fields that are studied in depth by scientists.
Picture synthesis and data processing in
general bring together and reconcile the two
worlds of the Arts and the Sciences, both of
which are striving to achieve knowledge of
the same object viz. our Universe.
5-DEFINITION OF A DATA PROCESSING SYSTEM FOR VIRTUAL EXPERIMENTATION:
using the above ideas as a starting point, we are faced
with the problem of defining and
implementing a data processing system that
would give form to this concept while, at the
same time, ensuring the durability of the
developments created in this way. Firstly,
however, there must be an inventory of
subjacent ideas which correspond to the main
tendencies currently observed in computing
and which must be accepted without
reservation:
- the existence of standards for software
and hardware,
- the notion of a distributed system based
on the idea of a network (or, preferably,
networks),
- interactivity,
- later generations of parallel computing
systems,
- the production of animated color
pictures,
- possible simulation of certain cognitive
processes.
5.1-HARDWARE STRUCTURE:
The objective is to achieve a delocalized
system in which the tasks are spontaneously
performed on the most suitable and/or the
most available system. To take a simple
example, display and interaction would take
place on a workstation, while the equations
would be resolved on a super-computer in a
manner that was totally transparent to the
user. As far as the architecture is concerned,
a system that is suitable for
virtual experimentation is based on five
"logical" subsets (in the sense that no
hypothesis is put forward here as regards
implementation) that have a high level of
interconnectivity: the simulator (SIM) which
provides the numerical resolution of the
problem and produces the results, the data
base management system (DBM) which
stores the results, the display (DIS) which is
used for a synthetic presentation (in the
etymological meaning of the term) of the
"objects" in "real time" (i.e. without any
noticeable delay), the communicator (COM)
which provides the links with the exterior i.e.
the user and other systems, and, finally, the
user (UTI) (engineer, scientist, or artist) who
is an integral part of the system, like the
observer in quantum mechanics. These five
components form a feedback loop that makes
it possible to implement the concept of Virtual
Experimentation to the full (and to adapt it for
the purposes of Computer-Assisted Artistic
Creation).
-----------
| |
| |
| D B M |
| |
| |
-----------
/|\
|
|
|
\|/
----------- ----------- -----------
| | | | | |
| | | | | |
| S I M |<--------------->| C O M |<--------------->| D I S |
| | | | | |
| | | | | |
----------- ----------- -----------
/|\ /|\ |
| | |
| | |
| | |
| \|/ |
| ----------- |
| | | |
| F E E D - | | - B A C K |
-----------------------| U T I |<----------------------
| |
| |
-----------
The simulator SIM provides the
numerical resolution of the problem and
produces the unprocessed -or raw- results
(e.g. non post-processed numerical values
which are usually represented in the form of
64-bit floating-point numbers that provide the
dynamic range and precision required for scientific
and industrial calculations). Generally
speaking, these results consist of a set of
positional values (explicit or implicit
space/time coordinates, for example in the
case of a regular mesh) and characteristic
values (which provide the solution to the
equations at the given points: for
example, speed, pressure, temperature, curl,
etc...).
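As a purely indicative sketch (the field names are hypothetical and the actual list of characteristic values depends entirely on the equations being solved), such a raw record could be written, in C, as:

typedef struct
        {
         double x,y,z,t;          /* positional values: explicit space/time coordinates     */
         double speed[3];         /* characteristic values: the solution of the equations   */
         double pressure;         /* at that point (speed, pressure, temperature, curl...), */
         double temperature;      /* all stored as 64-bit floating-point numbers ("double") */
         double curl[3];
        } SIM_record;

Only the systematic use of 64-bit values is essential here; everything else is dictated by the problem being simulated.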
The data base management system DBM
stores the "semi-processed" results (given
that 64 bits will frequently be converted to 32
or 8 bits for obvious reasons) for a given
length of time, in accordance, for example,
with an implicit principle of "buffer loop" (in
which the oldest data is overwritten by the most
recent data). Note that the abbreviation DBM
was selected with the aim of introducing into
scientific computations the Data Base notion
that is usually found in other contexts. It may
not have been feasible to anticipate rapid
high-capacity "Data Base" architectures in
previous years, but this is now a real
possibility. Moreover, it is obvious that the
results of numerical computations must be
"stored" in such a way as to be quickly and
"intelligently" retrievable during synthesis by
DIS (speed being a vital factor at interaction
level and the display parameters often being
impossible to anticipate a priori because of the
very definition of the concept of
Experimentation).
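As an illustration, the "buffer loop" itself can be sketched in C as follows (the capacity DBM_LENGTH, the record layout and the function name are assumptions made only for the example; they do not describe the actual implementation):

#define DBM_LENGTH 1024                 /* arbitrary capacity of the loop                    */

typedef struct
        {
         float values[8];               /* semi-processed values, already reduced to 32 bits */
        } DBM_record;

static DBM_record DBM_loop[DBM_LENGTH];
static int        DBM_next = 0;         /* index of the next slot to be reused               */

void DBM_store(const double *raw,int n)
{
 int i;

 for (i=0; (i < n) && (i < 8); i++)
     DBM_loop[DBM_next].values[i] = (float)raw[i];   /* 64-bit -> 32-bit conversion          */
 DBM_next = (DBM_next+1) % DBM_LENGTH;               /* the oldest record is overwritten     */
}

The essential point is the modular arithmetic on DBM_next: the most recent record silently takes the place of the oldest one.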
The display system DIS is used for the
synthetic presentation (in the etymological
sense of the term) of results in real time (or,
at least, within a time scale that is as close as
possible to real time). Experience has shown
that, in this field of picture synthesis, it is
difficult to define the algorithms required for
the various representation modes in advance.
One can even say that every new virtual
experiment is likely to pose new display
problems. This is why, at DIS level, the
architecture selected must be parallel and
programmable. This being so, all the
necessary algorithms in existence or
developed in the future will be definable
inasmuch as the parallelism will compensate
for the lack of highly-specialized circuits. DIS
contains a picture memory which must allow
for several pictures to be accessed
simultaneously so that, for example, one of
them can "mask" another, or so that one can
be "mapped" onto another, or again so that a
comparison can be made between several
pictures. There must be facilities as regards
coloring (i.e. the possibility of working in
"true" and "false" colors) and the production
of signals compatible with present-generation
television (i.e. 50/60 Hz signals, 625/525
interlaced lines, codable and recordable in
PAL/NTSC; it would also be useful to
prepare for the implementation of HDTV, or
High-Definition TeleVision). Finally, it is
essential to present the user with several
complementary "angles of observation" of the
same "object" simultaneously (the notion of
"angle of observation" is very general in this
respect and goes far beyond the mere position
of the observer. In particular it includes, in
this context, the various possible
representation modes). This is why it is vital
that DIS should take on board the notion of
windowing.
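As an illustration of the "false color" mode mentioned above, a deliberately crude look-up can be sketched in C as follows (the blue-to-red palette and the names used are assumptions made only for the example):

typedef struct { unsigned char r,g,b; } DIS_pixel;

DIS_pixel DIS_false_color(double value,double vmin,double vmax)
{
 DIS_pixel p;
 double    x = (value - vmin) / (vmax - vmin);     /* normalization to [0,1]        */

 if (x < 0.0) x = 0.0;                             /* clipping of out-of-range data */
 if (x > 1.0) x = 1.0;
 p.r = (unsigned char)(255.0*x);                   /* "hot" values tend to red      */
 p.g = 0;
 p.b = (unsigned char)(255.0*(1.0 - x));           /* "cold" values tend to blue    */
 return(p);
}

In practice, DIS would have to offer a whole family of such palettes, selectable interactively.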
The communicator COM provides the
links between the system components (not
forgetting the"outside world"). Interaction
with the user UTI must be as natural and
simple as possible at hardware level. Because
of this, a user must have at his disposal at
least a keyboard, a mouse, an interaction
screen and a display monitor (these displays
are usually "simulated" on the same physical
support given that the notion of windowing
necessarily exists), and reprographic
resources (hard copy, photographic cameras,
video, etc...). As regards interaction between
components (and between systems), it is vital
to provide every possible means of rapid
communication for two purposes viz.
inter-user communication (message handling,
file exchange, etc...) and, of course,
inter-system communication. The use of local
area networks (ETHERNET, FDDI, ATM,
etc...) and wide area networks (ATM, etc...)
is therefore a sine qua non condition for the
useful development of the concept of virtual
experimentation.
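As a purely indicative sketch, such an inter-system dialogue could be opened through the standard UNIX socket interface (the function name COM_connect is hypothetical and error handling is reduced to the strict minimum):

#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int COM_connect(const char *address,int port)       /* e.g. ("192.0.2.1",6000), both arbitrary */
{
 struct sockaddr_in remote;
 int                s = socket(AF_INET,SOCK_STREAM,0);

 if (s < 0) return(-1);
 memset(&remote,0,sizeof(remote));
 remote.sin_family = AF_INET;
 remote.sin_port   = htons(port);
 if (inet_pton(AF_INET,address,&remote.sin_addr) != 1)
    {
     close(s);
     return(-1);
    }
 if (connect(s,(struct sockaddr *)&remote,sizeof(remote)) < 0)
    {
     close(s);
     return(-1);
    }
 return(s);                                          /* descriptor then used by COM            */
}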
5.2-SOFTWARE STRUCTURE:
Having looked at the hardware aspects, let us
now consider the particularities of the
software required. Five major questions have
been addressed:
- can the reliability of software be
enhanced using a straightforward
programming environment?
- can this programming environment be
machine and system independent?
- how to optimize the use of available
resources?
- how to guarantee the reliability of the
results produced?
- how to analyse these results?
Answering these five questions gave birth
to a virtual programming environment that
hides all the discrepancies of the underlying systems used
[More information about the fourth question].
UNIX has been selected as the operating system. It is now
implemented on all the computers (from the
smallest to the largest) and the notions of
process (which open the door to implicit and
explicit parallelism), pipe (which facilitates
the implementation of complex
processing chains in accordance with the
"producer-consumer" principle), or socket
(for inter-computer dialogues) give a realistic
view of the operation and control of the
logical architecture described above. As far as
windowing is concerned, the X-Window
standard would appear to be the optimum
solution since it provides portability and
includes the notion of a shared application by
definition. Unfortunately it is impossible to
find two implementations of the UNIX
system that are strictly identical (in particular,
system-specific bugs constitute a major class of
discrepancies that is too often neglected).
This last point justifies the use of shells that
hide every component used (compilers,
libraries, ...); the whole set of these shells
defines the so-called virtual programming
environment.
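The "producer-consumer" principle evoked above can be sketched with the UNIX notions of process and pipe (a deliberately minimal example written directly in C, independent of the virtual programming environment itself):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
 int    fd[2];
 double value;

 if (pipe(fd) < 0) return(1);                        /* fd[0]=read end, fd[1]=write end   */
 if (fork() == 0)                                    /* the child process is the producer */
    {
     double v = 3.14;
     write(fd[1],&v,sizeof(v));                      /* it "produces" one value...        */
     _exit(0);
    }
 read(fd[0],&value,sizeof(value));                   /* ...the parent "consumes" it       */
 printf("consumed value = %f\n",value);
 return(0);
}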
The shell that "wraps" the C compilers
plays a particular role; it defines a new
programming language (dubbed the K). As a
matter of fact, language remains one of the
thorniest problems. It is clear that, at the
present time, FORTRAN is still the most
widely-used language in scientific research and
numerical simulation. In the educational field,
it is appropriate to teach modern, structured
languages to as many students as possible! It
should also be remembered that the logical
architecture described here is intrinsically
parallel and that this parallelism must be
simultaneously exploitable both explicitly (by
anybody developing a new parallel algorithm)
and implicitly (by anybody who is totally
unconcerned by algorithms). In addition to
this, the introduction of object languages and,
more generally speaking, techniques of
artificial intelligence in the field of numerical
simulation could be more than useful. In
particular:
- the "objects" being simulated could be
processed as objects. This would facilitate the
transition from model to computer creation,
the development of the object, the use of the
parallelism in the architecture and, most
importantly of all, interaction with the
modelled system.
- a new scientific language could be
defined. Its principle would be the same as
the one found in expert systems i.e. a
distinction between knowledge (in this case,
numerical resolution methods) and "concrete"
problems to be solved; then, several methods
of resolution could be tested and compared on
a single identical problem.
- dialogue with the user could be
provided by an expert system. It would
provide a natural (i.e. user-friendly) dialogue,
implicit implementation functions for the
system as a whole (e.g. automatic selection of
representation modes depending on the
current problem) and assistance in result
analysis.
The K language is independent of the
underlying compilers used (currently C) and
it defines a pre-fixed syntax as well as strong
principles for software writing and
documentation:
- simplicity,
- generality,
- harmony,
thus making this process a new Art.
In particular, it promotes the following
structure for each program P:
P=I(n) --+-- I(n-1) --+-- ... --+-- I(2) --+-- I(1)
         |            |         |          +-- I(1)
         |            |         |          +-- ...
         |            |         +-- I(2) --+-- I(1)
         |            |         +-- ...
         |            +-- ...
         +-- I(n-1) --+-- ...
         +-- ...
where I(1) denotes the atomic instructions
defined at the lowest level of K (the general
purpose one) and I(i) more and more specific
instructions (dubbed the molecular ones), the
program P being I(n), the most specific
instruction (for a given problem).
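As an illustration (written directly in C rather than in K, with purely hypothetical instruction names), this layering corresponds to the following kind of structure:

#include <stdio.h>

/* "atomic" instructions -- level I(1): */
static void I1_initialize(void) { printf("initialize\n"); }
static void I1_iterate(void)    { printf("iterate\n");    }
static void I1_display(void)    { printf("display\n");    }

/* a "molecular" instruction -- level I(2) -- built from atomic ones: */
static void I2_simulate(void)
{
 I1_initialize();
 I1_iterate();
}

/* the program P=I(n), the most specific instruction of all: */
int main(void)
{
 I2_simulate();
 I1_display();
 return(0);
}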
CONCLUSION, THE FUTURE:
as far as display itself is
concerned, enormous progress will be seen,
of course, thanks to the increasingly rapid
manipulation of three-dimensional models and
the use of high-definition television
(HDTV) and stereoscopic devices. The notion
of virtual reality, which has been developed
in other fields (driving simulators for
example) and introduced into this context,
will enable researchers, engineers or even
artists to become more closely integrated into
their models, and this in turn will facilitate
comprehension. It should be remembered at
this juncture that, unlike applications of the
CAD type, the "objects" that are studied and
displayed are often far removed from current
understanding, indeed many of them have no
"natural" image at all (pressure, for example).
Representation of some of them may even be
prohibited (in particular in quantum
mechanics) and in this case a scientist must be
given the maximum facilities in order to
understand the results obtained. In this field,
the techniques of artificial intelligence seem to
us likely to be of particular use.
Although it is difficult to imagine, in the
medium and long term, the impact of data
processing and its latest developments
(Artificial Intelligence, parallel computation,
picture synthesis, sound production, optical
disks, multimedia systems, etc...) in familiar
domains such as teaching, it is even more
difficult to project their impact in the field of
scientific research. Science defines our
rational view of the Universe and the place
that Man occupies within it. These views
have undergone major changes over
the centuries, as a result of the genius of
Plato, Copernicus, Newton and Einstein, to
name but a few. At present, their worthy
successors are using tools that were unknown
to their illustrious predecessors but which
will enable them to look even further
forward, thereby opening the door to a new
Copernican revolution or maybe to an
anti-Copernican revolution defining Men and
Computers as the virtual center of the Universe.
Copyright © Jean-François COLONNA, 1997-2024.
Copyright © France Telecom R&D and CMAP (Centre de Mathématiques APpliquées) UMR CNRS 7641 / École polytechnique, Institut Polytechnique de Paris, 1997-2024.