Subjective aspects of applying mathematical modeling of military operations in the work of military command and control bodies. Determining the limits of applicability: why the use of models affects those limits

Educational film, television and video recording have much in common. All of them make it possible to show a phenomenon in dynamics, something that is in principle inaccessible to static screen media. Researchers in the field of technical teaching aids consistently put this feature at the forefront.

Movement in cinema cannot be reduced to the mechanical movement of objects across the screen. In many films on art and architecture, the dynamics are built up from individual static images: it is not the subject itself that changes but the camera position or the scale, or one image is superimposed on another (for example, a photograph superimposed on a diagram). Using the specific capabilities of cinema, many films show manuscripts "come to life", with lines of text appearing from under an invisible (or visible) pen. Thus the dynamics of cinema are also the dynamics of cognition, of thought, of logical construction.

Of great importance are such properties of these teaching aids as slowing down and speeding up the passage of time, transforming space, and making invisible objects visible. The special language of cinema, "spoken" not only by films shot on film stock but also by messages created and transmitted by television or "canned" on videotape, determines the situations in a lesson when the use of cinema (understood in the broad sense) is didactically justified. Thus N.M. Shakhmaev identifies 11 such cases, pointing out that the list is not exhaustive.

1. When studying objects and processes observed with optical and electron microscopes that are not currently available to the school. In this case film materials, shot in special laboratories and accompanied by qualified commentary from a teacher or narrator, are scientifically reliable and can be shown to the whole class.

2. When studying fundamentally invisible objects, such as, for example, elementary particles and the fields surrounding them. Using animation, you can show the model of an object and even its structure. The pedagogical value of such model representations is enormous, because they create in the minds of students certain images of objects and mechanisms of complex phenomena, which facilitates the understanding of educational material.

3. When studying such objects and phenomena that, due to their specific nature, cannot be visible simultaneously to all students in the class. By using special optics and choosing the most advantageous shooting points, these objects can be photographed close-up, cinematically highlighted and explained.

4. When studying rapidly or slowly occurring phenomena. Fast (high-speed) or slow (time-lapse) filming, combined with projection at normal speed, transforms the passage of time and makes these processes observable.

5. When studying processes occurring in places inaccessible to direct observation (volcano crater; underwater world of rivers, seas and oceans; radiation zones; cosmic bodies, etc.). In this case, only cinema and television can provide the teacher with the necessary scientific documentation, which serves as a teaching aid.

6. When studying objects and phenomena observed in regions of the electromagnetic spectrum not directly perceived by the human eye (ultraviolet, infrared and X-ray). Shooting through narrow-band filters on special types of film, as well as filming fluorescent screens, makes it possible to transform an invisible image into a visible one.

7. When explaining fundamental experiments whose staging in the conditions of the educational process is difficult because of the complexity or bulkiness of the installations, the high cost of equipment, the duration of the experiment, etc. Filming such experiments makes it possible not only to demonstrate their progress and results but also to provide the necessary explanations. It is also important that the experiments can be shown from the most favorable point and perspective, which cannot be achieved without cinema.

8. When explaining the structure of complex objects (the structure of human internal organs, the design of machines and mechanisms, the structure of molecules, etc.). In this case, with the help of animation, by gradually filling and transforming the image, you can move from the simplest diagram to a specific design solution.

9. When studying the creativity of writers and poets. Cinema makes it possible not only to reproduce the characteristic features of the era in which the artist lived and worked, but also to show his creative path, the process of birth of a poetic image, his manner of work, and the connection of his creativity with the historical era.

10. When studying historical events. Films based on newsreel material, in addition to their scientific significance, have a tremendous emotional impact on students, which is extremely important for a deep understanding of historical events. In special feature films, thanks to the specific capabilities of cinema, historical episodes from the distant past can be recreated. Historically accurate reproduction of objects of material culture, of the characters of historical figures, of the economy and of everyday life helps students form a real idea of the events they learn about from textbooks and from the teacher's story. History takes on tangible forms and becomes a vivid, emotionally charged fact that enters the student's structure of thought.

11. To solve a large complex of educational problems.

Defining the boundaries of applicability of film, television and video recording involves the danger of two kinds of mistakes. The mistake of unjustifiably expanding the use of these teaching aids in the educational process can be illustrated by the words of a character in the film "Moscow Does Not Believe in Tears": "Soon there will be nothing. There will be only television." Life has shown that books, theater, and cinema have survived, and what matters most of all is direct informational contact between teacher and students.

On the other hand, there is the opposite mistake of unreasonably narrowing the didactic functions of screen-sound teaching aids. This happens when a film, video, or television program is regarded merely as a type of visual aid capable of presenting the material dynamically. That is certainly true, but there is another aspect: in didactic materials presented with a film projector, video recorder, or television, specific learning tasks are solved not only by the technology but also by the expressive means inherent in the particular art form. The on-screen teaching aid therefore takes on clearly visible features of a work of art, even when it is created for a subject in the science and mathematics cycle.

It should be remembered that neither cinema, nor video recording, nor television can by themselves create strong and lasting motives for learning, nor can they replace other means of visualization. An experiment with hydrogen carried out directly in the classroom (the explosion of detonating (oxyhydrogen) gas in a metal tin) is many times more visual than the same experiment demonstrated on the screen.

Review questions:

1. Who was the first to demonstrate moving hand-drawn pictures on the screen to many viewers at the same time?

2. How was T. Edison’s kinetoscope designed?

4. Describe the structure of black and white film.

5. What types of filming are used in film production?

6. What features characterize educational films and videos?

7. List the requirements for the educational film.

8. Into what types can films be divided?

9. What is film printing used for?

10. What types of phonograms are used in the production of films?

1. Modeling creates a model that is simpler than the original. The model contains less irrelevant information than the original and concentrates information on the features needed for the investigation.

What matters to us is that the "trace cast" reflect the features of the sole (tread pattern, wear, damage, etc.) as completely and accurately as possible; other features, such as the color of the material, are of less interest.

The model is simpler than the original: it abstracts from details and particulars and thereby helps solve cognitive problems.

In modeling it is this simplification that accounts for its widespread use (drawing up terrain plans, communication diagrams, schedules).

SIMPLE means accessible, understandable, consisting of a small number of elements and relations.

COMPLEX means, on the contrary, difficult to understand.

Humanity has always tried to bring the complex to the simple and understandable. In mathematics, there is a term “simplifying an expression,” when a cumbersome formula is reduced to a simple one.
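A minimal worked instance of such simplification, reducing a cumbersome expression to a simple one:

```latex
\frac{x^{2}-1}{x-1} \;=\; \frac{(x-1)(x+1)}{x-1} \;=\; x+1, \qquad x \neq 1 .
```

The simplified form is easier to grasp and compute, yet (on its domain) carries the same content as the original, which is exactly the relation between a model and its original.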

Everything ingenious is simple, and everything simple is ingenious.

2. Some types of modeling are characterized by VISUALIZATION.

The visual character of models is connected with sensory perception and with the figurative reflection of objects and phenomena in consciousness. Visual models revive memory and help one grasp the essence of the facts and phenomena under study.

“Plan-schemes” for interrogating witnesses, victims, and accused.

Interrogation of drivers and other participants in road accidents with reconstruction of the traffic situation using special tablets, models, etc.

The investigative action of checking evidence on the spot speaks for itself and is used quite often.

3. Models perform an illustrative function, serving as clear confirmation of the points being proved.

Plans and diagrams are attached to the inspection protocol.

Diagrams of the human body showing the injuries found are attached to the forensic medical examination report.

Comparison photographs are attached to the ballistic examination report.

Photographs of fingerprints, with arrows indicating the matching features, are attached to the fingerprint examination report.

The creation and study of models contribute, first of all, to checking existing information and obtaining new information.

For the investigation of criminal cases, the cognitive, exploratory nature of the research is typical.

This is explained by the fact that the time factor affects the traces of a crime: time contributes to their destruction and concealment, and to the concealment of the crime itself and of the person who committed it. Models and simulations make it possible to reconstruct the events of the crime and its participants.

The main feature of forensic modeling is that this method expresses the laws of the universal connection of objects and phenomena.

Modeling rests on the laws of reflection and universal connection, by virtue of which models and simulations are included in the process of cognition.

This basis in objective laws determines the scientific nature of the method and allows it to be used as a method of proof.

Thus, the simulation results can be used as evidence and form the basis of an indictment or sentence.

Knowledge of causal relationships is of great importance for scientific prediction and for influencing processes so as to change them in the desired direction. No less important is the problem of the relationship between chaos and order; it is key to explaining the mechanisms of self-organization, and we will return to it repeatedly in subsequent chapters. Let us try to understand how such fundamental categories as causality, necessity and chance coexist in the world around us, entering into the most diverse and bizarre combinations.

The relationship between causality and chance

On the one hand, we intuitively understand that all the phenomena we encounter have their own causes, which, however, do not always act unambiguously. Necessity is understood as an even higher level of determination: certain causes under certain conditions must produce certain consequences. On the other hand, both in everyday life and in attempts to discover regularities, we are convinced of the objective existence of chance. How can these seemingly mutually exclusive notions be combined? Where is the place of chance if we assume that everything happens under the influence of definite causes? Although the problem of randomness and probability has not yet found its philosophical solution, we will, for simplicity, understand chance as the influence of a large number of causes external to a given object. That is, when we speak of necessity as absolute determination, we must understand no less clearly that in practice it is most often impossible to fix rigidly all the conditions under which particular processes occur. These conditions (causes) are external to a given object, since the object is always part of a system that embraces it, and that system is part of another, wider system, and so on; that is, there is a hierarchy of systems. For each system, therefore, there is some external system (environment), part of whose impact on the internal (smaller) system cannot be predicted or measured. Any measurement requires an expenditure of energy, and in an attempt to measure all causes (effects) with absolute accuracy these costs can grow so large that, although we would obtain complete information about the causes, the production of entropy would be so great that no useful work could be done any longer.

Measurement problem

The problem of measurement and of the level of observability of systems exists objectively and affects not only the level of cognition but, to a certain extent, the state of the system itself. Moreover, this is true even for thermodynamic macrosystems.

Temperature measurement problem

Relationship between temperature and thermodynamic equilibrium

Let us dwell on the problem of measuring temperature, turning to the excellently written (in the pedagogical sense) book by Academician M.A. Leontovich. Let us begin with the definition of the concept of temperature, which in turn is closely related to the concept of thermodynamic equilibrium and, as M.A. Leontovich notes, makes no sense outside that concept. Let us consider this in a little more detail. By definition, at thermodynamic equilibrium all internal parameters of a system are functions of the external parameters and of the temperature of the system.

Function of external parameters and system energy. Fluctuations

On the other hand, it can be stated that in thermodynamic equilibrium all internal parameters of a system are functions of the external parameters and of the energy of the system. At the same time, the internal parameters are functions of the coordinates and velocities of the molecules. Naturally, we can estimate or measure not their individual values but only their averages over a sufficiently long period of time (assuming, for example, a Gaussian distribution of molecular velocities or energies). These averages are what we take as the values of the internal parameters at thermodynamic equilibrium. All the statements made refer to thermodynamic equilibrium, and outside it they lose meaning, since the laws of the energy distribution of molecules will be different when the system deviates from equilibrium. Deviations from these averages caused by thermal motion are called fluctuations. The theory of these phenomena for thermodynamic equilibrium is given by statistical thermodynamics. At thermodynamic equilibrium fluctuations are small and, in accordance with Boltzmann's order principle and the law of large numbers (see Chapter 4, §1), are mutually compensated. Under strongly nonequilibrium conditions (see Chapter 4, §4) the situation changes radically.
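The smallness of equilibrium fluctuations under the law of large numbers can be illustrated numerically. The sketch below (with illustrative values that are not from the text) draws repeated samples of a Gaussian-distributed molecular "energy" and shows that the relative fluctuation of the sample mean falls roughly as 1/sqrt(N):

```python
import random
import statistics

def mean_energy_fluctuation(n_molecules: int, n_trials: int = 200) -> float:
    """Relative spread (std/mean) of the sample-mean 'energy' over repeated samples."""
    rng = random.Random(0)  # fixed seed for reproducibility
    means = []
    for _ in range(n_trials):
        sample = [rng.gauss(1.0, 0.3) for _ in range(n_molecules)]
        means.append(statistics.fmean(sample))
    return statistics.stdev(means) / statistics.fmean(means)

# Fluctuations of the average shrink roughly as 1/sqrt(N):
small_system = mean_energy_fluctuation(10)     # noticeable fluctuations
large_system = mean_energy_fluctuation(1000)   # nearly compensated
```

For a system of thermodynamic size (N of order 10^23) the relative fluctuation becomes utterly negligible, which is why the averages can be treated as definite internal parameters.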

Distribution of the energy of a system among its parts in a state of equilibrium

Now we have come close to the definition of temperature, which is derived from several propositions, arising from experience, concerning the distribution of the energy of a system among its parts in a state of equilibrium. In addition to the definition of thermodynamic equilibrium formulated above, the following properties are postulated: transitivity; uniqueness of the distribution of energy among the parts of the system; and the fact that at thermodynamic equilibrium the energy of each part of the system increases with the growth of its total energy.

Transitivity

By transitivity we mean the following. Suppose we have a system consisting of three parts (1, 2 and 3) in certain states, and we have verified that the system consisting of parts 1 and 2 and the system consisting of parts 2 and 3 are each in a state of thermodynamic equilibrium. Then it can be asserted that the system consisting of parts 1 and 3 will also be in a state of thermodynamic equilibrium. It is assumed that in each case there are no adiabatic partitions between the parts (i.e., heat transfer is possible).
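Writing "~" for "is in thermodynamic equilibrium with" (a notation introduced here only for compactness), the transitivity postulate reads:

```latex
(1 \sim 2)\;\wedge\;(2 \sim 3)\;\Longrightarrow\;(1 \sim 3).
```

This is, in essence, the zeroth law of thermodynamics, and it is what makes a thermometer possible: the thermometer plays the role of the common part 2.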

Temperature concept

The energy of each part of the system is an internal parameter of the entire system; therefore, at equilibrium the energies of the parts, E_1 and E_2, are functions of the external parameters a_1, a_2, … relating to the entire system and of the energy E of the entire system:

E_1 = E_1(a_1, a_2, …, E),   E_2 = E_2(a_1, a_2, …, E).   (1.1)

Resolving these equations for E, we obtain

E = Φ_1(E_1, a_1, …) = Φ_2(E_2, a_2, …).   (1.2)

Thus, for each system there is a certain function of its external parameters and its energy that takes the same value for all systems in equilibrium with one another when they are brought into contact. This function is called temperature. Denoting the temperatures of systems 1 and 2 by t_1 = Φ_1(E_1, a_1, …) and t_2 = Φ_2(E_2, a_2, …) and setting

t_1 = t_2,   (1.3)

we emphasize once again that conditions (1.1) and (1.2) reduce to the requirement that the temperatures of the parts of the system be equal.

Physical meaning of the concept “temperature”

So far, this definition of temperature allows us to establish only the equality of temperatures; it does not yet give physical meaning to the statement that one temperature is greater than another. For that, the definition of temperature must be supplemented as follows.

The temperature of a body increases with an increase in its energy under constant external conditions. This is equivalent to the statement that when a body receives heat at constant external parameters, its temperature increases.

Such a refinement of the definition of temperature is possible only because experiment establishes the following properties of the equilibrium state of physical systems.

At equilibrium, exactly one definite distribution of the energy of the system among its parts is possible. As the total energy of the system increases (at constant external parameters), the energies of its parts increase as well.

From the uniqueness of the energy distribution it follows that an equation of the type E = Φ_1(E_1, a_1, …), where Φ_1 is the function of the energy and external parameters of part 1 obtained in (1.2), yields one definite value of E_1 for a given value of E (and given a_1, a_2, …), i.e., the equation has a single solution; hence Φ_1 is a monotonic function of E_1. The same conclusion applies to the corresponding function of any other part. Thus, from the simultaneous increase of the energies of the parts of the system it follows that the functions Φ_1, Φ_2, etc. are either all monotonically increasing or all monotonically decreasing in E_1, E_2, etc. We can therefore always choose the temperature function so that it increases with increasing energy.

Selecting a temperature scale and temperature meter

After the definition of temperature outlined above, the question comes down to the choice of a temperature scale and of a body that can serve as the temperature meter (the primary sensor). It should be emphasized that this definition of temperature is valid for any thermometer (for example, mercury or gas): the thermometer may be any body that is made part of the system whose temperature is to be measured. The thermometer exchanges heat with this system, while the external parameters that determine the state of the thermometer must be fixed. The value of some internal parameter of the thermometer is then measured at equilibrium of the entire system consisting of the thermometer and the medium whose temperature is to be measured. By the definition stated above, this internal parameter is a function of the energy of the thermometer (and of its external parameters, which are fixed; their setting is a matter of the thermometer's calibration). Thus, each measured value of the internal parameter of the thermometer corresponds to a definite energy and hence, by relation (1.3), to a definite temperature of the entire system.

Naturally, each thermometer has its own temperature scale. For a constant-volume gas thermometer, for example, the external parameter (the volume of the sensor) is fixed, and the measured internal parameter is the pressure. The measuring principle described applies only to thermometers that do not rely on irreversible processes. Instruments such as thermocouples and resistance thermometers are based on more complex methods which involve (this is important to note) heat exchange between the sensor and the environment (the hot and cold junctions of the thermocouple).
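For the constant-volume gas thermometer the scale is particularly transparent: assuming ideal-gas behavior, temperature follows from the ratio of the measured pressure to the pressure at a reference point (conventionally the triple point of water, 273.16 K). A minimal sketch with hypothetical readings:

```python
T_TRIPLE = 273.16  # K, triple point of water (scale-fixing reference)

def gas_thermometer_temperature(p_measured: float, p_at_triple: float) -> float:
    """Empirical temperature from a constant-volume (ideal) gas thermometer.

    At fixed volume the pressure is the measured internal parameter;
    temperature follows from the pressure ratio to the reference reading.
    """
    return T_TRIPLE * p_measured / p_at_triple

# Example (hypothetical readings, same units, e.g. kPa):
# a pressure 10% above the reference corresponds to 273.16 * 1.1 = 300.476 K.
t = gas_thermometer_temperature(110.0, 100.0)
```

The units of pressure cancel in the ratio, so only the reference temperature fixes the scale; real gas thermometry additionally extrapolates to zero gas density to remove non-ideal effects.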

Here we have a vivid example of how introducing a measuring device into an object (a system) changes the object itself to one degree or another. Moreover, the desire to increase measurement accuracy leads to greater energy expenditure on measurement and to an increase in the entropy of the environment. At the present level of technological development this circumstance can in a number of cases serve as an objective boundary between deterministic and stochastic methods of description. This shows even more clearly in, for example, flow measurement by the throttling method. The contradiction between the striving for an ever deeper knowledge of matter and the existing measurement methods manifests itself ever more sharply in elementary-particle physics, where, as physicists themselves admit, ever more cumbersome measuring instruments are used to penetrate the microworld: to detect neutrinos and certain other elementary particles, for example, huge "barrels" filled with special high-density substances are placed in deep caverns in mountains.

Limits of applicability of the concept of temperature

To conclude the discussion of the measurement problem, let us return to the limits of applicability of the concept of temperature that follow from the definition stated above, which relied on the energy of a system being the sum of the energies of its parts. We can therefore speak of a definite temperature of parts of a system (including the thermometer) only when the energies of those parts are additive. The entire argument leading to the introduction of the concept of temperature refers to thermodynamic equilibrium. For systems close to equilibrium, temperature can be regarded only as an approximate concept. For systems in states far from equilibrium, the concept of temperature loses its meaning altogether.

Temperature measurement using non-contact methods

And finally, a few words about measuring temperature by non-contact methods: total-radiation pyrometers, infrared pyrometers and color pyrometers. At first glance it seems that in this case the main paradox of the methodology of cognition (the influence of the measuring instrument on the measured object and the growth of the entropy of the environment due to measurement) is finally overcome. In fact, only a slight shift in the level of cognition and in the entropy level occurs; the fundamental statement of the problem remains.

Firstly, pyrometers of this type measure only the temperature of the surface of a body, or, more precisely, not even the temperature but the heat flux emitted by the surface.

Secondly, the sensors of these devices require a power supply to operate (and nowadays a connection to a computer), and the sensors themselves are quite complex and energy-intensive to manufacture.

Thirdly, if we set the task of estimating the temperature field inside the body from such surface measurements, we will need a mathematical model with distributed parameters connecting the measured temperature distribution over the surface with the spatial distribution of temperature inside the body. But to identify this model and verify its adequacy, an experiment will again be needed involving direct measurement of temperatures inside the body (for example, drilling a heated workpiece and pressing in thermocouples). Moreover, as follows from the rather strict formulation of the concept of temperature stated above, the result will be valid only when the object reaches a stationary state. In all other cases the temperature estimates obtained must be regarded as approximate to one degree or another, and methods must be available for assessing the degree of approximation.

Thus, in the case of using non-contact methods of temperature measurement, we ultimately come to the same problem, at best at a lower entropy level. As for metallurgical, and many other technological objects, the level of their observability (transparency) is quite low.

For example, by placing a large number of thermocouples over the entire surface of the heating furnace masonry, we will receive sufficient information about heat losses, but will not be able to heat the metal (Fig. 1.6).

Fig. 1.6. Energy losses when measuring temperature

Heat removal through the thermoelectrodes of the thermocouples can be so great that the temperature difference and the heat flow through the masonry exceed the useful heat flow from the torch to the metal. Most of the energy will thus be spent on heating the environment, that is, on increasing chaos in the universe.

An equally clear example of the same kind is the measurement of liquid and gas flow by the pressure-drop method across a throttling device, where the desire to increase measurement accuracy leads to the need to reduce the cross-section of the throttling device. A significant part of the kinetic energy intended for useful use is then spent on friction and turbulence (Fig. 1.7).
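The trade-off can be sketched with the standard orifice relation, in which the differential pressure grows as the inverse square of the bore area (the values and the discharge coefficient below are illustrative assumptions, not data from the text):

```python
RHO = 1000.0  # kg/m^3, water (assumed working fluid)
CD = 0.6      # discharge coefficient, a typical textbook value (assumption)

def orifice_dp(q: float, area: float) -> float:
    """Differential pressure (Pa) across an orifice passing volumetric flow q (m^3/s)."""
    return RHO / 2.0 * (q / (CD * area)) ** 2

q = 0.01                           # m^3/s
dp_wide = orifice_dp(q, 5e-4)      # larger bore: small, hard-to-read signal
dp_narrow = orifice_dp(q, 1e-4)    # bore area 5x smaller: 25x the signal...
# ...and a correspondingly larger irreversible (permanent) pressure loss.
```

A narrower throttle thus buys measurement sensitivity at the price of energy irreversibly dissipated in friction and turbulence, which is precisely the point of the example in the text.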

Fig. 1.7. Energy losses during flow measurement

By striving for too precise measurements, we transfer a significant amount of energy into chaos. We believe that these examples are quite convincing evidence in favor of the objective nature of randomness.

Objective and biased randomness

Recognizing the objective nature of causality and necessity, and at the same time the objective nature of chance, we can apparently interpret the latter as the result of the collision (combination) of a large number of necessary connections that are external to the given process.

Without forgetting the relative nature of randomness, it is very important to distinguish between truly objective randomness and “biased randomness,” i.e., caused by a lack of knowledge about the object or process being studied and relatively easily eliminated with a completely reasonable investment of time and money.

Although it is impossible to draw a sharp line between objective and biased randomness, the distinction is still fundamentally necessary, especially in connection with the "black box" approach that has spread in recent years. In this approach, according to W. Ashby, instead of studying each individual cause in connection with its individual effect, which is the classical element of scientific knowledge, all the causes and all the effects are mixed into a common mass and only the two aggregate results are connected; the details of the formation of cause-and-effect pairs are lost.

This approach, for all its apparent universality, is limited without combination with cause-and-effect analysis.

However, due to the fact that a number of probabilistic methods based on this approach have now been developed, many researchers prefer to use them, hoping to achieve their goal more quickly than with a sequential, analytical, cause-and-effect approach.

The use of a purely probabilistic approach without sufficient understanding of the results obtained, taking into account the physics of processes and the internal content of objects, leads to the fact that some researchers, wittingly or unwittingly, take the position of absolutizing randomness, since in this case all phenomena are considered random, even those whose cause-and-effect relationships can be disclosed with a relatively small investment of time and money.

The objective nature of chance certainly holds in the sense that cognition always proceeds from phenomenon to essence, from the external side of things to deep law-governed connections, and essence is inexhaustible. This inexhaustibility of essence determines the level of objective randomness, which is, of course, relative to given specific conditions.

Randomness is objective: complete disclosure of cause-and-effect relationships is impossible, if only because disclosing them requires information about the causes, that is, measurement; and, as L. Brillouin argues, measurement errors cannot be made "infinitesimal". They always remain finite, since the energy expended on reducing them grows, accompanied by an increase in entropy.
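Brillouin's argument can be summarized by his negentropy principle of information, stated here from general knowledge rather than from the text: every bit of information gained by measurement has an irreducible thermodynamic cost,

```latex
\Delta S \;\ge\; k \ln 2 \quad\text{(entropy)},\qquad
\Delta E \;\ge\; kT \ln 2 \quad\text{(energy dissipated at temperature } T\text{)},
```

where k is Boltzmann's constant. Driving the measurement error toward zero requires ever more bits of information, so the total entropy cost grows without bound, which is why the errors "always remain finite."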

In this regard, objective randomness should be understood only as that level of interweaving of cause-and-effect relationships, the disclosure of which, at a given level of knowledge about the process and technology development, is accompanied by exorbitant energy costs and becomes economically inexpedient.

To successfully build meaningful models, an optimal combination of macro- and micro-approaches is required, that is, functional methods and methods for revealing internal content.

With a functional approach, one abstracts from the specific mechanism for implementing internal causal relationships and considers only the behavior of the system, i.e. its reaction to disturbances of one kind or another.

However, the functional approach and, especially, its simplified version, the “black box” method, is not universal and is almost always combined with other methods.

The functional approach can be considered the first stage of the process of cognition. On first considering a system, the macro-approach is usually applied; one then moves to the micro level, identifying the "bricks" from which systems are built: penetrating the internal structure, dividing the complex system into simpler, elementary systems, and identifying their functions and their interactions with one another and with the system as a whole.

The functional approach does not exclude the cause-and-effect approach. On the contrary, it is with the right combination of these methods that the greatest effect is obtained.

Purpose of the lesson

Continue the discussion of wave diffraction, consider the problem of the limits of applicability of geometric optics, develop skills in the qualitative and quantitative description of the diffraction pattern, consider the practical applications of light diffraction.

This material is usually discussed briefly as part of the study of the topic “Diffraction of Light” due to lack of time. But, in our opinion, it must be considered for a deeper understanding of the phenomenon of diffraction, understanding that any theory describing physical processes has limits of applicability. Therefore, this lesson can be taught in basic classes instead of a problem-solving lesson, since the mathematical apparatus for solving problems on this topic is quite complex.

No. | Lesson stage | Time, min | Techniques and methods
1 | Organizational moment | 2 |
2 | Review of previously learned material | 6 | Frontal survey
3 | Explanation of new material on "Limits of applicability of geometric optics" | 15 | Lecture
4 | Reinforcement of the material using a computer model | 15 | Work at the computer with worksheets; the model "Diffraction limit of resolution"
5 | Analysis of the work done | 5 | Frontal conversation
6 | Explanation of homework | 2 |

Repetition of learned material

Conduct a frontal (whole-class) review of the questions on the topic “Diffraction of Light.”

Explanation of new material

Limits of applicability of geometric optics

All physical theories reflect processes occurring in nature approximately. For any theory, certain limits of its applicability can be indicated. Whether a given theory can be applied in a particular case or not depends not only on the accuracy that the theory provides, but also on what accuracy is required when solving a particular practical problem. The boundaries of a theory can only be established after a more general theory covering the same phenomena has been constructed.

All these general propositions apply to geometric optics. This theory is approximate: it is unable to explain the phenomena of interference and diffraction of light. A more general and more accurate theory is wave optics. The law of rectilinear propagation of light and the other laws of geometric optics are satisfied quite accurately only if the sizes of the obstacles in the path of the light are much greater than the wavelength of the light wave. But even then they are never satisfied absolutely exactly.

The operation of optical instruments is described by the laws of geometric optics. According to these laws, we should be able to distinguish arbitrarily small details of an object with a microscope, and with a telescope we should be able to establish the existence of two stars at arbitrarily small angular distances between them. In reality this is not the case, and only the wave theory of light makes it possible to understand the reason for the limit on the resolving power of optical instruments.

Resolution of microscope and telescope.

The wave nature of light limits the ability to distinguish details of an object or very small objects when observed with a microscope. Diffraction does not allow one to obtain clear images of small objects, since light does not travel strictly straight, but bends around objects. This causes images to appear blurry. This occurs when the linear dimensions of objects are comparable to the wavelength of light.

Diffraction also places a limit on the resolving power of a telescope. Due to wave diffraction, the image of a star will not be a point, but a system of light and dark rings. If two stars are at a small angular distance from each other, then these rings overlap each other and the eye is not able to distinguish whether there are two luminous points or one. The maximum angular distance between luminous points at which they can be distinguished is determined by the ratio of the wavelength to the diameter of the lens.
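The last sentence can be made quantitative with a minimal sketch. The factor 1.22 below comes from the Rayleigh criterion for a circular aperture; this factor is an assumption on our part, since the text itself only says the limit "is determined by the ratio of the wavelength to the diameter of the lens":

```python
def rayleigh_limit(wavelength_m: float, aperture_diameter_m: float) -> float:
    """Minimum resolvable angular separation, in radians, for a circular
    aperture (Rayleigh criterion): theta_min = 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_diameter_m

# Green light (550 nm) through a 100 mm telescope objective:
theta_min = rayleigh_limit(550e-9, 0.100)
print(f"{theta_min:.2e} rad")  # 6.71e-06 rad
```

Doubling the aperture diameter halves the minimum resolvable angle, while increasing the wavelength raises it, in line with the qualitative statements above.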

This example shows that diffraction always occurs on any obstacles. In very fine observations, it cannot be neglected even for obstacles much larger in size than the wavelength.

Diffraction of light determines the limits of applicability of geometric optics. The bending of light around obstacles places a limit on the resolution of the most important optical instruments - the telescope and microscope.

"Diffraction limit of resolution"

Worksheet for the lesson

Sample answers
"Diffraction of Light"

Last name, first name, class ________________________________________________

    Set the hole diameter to 2 cm and the angular distance between the light sources to 4.5·10⁻⁵ rad. By changing the wavelength, determine from what wavelength onward the image of the two light sources can no longer be distinguished, so that they are perceived as one.

    Answer: from approximately 720 nm and longer.

    How does the resolution limit of an optical device depend on the wavelength of the observed objects?

    Answer: the longer the wavelength, the lower the resolving power (the larger the minimum resolvable angular distance).

    Which double stars - blue or red - can we detect at greater distances with modern optical telescopes?

    Answer: blue.

    Set the minimum wavelength without changing the distance between the light sources. At what hole diameter will the image of two light sources be impossible to distinguish and will they be perceived as one?

    Answer: 1.0 cm or less.

    Repeat the experiment with the maximum wavelength.

    Answer: approximately 2 cm or less.

    How does the resolution limit of optical instruments depend on the diameter of the hole through which light passes?

    Answer: the smaller the hole diameter, the lower the resolving power (the larger the minimum resolvable angular distance).

    Which telescope - with a lens of larger or smaller diameter - will allow you to view two nearby stars?

    Answer: with a larger diameter lens.

    Find experimentally the minimum angular distance (in radians) at which the images of two light sources can still be distinguished in this computer model.

    Answer: 1.4·10⁻⁵ rad.

    Why can't we see molecules or atoms of a substance with an optical microscope?

    Answer: if the linear dimensions of the observed objects are comparable to the wavelength of light, diffraction will not allow a sharp image of them to be formed in the microscope, since light does not propagate strictly in straight lines but bends around the objects. This makes the images blurry.

    Give examples when it is necessary to take into account the diffraction nature of images.

    Answer: in all observations through a microscope or telescope when the sizes of the observed objects are comparable to the wavelength of light; with small entrance apertures of telescopes; and when observing, in the long-wavelength red range, objects located at small angular distances from each other.
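The numerical answers above can be cross-checked with the Rayleigh criterion θ_min = 1.22·λ/D. This is a sketch: the factor 1.22 and the visible-range endpoints 400 nm and 760 nm are assumptions, and the simulation evidently uses a slightly different numerical factor, which would explain 720 nm in the answer versus the ~740 nm this formula gives.

```python
RAYLEIGH = 1.22  # circular-aperture factor (assumed; the model may use another value)

def max_wavelength_m(theta_rad: float, diameter_m: float) -> float:
    """Longest wavelength still resolved at angular separation theta."""
    return theta_rad * diameter_m / RAYLEIGH

def min_diameter_m(theta_rad: float, wavelength_m: float) -> float:
    """Smallest aperture that still resolves angular separation theta."""
    return RAYLEIGH * wavelength_m / theta_rad

theta = 4.5e-5  # rad, as set in the worksheet
print(max_wavelength_m(theta, 0.02) * 1e9)  # ~738 nm  (worksheet: ~720 nm)
print(min_diameter_m(theta, 400e-9) * 100)  # ~1.08 cm (worksheet: 1.0 cm)
print(min_diameter_m(theta, 760e-9) * 100)  # ~2.06 cm (worksheet: ~2 cm)
```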

Victor Kuligin

Disclosure of the content and specification of concepts must rely on one or another specific model of the mutual connection of concepts. The model, objectively reflecting a certain aspect of that connection, has limits of applicability beyond which its use leads to false conclusions; but within those limits it must possess not only imagery, clarity, and specificity, but also heuristic value.

The variety of manifestations of cause-and-effect relationships in the material world has led to the existence of several models of cause-and-effect relationships. Historically, any model of these relationships can be reduced to one of two main types of models or a combination of them.

a) Models based on a temporal approach (evolutionary models). Here the main attention is focused on the temporal side of cause-and-effect relationships. One event, the “cause,” gives rise to another event, the “effect,” which lags behind the cause in time. Lag is the hallmark of the evolutionary approach. Cause and effect are interdependent. However, the reference to the generation of the effect by the cause (genesis), although legitimate, is introduced into the definition of the cause-and-effect relationship as if from outside. It captures the external side of this connection without deeply grasping its essence.

The evolutionary approach was developed by F. Bacon, J. Mill and others. The extreme polar point of the evolutionary approach was the position of Hume. Hume ignored genesis, denying the objective nature of causality, and reduced causality to the simple regularity of events.

b) Models based on the concept of “interaction” (structural or dialectical models). We will find out the meaning of the names later. The main focus here is on interaction as the source of cause-and-effect relationships. The interaction itself acts as a cause. Kant paid much attention to this approach, but the dialectical approach to causality acquired its clearest form in the works of Hegel. Of the modern Soviet philosophers, this approach was developed by G.A. Svechnikov, who sought to give a materialistic interpretation of one of the structural models of cause-and-effect relationships.

Existing and currently used models reveal the mechanism of cause-effect relationships in different ways, which leads to disagreements and creates the basis for philosophical discussions. The intensity of the discussion and the polar nature of the points of view indicate their relevance.

Let us highlight some of the issues being discussed.

a) The problem of simultaneity of cause and effect. This is the main problem. Are cause and effect simultaneous or separated by an interval of time? If cause and effect are simultaneous, then why does the cause give rise to the effect, and not vice versa? If cause and effect are not simultaneous, can there be a “pure” cause, i.e. a cause without an effect that has not yet occurred, and a “pure” effect, when the action of the cause has ended, but the effect is still ongoing? What happens in the interval between cause and effect, if they are separated in time, etc.?

b) The problem of unambiguity of cause-and-effect relationships. Does the same cause give rise to the same effect, or can one cause give rise to any effect from several potential ones? Can the same effect be generated by any of several causes?

c) The problem of the reverse influence of an effect on its cause.

d) The problem of connecting cause, occasion and conditions. Can, under certain circumstances, cause and condition change roles: the cause becomes a condition, and the condition becomes a cause? What is the objective relationship and distinctive features of cause, occasion and condition?

The solution to these problems depends on the chosen model, i.e. to a large extent, on what content will be included in the initial categories of “cause” and “effect”. The definitional nature of many difficulties is manifested, for example, in the fact that there is no single answer to the question of what should be understood by “cause”. Some researchers think of a cause as a material object, others as a phenomenon, others as a change in state, others as an interaction, etc.

Attempts to go beyond the model representation and give a general, universal definition of the cause-and-effect relationship do not lead to a solution to the problem. As an example, we can cite the following definition: “Causality is such a genetic connection of phenomena in which one phenomenon, called the cause, in the presence of certain conditions inevitably generates, causes, brings to life another phenomenon, called the effect.” This definition is formally valid for most models, but without relying on the model, it cannot solve the problems posed (for example, the problem of simultaneity) and therefore has limited theoretical-cognitive value.

When solving the problems mentioned above, most authors tend to proceed from the modern physical picture of the world and, as a rule, pay somewhat less attention to epistemology. Meanwhile, in our opinion, two problems are important here: the problem of removing elements of anthropomorphism from the concept of causality, and the problem of non-causal connections in natural science. The essence of the first problem is that causality, as an objective philosophical category, must have an objective character, independent of the cognizing subject and his activity. The essence of the second: should we recognize causal connections in natural science as universal and all-embracing, or should we consider that such connections are limited in nature, and that there exist connections of a non-causal type that deny causality and limit the applicability of the principle of causality? We believe that the principle of causality is universal and objective, and that its application knows no restrictions.

So, the two types of models, each objectively reflecting certain important aspects and features of cause-and-effect relationships, are to a certain extent in contradiction, since they solve the problems of simultaneity, unambiguity, etc. differently; but at the same time, since both objectively reflect certain aspects of cause-and-effect relationships, they must be mutually connected. Our first task is to identify this connection and to refine the models.

Limit of applicability of models

Let us try to establish the limit of applicability of evolutionary-type models. Causal chains that satisfy evolutionary models tend to have the property of transitivity. If event A is the cause of event B (B is a consequence of A), and if, in turn, event B is the cause of event C, then event A is the cause of event C: if A → B and B → C, then A → C. In this way the simplest cause-and-effect chains are formed. Event B may act as a cause in one case and as a consequence in another. This pattern was noted by F. Engels: “... cause and effect are representations that have meaning, as such, only when applied to a given individual case: but as soon as we consider this individual case in general connection with the entire world whole, these representations converge and intertwine in the representation of universal interaction, in which causes and effects constantly change places; what is a cause here or now becomes an effect there or then and vice versa” (vol. 20, p. 22).

The transitivity property allows for a detailed analysis of the causal chain: the finite chain is divided into simpler cause-and-effect links, A → B₁, B₁ → B₂, ..., Bₙ → C. But does a finite cause-and-effect chain have the property of infinite divisibility? Can the number of links N in a finite chain tend to infinity?
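The transitivity property can be illustrated with a toy sketch (the event labels are hypothetical; the closure loop simply applies "if A → B and B → C, then A → C" until nothing new appears):

```python
def transitive_closure(links):
    """Return the set of all causal pairs implied by transitivity."""
    closure = set(links)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# The chain A -> B1 -> B2 -> C implies, among others, A -> C:
chain = {("A", "B1"), ("B1", "B2"), ("B2", "C")}
print(("A", "C") in transitive_closure(chain))  # True
```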

Based on the law of the transition of quantitative changes into qualitative ones, it can be argued that in dividing a finite cause-and-effect chain we will eventually encounter such content of the individual links that further division becomes meaningless. Note that Hegel called infinite divisibility, which denies the law of the transition of quantitative changes into qualitative ones, “bad infinity.”

The transition of quantitative changes into qualitative ones occurs, for example, when dividing a piece of graphite. When molecules are separated until a monatomic gas is formed, the chemical composition does not change. Further division of a substance without changing its chemical composition is no longer possible, since the next stage is the splitting of carbon atoms. Here, from a physicochemical point of view, quantitative changes lead to qualitative ones.

The above statement by F. Engels clearly shows the idea that the basis of cause-and-effect relationships is not spontaneous expression of will, not the whim of chance and not the divine finger, but universal interaction. In nature there is no spontaneous emergence and destruction of movement, there are mutual transitions of one form of motion of matter to others, from one material object to another, and these transitions cannot occur otherwise than through the interaction of material objects. Such transitions, caused by interaction, give rise to new phenomena, changing the state of interacting objects.

Interaction is universal and forms the basis of causality. As Hegel rightly noted, “interaction is the causal relation posited in its full development.” F. Engels formulated this idea even more clearly: “Interaction is the first thing that appears before us when we consider moving matter as a whole from the point of view of modern natural science... Thus, natural science confirms that... interaction is the true causa finalis of things. We cannot go beyond the knowledge of this interaction precisely because behind it there is nothing more to know” (vol. 20, p. 546).

Since interaction is the basis of causality, let us consider the interaction of two material objects, the diagram of which is shown in Fig. 1. This example does not violate the generality of reasoning, since the interaction of several objects is reduced to paired interactions and can be considered in a similar way.

It is easy to see that during interaction both objects simultaneously influence each other (reciprocity of action). In this case, the state of each of the interacting objects changes. No interaction - no change of state. Therefore, a change in the state of any one of the interacting objects can be considered as a partial consequence of the cause - interaction. A change in the states of all objects in their totality will constitute a complete consequence.

It is obvious that such a cause-and-effect model of the elementary link of the evolutionary model belongs to the class of structural (dialectical) models. It should be emphasized that it does not reduce to the approach developed by G.A. Svechnikov, since by an effect G.A. Svechnikov, according to V.G. Ivanov, understood “... a change in one or all of the interacting objects, or a change in the character of the interaction itself, up to its collapse or transformation.” The change of states, by contrast, G.A. Svechnikov classified as a non-causal type of connection.

So, we have established that evolutionary models contain, as their elementary, primary link, a structural (dialectical) model based on interaction and the change of states. Somewhat later we will return to the analysis of the mutual connection of these models and to the study of the properties of the evolutionary model. Here we would like to note that, in full accordance with the point of view of F. Engels, the change of phenomena in evolutionary models reflecting objective reality occurs not through the simple regularity of events (as in D. Hume), but through the conditionality generated by interaction (genesis). Therefore, although references to generation (genesis) are introduced into the definition of cause-and-effect relationships in evolutionary models, they reflect the objective character of these relationships and are legitimate.

Fig. 2. Structural (dialectical) model of causality

Let's return to the structural model. In its structure and meaning, it perfectly agrees with the first law of dialectics - the law of unity and struggle of opposites, if interpreted:

– unity – as the existence of objects in their mutual connection (interaction);

– opposites – as mutually exclusive tendencies and characteristics of states caused by interaction;

– struggle – as interaction;

– development – ​​as a change in the state of each of the interacting material objects.

Therefore, a structural model based on interaction as a cause can also be called a dialectical model of causality. From the analogy of the structural model and the first law of dialectics, it follows that causality acts as a reflection of objective dialectical contradictions in nature itself, in contrast to the subjective dialectical contradictions that arise in the human mind. The structural model of causality is a reflection of the objective dialectics of nature.

Let us consider an example illustrating the application of the structural model of cause-and-effect relationships. Many examples that are explained with this model can be found in the natural sciences (physics, chemistry, etc.), since the concept of “interaction” is fundamental in natural science.

Let us take as an example an elastic collision of two balls: a moving ball A and a stationary ball B. Before the collision, the state of each ball was determined by a set of attributes Ca and Cb (momentum, kinetic energy, etc.). After the collision (interaction), the states of these balls changed. Let us denote the new states by C′a and C′b. The cause of the change of states (Ca → C′a and Cb → C′b) was the interaction of the balls (the collision); the consequence of this collision was the change in the state of each ball.
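A minimal numerical sketch of this example for the one-dimensional case (the masses and the initial speed are hypothetical illustration values; the standard elastic-collision formulas play the role of the state change Ca → C′a, Cb → C′b, produced by the single interaction):

```python
def elastic_collision_1d(m1, v1, m2, v2=0.0):
    """Final velocities of two balls after a 1-D elastic collision;
    both momentum and kinetic energy are conserved."""
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

# Ball A (1 kg, 2 m/s) strikes stationary ball B (1 kg):
va, vb = elastic_collision_1d(1.0, 2.0, 1.0)
print(va, vb)  # 0.0 2.0 -- one interaction changes both states at once
```

Note that the formulas are symmetric in the two balls: neither is an "active" or "passive" participant, which matches the structural reading of the collision.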

As already mentioned, the evolutionary model is of little use in this case, since we are dealing not with a causal chain but with an elementary cause-and-effect link whose structure cannot be reduced to the evolutionary model. To show this, let us illustrate the example with an explanation from the position of the evolutionary model: “Before the collision, ball B was at rest, so the cause of its movement is ball A, which struck it.” Here ball A is the cause, and the movement of ball B is the effect. But from the same position the following explanation can be given: “Before the collision, ball A was moving uniformly along a straight path. Had it not been for ball B, the character of ball A’s motion would not have changed.” Here the cause is already ball B, and the effect is the state of ball A. The above example shows:

a) a certain subjectivity that arises when applying the evolutionary model beyond the limits of its applicability: the cause can be either ball A or ball B; this situation is due to the fact that the evolutionary model picks out one particular branch of the consequence and is limited to its interpretation;

b) a typical epistemological error. In the above explanations from the position of the evolutionary model, one of the material objects of the same type acts as an “active” principle, and the other as a “passive” principle. It turns out that one of the balls is endowed (in comparison with the other) with “activity”, “will”, “desire”, like a person. Therefore, it is only thanks to this “will” that we have a causal relationship. Such an epistemological error is determined not only by the model of causality, but also by the imagery inherent in living human speech, and the typical psychological transfer of properties characteristic of complex causality (we will talk about it below) to a simple cause-and-effect link. And such errors are very typical when using an evolutionary model beyond the limits of its applicability. They appear in some definitions of causation. For example: “So, causation is defined as such an effect of one object on another, in which a change in the first object (cause) precedes a change in another object and in a necessary, unambiguous way gives rise to a change in another object (effect).” It is difficult to agree with such a definition, since it is not at all clear why, during interaction (mutual action!), objects should not be deformed simultaneously, but one after another? Which object should deform first and which should deform second (priority problem)?

Model qualities

Let us now consider what qualities the structural model of causality contains. Let us note the following among them: objectivity, universality, consistency, unambiguity.

The objectivity of causality is manifested in the fact that interaction acts as an objective cause in relation to which interacting objects are equal. There is no room for anthropomorphic interpretation here. Universality is due to the fact that the basis of causality is always interaction. Causality is universal, just as interaction itself is universal. Consistency is due to the fact that, although cause and effect (interaction and change of states) coincide in time, they reflect different aspects of the cause-and-effect relationship. Interaction presupposes a spatial connection of objects, a change in state - a connection between the states of each of the interacting objects in time.

In addition, the structural model establishes an unambiguous relationship in cause-and-effect connections, regardless of the method of mathematical description of the interaction. Moreover, the structural model, being objective and universal, imposes no restrictions on the character of interactions in natural science. Within the framework of this model, both instantaneous action at a distance (long- or short-range) and interactions propagating with any finite velocity are admissible. Introducing such a restriction into the definition of cause-and-effect relationships would be a typical metaphysical dogma: it would postulate once and for all the character of the interaction of any systems, imposing a natural-philosophical framework on physics and the other sciences from the side of philosophy, or else it would narrow the model's limits of applicability so much that its benefit would be very modest.

Here it would be appropriate to dwell on issues related to the finiteness of the speed of propagation of interactions. Let's look at an example. Let there be two stationary charges. If one of the charges begins to move with acceleration, then the electromagnetic wave will approach the second charge with a delay. Doesn't this example contradict the structural model and, in particular, the property of reciprocity of action, since with such interaction the charges are in an unequal position? No, it doesn't contradict. This example does not describe a simple interaction, but a complex causal chain in which three different links can be distinguished.

Due to the generality and breadth of its laws, physics has always influenced the development of philosophy and has itself been influenced by it. While discovering new achievements, physics did not abandon philosophical questions: about matter, about motion, about the objectivity of phenomena, about space and time, about causality and necessity in nature. The development of atomism led E. Rutherford to the discovery of the atomic nucleus and...
