Although Friedmann found only one, there are in fact three different kinds of models that obey Friedmann's two fundamental assumptions. In the first kind (which Friedmann found) the universe is expanding sufficiently slowly that the gravitational attraction between the different galaxies causes the expansion to slow down and eventually to stop. The galaxies then start to move toward each other and the universe contracts. Fig. 3.2 shows how the distance between two neighboring galaxies changes as time increases. It starts at zero, increases to a maximum, and then decreases to zero again. In the second kind of solution, the universe is expanding so rapidly that the gravitational attraction can never stop it, though it does slow it down a bit. Fig. 3.3 shows the separation between neighboring galaxies in this model. It starts at zero and eventually the galaxies are moving apart at a steady speed.
Finally, there is a third kind of solution, in which the universe is expanding only just fast enough to avoid recollapse. In this case the separation, shown in Fig. 3.4, also starts at zero and increases forever. However, the speed at which the galaxies are moving apart gets smaller and smaller, although it never quite reaches zero.
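The three behaviors can be illustrated with a small numerical sketch (added here, not part of the original text). For a matter-filled model, general relativity gives a deceleration of the separation proportional to one over the separation squared; whether the expansion reverses, only just continues, or settles to a steady speed depends solely on whether the initial speed is below, equal to, or above a critical value. The constant C, the starting values, and the units below are arbitrary illustrative choices.

```python
# Illustrative sketch: integrate the matter-dominated deceleration a'' = -C/(2 a^2)
# for three initial expansion speeds.  C, a0 and the time step are arbitrary choices.

def evolve(a0, v0, C=1.0, dt=1e-4, t_max=20.0):
    """Crude explicit integration; returns final (time, separation, speed)."""
    a, v, t = a0, v0, 0.0
    while t < t_max and a > 1e-2:        # stop at recollapse or after t_max
        v += -C / (2.0 * a * a) * dt     # gravitational deceleration
        a += v * dt
        t += dt
    return t, a, v

a0 = 0.1
v_crit = (1.0 / a0) ** 0.5               # critical speed: v0^2 = C / a0 (with C = 1)

for label, v0 in [("closed (recollapses)", 0.9 * v_crit),
                  ("critical (just escapes)", v_crit),
                  ("open (expands forever)", 1.1 * v_crit)]:
    t, a, v = evolve(a0, v0)
    print(f"{label:24s} final separation {a:7.3f}, final speed {v:6.2f} (t = {t:4.1f})")
```

The closed case returns to near-zero separation, the critical case keeps expanding with an ever-decreasing speed, and the open case settles to a steady outward speed, matching Figs. 3.2 to 3.4.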
A remarkable feature of the first kind of Friedmann model is that in it the universe is not infinite in space, but neither does space have any boundary. Gravity is so strong that space is bent round onto itself, making it rather like the surface of the earth. If one keeps traveling in a certain direction on the surface of the earth, one never comes up against an impassable barrier or falls over the edge, but eventually comes back to where one started.
In the first kind of Friedmann model, space is just like this, but with three dimensions instead of two for the earth's surface. The fourth dimension, time, is also finite in extent, but it is like a line with two ends or boundaries, a beginning and an end. We shall see later that when one combines general relativity with the uncertainty principle of quantum mechanics, it is possible for both space and time to be finite without any edges or boundaries.
The idea that one could go right round the universe and end up where one started makes good science fiction, but it doesn't have much practical significance, because it can be shown that the universe would recollapse to zero size before one could get round. You would need to travel faster than light in order to end up where you started before the universe came to an end – and that is not allowed!
In the first kind of Friedmann model, which expands and recollapses, space is bent in on itself, like the surface of the earth. It is therefore finite in extent. In the second kind of model, which expands forever, space is bent the other way, like the surface of a saddle. So in this case space is infinite. Finally, in the third kind of Friedmann model, with just the critical rate of expansion, space is flat (and therefore is also infinite).
But which Friedmann model describes our universe? Will the universe eventually stop expanding and start contracting, or will it expand forever? To answer this question we need to know the present rate of expansion of the universe and its present average density. If the density is less than a certain critical value, determined by the rate of expansion, the gravitational attraction will be too weak to halt the expansion. If the density is greater than the critical value, gravity will stop the expansion at some time in the future and cause the universe to recollapse.
We can determine the present rate of expansion by measuring the velocities at which other galaxies are moving away from us, using the Doppler effect. This can be done very accurately. However, the distances to the galaxies are not very well known because we can only measure them indirectly. So all we know is that the universe is expanding by between 5 percent and 10 percent every thousand million years. However, our uncertainty about the present average density of the universe is even greater. If we add up the masses of all the stars that we can see in our galaxy and other galaxies, the total is less than one hundredth of the amount required to halt the expansion of the universe, even for the lowest estimate of the rate of expansion. Our galaxy and other galaxies, however, must contain a large amount of “dark matter” that we cannot see directly, but which we know must be there because of the influence of its gravitational attraction on the orbits of stars in the galaxies. Moreover, most galaxies are found in clusters, and we can similarly infer the presence of yet more dark matter in between the galaxies in these clusters by its effect on the motion of the galaxies. When we add up all this dark matter, we still get only about one tenth of the amount required to halt the expansion. However, we cannot exclude the possibility that there might be some other form of matter, distributed almost uniformly throughout the universe, that we have not yet detected and that might still raise the average density of the universe up to the critical value needed to halt the expansion.
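As a rough check (added here, not in the text), the quoted 5 to 10 percent per thousand million years can be turned into an expansion rate in conventional units, and the critical value of the density then follows from the standard general-relativity result, the formula rho_c = 3H^2 / (8 pi G), which the book only alludes to. The conversion constants are the usual ones.

```python
import math

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
SEC_PER_GYR = 1e9 * 3.156e7   # seconds in a thousand million years
KM_PER_MPC = 3.086e19         # kilometres in a megaparsec

for fraction in (0.05, 0.10):                 # 5% and 10% per thousand million years
    H = fraction / SEC_PER_GYR                # expansion rate in s^-1
    H_kms_mpc = H * KM_PER_MPC                # the same rate in km/s per megaparsec
    rho_crit = 3 * H**2 / (8 * math.pi * G)   # critical density, kg/m^3
    print(f"{fraction:.0%} per thousand million years -> "
          f"H ~ {H_kms_mpc:5.1f} km/s per Mpc, critical density ~ {rho_crit:.1e} kg/m^3")
```

The result is a critical density of only a few hydrogen atoms per cubic metre on average, which is why even a tenth of it is hard to account for with the matter we can see.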
The present evidence therefore suggests that the universe will probably expand forever, but all we can really be sure of is that even if the universe is going to recollapse, it won't do so for at least another ten thousand million years, since it has already been expanding for at least that long. This should not unduly worry us: by that time, unless we have colonized beyond the Solar System, mankind will long since have died out, extinguished along with our sun!
All of the Friedmann solutions have the feature that at some time in the past (between ten and twenty thousand million years ago) the distance between neighboring galaxies must have been zero. At that time, which we call the big bang, the density of the universe and the curvature of space-time would have been infinite. Because mathematics cannot really handle infinite numbers, this means that the general theory of relativity (on which Friedmann's solutions are based) predicts that there is a point in the universe where the theory itself breaks down. Such a point is an example of what mathematicians call a singularity.
In fact, all our theories of science are formulated on the assumption that space-time is smooth and nearly flat, so they break down at the big bang singularity, where the curvature of space-time is infinite. This means that even if there were events before the big bang, one could not use them to determine what would happen afterward, because predictability would break down at the big bang.
Correspondingly, if, as is the case, we know only what has happened since the big bang, we could not determine what happened beforehand. As far as we are concerned, events before the big bang can have no consequences, so they should not form part of a scientific model of the universe. We should therefore cut them out of the model and say that time had a beginning at the big bang.
Many people do not like the idea that time has a beginning, probably because it smacks of divine intervention. (The Catholic Church, on the other hand, seized on the big bang model and in 1951 officially pronounced it to be in accordance with the Bible.) There were therefore a number of attempts to avoid the conclusion that there had been a big bang. The proposal that gained widest support was called the steady state theory. It was suggested in 1948 by two refugees from Nazi-occupied Austria, Hermann Bondi and Thomas Gold, together with a Briton, Fred Hoyle, who had worked with them on the development of radar during the war. The idea was that as the galaxies moved away from each other, new galaxies were continually forming in the gaps in between, from new matter that was being continually created. The universe would therefore look roughly the same at all times as well as at all points of space. The steady state theory required a modification of general relativity to allow for the continual creation of matter, but the rate that was involved was so low (about one particle per cubic kilometer per year) that it was not in conflict with experiment. The theory was a good scientific theory, in the sense described in Chapter 1: it was simple and it made definite predictions that could be tested by observation. One of these predictions was that the number of galaxies or similar objects in any given volume of space should be the same wherever and whenever we look in the universe. In the late 1950s and early 1960s a survey of sources of radio waves from outer space was carried out at Cambridge by a group of astronomers led by Martin Ryle (who had also worked with Bondi, Gold, and Hoyle on radar during the war). The Cambridge group showed that most of these radio sources must lie outside our galaxy (indeed many of them could be identified with other galaxies) and also that there were many more weak sources than strong ones.
They interpreted the weak sources as being the more distant ones, and the stronger ones as being nearer. Then there appeared to be fewer sources per unit volume of space for the nearby sources than for the distant ones. This could mean that we are at the center of a great region in the universe in which the sources are fewer than elsewhere.
Alternatively, it could mean that the sources were more numerous in the past, at the time that the radio waves left on their journey to us, than they are now. Either explanation contradicted the predictions of the steady state theory.
Moreover, the discovery of the microwave radiation by Penzias and Wilson in 1965 also indicated that the universe must have been much denser in the past. The steady state theory therefore had to be abandoned.
Another attempt to avoid the conclusion that there must have been a big bang, and therefore a beginning of time, was made by two Russian scientists, Evgenii Lifshitz and Isaac Khalatnikov, in 1963. They suggested that the big bang might be a peculiarity of Friedmann's models alone, which after all were only approximations to the real universe. Perhaps, of all the models that were roughly like the real universe, only Friedmann's would contain a big bang singularity. In Friedmann's models, the galaxies are all moving directly away from each other – so it is not surprising that at some time in the past they were all at the same place. In the real universe, however, the galaxies are not just moving directly away from each other – they also have small sideways velocities. So in reality they need never have been all at exactly the same place, only very close together. Perhaps then the current expanding universe resulted not from a big bang singularity, but from an earlier contracting phase; as the universe had collapsed the particles in it might not have all collided, but had flown past and then away from each other, producing the present expansion of the universe. What Lifshitz and Khalatnikov did was to study models of the universe that were roughly like Friedmann's models but took account of the irregularities and random velocities of galaxies in the real universe. They showed that such models could start with a big bang, even though the galaxies were no longer always moving directly away from each other, but they claimed that this was still only possible in certain exceptional models in which the galaxies were all moving in just the right way. They argued that since there seemed to be infinitely more Friedmann-like models without a big bang singularity than there were with one, we should conclude that there had not in reality been a big bang. They later realized, however, that there was a much more general class of Friedmann-like models that did have singularities, and in which the galaxies did not have to be moving in any special way. They therefore withdrew their claim in 1970.
The work of Lifshitz and Khalatnikov was valuable because it showed that the universe could have had a singularity, a big bang, if the general theory of relativity was correct. However, it did not resolve the crucial question: Does general relativity predict that our universe should have had a big bang, a beginning of time? The answer to this came out of a completely different approach introduced by a British mathematician and physicist, Roger Penrose, in 1965. Using the way light cones behave in general relativity, together with the fact that gravity is always attractive, he showed that a star collapsing under its own gravity is trapped in a region whose surface eventually shrinks to zero size. And, since the surface of the region shrinks to zero, so too must its volume. All the matter in the star will be compressed into a region of zero volume, so the density of matter and the curvature of space-time become infinite. In other words, one has a singularity contained within a region of space-time known as a black hole.
At first sight, Penrose's result applied only to stars; it didn't have anything to say about the question of whether the entire universe had a big bang singularity in its past. However, at the time that Penrose produced his theorem, I was a research student desperately looking for a problem with which to complete my Ph.D. thesis. Two years before, I had been diagnosed as suffering from ALS, commonly known as Lou Gehrig's disease, or motor neuron disease, and given to understand that I had only one or two more years to live. In these circumstances there had not seemed much point in working on my Ph.D. – I did not expect to survive that long.
Yet two years had gone by and I was not that much worse.
In fact, things were going rather well for me and I had gotten engaged to a very nice girl, Jane Wilde. But in order to get married, I needed a job, and in order to get a job, I needed a Ph.D.
In 1965 I read about Penrose's theorem that any body undergoing gravitational collapse must eventually form a singularity. I soon realized that if one reversed the direction of time in Penrose's theorem, so that the collapse became an expansion, the conditions of his theorem would still hold, provided the universe were roughly like a Friedmann model on large scales at the present time. Penrose's theorem had shown that any collapsing star must end in a singularity; the time-reversed argument showed that any Friedmann-like expanding universe must have begun with a singularity. For technical reasons, Penrose's theorem required that the universe be infinite in space. So I could, in fact, use it to prove that there should be a singularity only if the universe was expanding fast enough to avoid collapsing again (since only those Friedmann models were infinite in space).
During the next few years I developed new mathematical techniques to remove this and other technical conditions from the theorems that proved that singularities must occur. The final result was a joint paper by Penrose and myself in 1970, which at last proved that there must have been a big bang singularity provided only that general relativity is correct and the universe contains as much matter as we observe. There was a lot of opposition to our work, partly from the Russians because of their Marxist belief in scientific determinism, and partly from people who felt that the whole idea of singularities was repugnant and spoiled the beauty of Einstein's theory.
However, one cannot really argue with a mathematical theorem.
So in the end our work became generally accepted and nowadays nearly everyone assumes that the universe started with a big bang singularity. It is perhaps ironic that, having changed my mind, I am now trying to convince other physicists that there was in fact no singularity at the beginning of the universe – as we shall see later, it can disappear once quantum effects are taken into account.
We have seen in this chapter how, in less than half a century, man's view of the universe formed over millennia has been transformed. Hubble's discovery that the universe was expanding, and the realization of the insignificance of our own planet in the vastness of the universe, were just the starting point. As experimental and theoretical evidence mounted, it became more and more clear that the universe must have had a beginning in time, until in 1970 this was finally proved by Penrose and myself, on the basis of Einstein's general theory of relativity. That proof showed that general relativity is only an incomplete theory: it cannot tell us how the universe started off, because it predicts that all physical theories, including itself, break down at the beginning of the universe. However, general relativity claims to be only a partial theory, so what the singularity theorems really show is that there must have been a time in the very early universe when the universe was so small that one could no longer ignore the small-scale effects of the other great partial theory of the twentieth century, quantum mechanics. At the start of the 1970s, then, we were forced to turn our search for an understanding of the universe from our theory of the extraordinarily vast to our theory of the extraordinarily tiny. That theory, quantum mechanics, will be described next, before we turn to the efforts to combine the two partial theories into a single quantum theory of gravity.
CHAPTER 4 THE UNCERTAINTY PRINCIPLE
The success of scientific theories, particularly Newton's theory of gravity, led the French scientist the Marquis de Laplace at the beginning of the nineteenth century to argue that the universe was completely deterministic. Laplace suggested that there should be a set of scientific laws that would allow us to predict everything that would happen in the universe, if only we knew the complete state of the universe at one time. For example, if we knew the positions and speeds of the sun and the planets at one time, then we could use Newton's laws to calculate the state of the Solar System at any other time.
Determinism seems fairly obvious in this case, but Laplace went further to assume that there were similar laws governing everything else, including human behavior.
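Laplace's point can be sketched in a few lines of code (an illustration added here, not from the text): feed in the positions and speeds "now", and Newton's laws fix the state at any later time. The numbers below are rough Sun–Earth values; a genuine ephemeris calculation would be far more careful.

```python
# Minimal sketch of Newtonian determinism: given one body's position and speed
# around the Sun "now", its state a year from now follows from Newton's laws.

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # mass of the sun, kg

# State "now": Earth-like body at 1 AU moving at ~29.8 km/s across the Sun line.
x, y = 1.496e11, 0.0     # position, m
vx, vy = 0.0, 2.978e4    # velocity, m/s

dt = 3600.0              # one-hour time step
for step in range(int(365.25 * 24)):          # integrate one year ahead
    r = (x * x + y * y) ** 0.5
    ax, ay = -G * M_SUN * x / r**3, -G * M_SUN * y / r**3
    vx, vy = vx + ax * dt, vy + ay * dt       # simple semi-implicit Euler step
    x, y = x + vx * dt, y + vy * dt

print(f"after one year: x = {x:.3e} m, y = {y:.3e} m")
# The body comes back close to its starting point: the future state was already
# implicit in the initial conditions, which is exactly Laplace's claim.
```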
The doctrine of scientific determinism was strongly resisted by many people, who felt that it infringed God's freedom to intervene in the world, but it remained the standard assumption of science until the early years of this century. One of the first indications that this belief would have to be abandoned came when calculations by the British scientists Lord Rayleigh and Sir James Jeans suggested that a hot object, or body, such as a star, must radiate energy at an infinite rate. According to the laws we believed at the time, a hot body ought to give off electromagnetic waves (such as radio waves, visible light, or X rays) equally at all frequencies. For example, a hot body should radiate the same amount of energy in waves with frequencies between one and two million million waves a second as in waves with frequencies between two and three million million waves a second. Now since the number of waves a second is unlimited, this would mean that the total energy radiated would be infinite.
In order to avoid this obviously ridiculous result, the German scientist Max Planck suggested in 1900 that light, X rays, and other waves could not be emitted at an arbitrary rate, but only in certain packets that he called quanta. Moreover, each quantum had a certain amount of energy that was greater the higher the frequency of the waves, so at a high enough frequency the emission of a single quantum would require more energy than was available. Thus the radiation at high frequencies would be reduced, and so the rate at which the body lost energy would be finite.
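A sketch added here (not from the text) makes the contrast quantitative. The classical Rayleigh–Jeans formula in fact assigns more energy to higher-frequency bands rather than equal amounts, but either way the total diverges; Planck's formula, with its packets of energy E = h times the frequency, cuts the spectrum off once a single quantum costs more than the thermal energy available (roughly kT), so the total is finite. The constants and formulas are the standard ones; the temperature is an arbitrary star-like value.

```python
# Compare the classical Rayleigh-Jeans spectrum (divergent total) with Planck's
# formula (finite total) by crude numerical integration up to a frequency cutoff.
import math

h = 6.626e-34      # Planck's constant, J s
k = 1.381e-23      # Boltzmann's constant, J/K
c = 3.0e8          # speed of light, m/s
T = 6000.0         # roughly the surface temperature of a star like the sun, K

def rayleigh_jeans(nu):
    return 8 * math.pi * nu**2 * k * T / c**3

def planck(nu):
    return (8 * math.pi * h * nu**3 / c**3) / math.expm1(h * nu / (k * T))

def total_below(spectrum, nu_max, steps=100_000):
    """Energy per unit volume in waves below nu_max (midpoint rule)."""
    dnu = nu_max / steps
    return sum(spectrum((i + 0.5) * dnu) for i in range(steps)) * dnu

for nu_max in (1e14, 1e15, 1e16):
    print(f"cutoff {nu_max:.0e} Hz:  classical {total_below(rayleigh_jeans, nu_max):.3e}"
          f"   Planck {total_below(planck, nu_max):.3e}   (J/m^3)")
# The classical total keeps growing as the cutoff is raised; the Planck total
# settles down to a finite value.
```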
The quantum hypothesis explained the observed rate of emission of radiation from hot bodies very well, but its implications for determinism were not realized until 1926, when another German scientist, Werner Heisenberg, formulated his famous uncertainty principle. In order to predict the future position and velocity of a particle, one has to be able to measure its present position and velocity accurately. The obvious way to do this is to shine light on the particle. Some of the waves of light will be scattered by the particle and this will indicate its position. However, one will not be able to determine the position of the particle more accurately than the distance between the wave crests of light, so one needs to use light of a short wavelength in order to measure the position of the particle precisely. Now, by Planck's quantum hypothesis, one cannot use an arbitrarily small amount of light; one has to use at least one quantum. This quantum will disturb the particle and change its velocity in a way that cannot be predicted.
Moreover, the more accurately one measures the position, the shorter the wavelength of the light that one needs and hence the higher the energy of a single quantum. So the velocity of the particle will be disturbed by a larger amount. In other words, the more accurately you try to measure the position of the particle, the less accurately you can measure its speed, and vice versa. Heisenberg showed that the uncertainty in the position of the particle times the uncertainty in its velocity times the mass of the particle can never be smaller than a certain quantity, which is known as Planck's constant. Moreover, this limit does not depend on the way in which one tries to measure the position or velocity of the particle, or on the type of particle: Heisenberg's uncertainty principle is a fundamental, inescapable property of the world.
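A sketch of what the bound means in practice (an added illustration; the precise modern statement is that the position uncertainty times the momentum uncertainty is at least h-bar over two, but the order of magnitude is what matters here):

```python
# Order-of-magnitude content of the uncertainty principle as stated in the text:
# (uncertainty in position) x (uncertainty in velocity) x (mass) >= roughly h.

h = 6.626e-34            # Planck's constant, J s
m_electron = 9.109e-31   # electron mass, kg

delta_x = 1e-10          # confine an electron to roughly the size of an atom, m
delta_v = h / (m_electron * delta_x)   # minimum velocity uncertainty implied

print(f"velocity uncertainty for an electron confined to {delta_x:.0e} m: "
      f"~{delta_v:.1e} m/s")
# ~7e6 m/s -- a few percent of the speed of light, which is why electrons in
# atoms cannot be pictured as having definite orbits.
```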
The uncertainty principle had profound implications for the way in which we view the world. Even after more than seventy years they have not been fully appreciated by many philosophers, and are still the subject of much controversy. The uncertainty principle signaled an end to Laplace's dream of a theory of science, a model of the universe that would be completely deterministic: one certainly cannot predict future events exactly if one cannot even measure the present state of the universe precisely! We could still imagine that there is a set of laws that determine events completely for some supernatural being, who could observe the present state of the universe without disturbing it. However, such models of the universe are not of much interest to us ordinary mortals. It seems better to employ the principle of economy known as Occam's razor and cut out all the features of the theory that cannot be observed.
This approach led Heisenberg, Erwin Schrödinger, and Paul Dirac in the 1920s to reformulate mechanics into a new theory called quantum mechanics, based on the uncertainty principle.
In this theory particles no longer had separate, well-defined positions and velocities that could not be observed. Instead, they had a quantum state, which was a combination of position and velocity.
In general, quantum mechanics does not predict a single definite result for an observation. Instead, it predicts a number of different possible outcomes and tells us how likely each of these is. That is to say, if one made the same measurement on a large number of similar systems, each of which started off in the same way, one would find that the result of the measurement would be A in a certain number of cases, B in a different number, and so on. One could predict the approximate number of times that the result would be A or B, but one could not predict the specific result of an individual measurement. Quantum mechanics therefore introduces an unavoidable element of unpredictability or randomness into science. Einstein objected to this very strongly, despite the important role he had played in the development of these ideas.
Einstein was awarded the Nobel Prize for his contribution to quantum theory. Nevertheless, Einstein never accepted that the universe was governed by chance; his feelings were summed up in his famous statement “God does not play dice.” Most other scientists, however, were willing to accept quantum mechanics because it agreed perfectly with experiment. Indeed, it has been an outstandingly successful theory and underlies nearly all of modern science and technology. It governs the behavior of transistors and integrated circuits, which are the essential components of electronic devices such as televisions and computers, and is also the basis of modern chemistry and biology. The only areas of physical science into which quantum mechanics has not yet been properly incorporated are gravity and the large-scale structure of the universe.
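A toy illustration of the statistical kind of prediction described above (added here; it is ordinary random sampling, not a quantum calculation, and the probabilities 0.7 and 0.3 are arbitrary):

```python
# If a measurement has outcomes A and B with predicted probabilities 0.7 and 0.3,
# repeating it on many identically prepared systems lets us predict the
# approximate counts, but never the result of any single run.
import random

random.seed(0)
p_A = 0.7
for n_systems in (100, 10_000, 1_000_000):
    count_A = sum(random.random() < p_A for _ in range(n_systems))
    print(f"{n_systems:>9} measurements: fraction giving A = {count_A / n_systems:.4f}")
# The observed fraction settles toward the predicted 0.7, while each individual
# outcome remains unpredictable.
```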
Although light is made up of waves, Planck's quantum hypothesis tells us that in some ways it behaves as if it were composed of particles: it can be emitted or absorbed only in packets, or quanta. Equally, Heisenberg's uncertainty principle implies that particles behave in some respects like waves: they do not have a definite position but are “smeared out” with a certain probability distribution. The theory of quantum mechanics is based on an entirely new type of mathematics that no longer describes the real world in terms of particles and waves; it is only the observations of the world that may be described in those terms. There is thus a duality between waves and particles in quantum mechanics: for some purposes it is helpful to think of particles as waves and for other purposes it is better to think of waves as particles. An important consequence of this is that one can observe what is called interference between two sets of waves or particles. That is to say, the crests of one set of waves may coincide with the troughs of the other set. The two sets of waves then cancel each other out rather than adding up to a stronger wave as one might expect (Fig. 4.1). A familiar example of interference in the case of light is the colors that are often seen in soap bubbles. These are caused by reflection of light from the two sides of the thin film of water forming the bubble. White light consists of light waves of all different wavelengths, or colors. For certain wavelengths the crests of the waves reflected from one side of the soap film coincide with the troughs reflected from the other side. The colors corresponding to these wavelengths are absent from the reflected light, which therefore appears to be colored.
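The soap-film example can be made slightly more concrete with a simplified sketch (added here; it assumes light hitting the film nearly head-on, a water-like refractive index, and the standard half-wave phase flip at the outer surface, details the text does not go into):

```python
# Which visible wavelengths are cancelled by a thin soap film?  With the extra
# path 2*n*t for the wave reflected from the far side, and the half-wave flip at
# the front surface, the two reflections cancel when 2*n*t = m * wavelength.

n = 1.33            # refractive index of the soapy water film (assumed)
t = 500e-9          # film thickness in metres (illustrative value)

print(f"wavelengths removed from light reflected by a {t*1e9:.0f} nm film:")
m = 1
while True:
    wavelength = 2 * n * t / m          # cancellation condition: 2*n*t = m*lambda
    if wavelength < 380e-9:             # below the visible range, stop
        break
    if wavelength <= 750e-9:            # report only visible wavelengths
        print(f"  m = {m}:  {wavelength*1e9:.0f} nm")
    m += 1
# With those colors missing, the reflected light looks colored rather than white.
```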
Interference can also occur for particles, because of the duality introduced by quantum mechanics. A famous example is the so-called two-slit experiment (Fig. 4.2). Consider a partition with two narrow parallel slits in it. On one side of the partition one places a source of light of a particular color (that is, of a particular wavelength). Most of the light will hit the partition, but a small amount will go through the slits. Now suppose one places a screen on the far side of the partition from the light.
Any point on the screen will receive waves from the two slits.
However, in general, the distance the light has to travel from the source to the screen via the two slits will be different. This will mean that the waves from the slits will not be in phase with each other when they arrive at the screen: in some places the waves will cancel each other out, and in others they will reinforce each other. The result is a characteristic pattern of light and dark fringes.
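The fringe pattern itself follows from the path difference, as a short added sketch shows (the standard small-angle formula, not derived in the text; the slit spacing, wavelength, and screen distance are illustrative values):

```python
# Brightness across the screen in the two-slit experiment: at a distance x from
# the centre, the two paths differ by roughly d*x/L, and the brightness varies
# as cos^2(pi * d * x / (L * lambda)).
import math

lam = 500e-9       # green light, m
d = 0.1e-3         # slit separation, m
L = 1.0            # distance from slits to screen, m

print("position on screen (mm)   relative brightness")
for i in range(13):
    x = (i - 6) * 1.25e-3                          # scan -7.5 mm .. +7.5 mm
    brightness = math.cos(math.pi * d * x / (L * lam)) ** 2
    bar = "#" * int(round(20 * brightness))
    print(f"{x*1e3:8.2f}                  {brightness:4.2f}  {bar}")
# Bright and dark fringes alternate every L*lam/d = 5 mm in this example.
```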
The remarkable thing is that one gets exactly the same kind of fringes if one replaces the source of light by a source of particles such as electrons with a definite speed (this means that the corresponding waves have a definite wavelength). It seems the more peculiar because if one only has one slit, one does not get any fringes, just a uniform distribution of electrons across the screen. One might therefore think that opening another slit would just increase the number of electrons hitting each point of the screen, but, because of interference, it actually decreases it in some places. If electrons are sent through the slits one at a time, one would expect each to pass through one slit or the other, and so behave just as if the slit it passed through were the only one there – giving a uniform distribution on the screen. In reality, however, even when the electrons are sent one at a time, the fringes still appear. Each electron, therefore, must be passing through both slits at the same time!
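The link between "a definite speed" and "a definite wavelength" is the de Broglie relation, wavelength = h divided by mass times speed, which the text does not name; a quick added calculation shows the scale involved (the electron speed is an arbitrary illustrative value):

```python
# De Broglie wavelength of an electron: lambda = h / (m * v).

h = 6.626e-34            # Planck's constant, J s
m_electron = 9.109e-31   # electron mass, kg

v = 1.0e6                # electrons moving at a million metres per second
wavelength = h / (m_electron * v)
print(f"electron speed {v:.1e} m/s  ->  wavelength {wavelength*1e9:.2f} nm")
# About 0.7 nm -- comparable to atomic spacings, which is why electron
# interference needs very narrow, closely spaced slits (or crystal lattices).
```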